Ubuntu Server Project #9: SFTP, Fake DNS, and Apache SSL

Things are moving along with the Ubuntu Server Project, but there are a bunch of small tasks and configuration changes that will make life easier going forward. This article will cover installing vsftpd, setting up a self-signed SSL certificate in Apache, and configuring my local Mac to access the Ubuntu server virtual machine by name. Even though the server is a VM sitting on my Mac and not accessible from the Internet, I'll still be treating it as if it were on the Internet and needed to be secure.


The installation of vsftpd is simple using aptitude:

sudo aptitude install vsftpd

Since I don’t plan to use regular ftp, just sftp, I don’t have to make any changes to the iptables firewall settings. SSH connections are already allowed through (sftp runs over SSH), and I still want to block regular ftp connections. I also want to limit connections to just the users that are set up on the server.
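For reference, the one firewall rule that matters here is the existing SSH rule, shown below in iptables-save style. This is a sketch: port 22222 matches the non-standard SSH port set up in an earlier article, and the rest of the ruleset (with a default DROP policy) is assumed.

```
-A INPUT -p tcp --dport 22222 -j ACCEPT
# No ACCEPT rule exists for ports 20/21, so regular ftp connections
# are refused by the chain's default policy -- nothing to change.
```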

I fire up Transmit (my FTP client) and set up a connection with the following settings:

Server:  (the IP address of the VM)

User Name/Password: I leave these fields blank because I’m using SSH key authentication from this Mac

Port: 22222  (the SSH port I configured)

Protocol: SFTP

I don’t set up a default remote path; when I connect, it defaults to the home directory of my Ubuntu ID. I try a regular ftp connection and, as expected, it fails due to the firewall. Even though I’m good to go, I’m going to go through the vsftpd configuration file and make some changes as if this were a live server. I load the file into the nano editor (sudo nano /etc/vsftpd.conf) and scroll through it. It’s well commented, although it doesn’t contain all the configuration options.

I turn off anonymous ftp by changing the anonymous_enable line to anonymous_enable=NO.

At the end of the file I add ssl_enable=YES to explicitly turn on SSL. Even though they are documented as the default settings I also add force_local_data_ssl=YES and force_local_logins_ssl=YES to the end of the file in order to force all logons and connections to use SSL. You can view the complete vsftpd file here (obsolete file removed).
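Taken together, the changes to /etc/vsftpd.conf amount to just a few lines (these option names are documented in the vsftpd.conf man page):

```
# Disable anonymous ftp
anonymous_enable=NO
# Explicitly enable SSL, and force it for both logins and data transfers
ssl_enable=YES
force_local_data_ssl=YES
force_local_logins_ssl=YES
```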

Editing the Mac OS X Hosts File

Apple has a support article for editing a hosts file which you can refer to if you’re using a version of OS X prior to 10.2. For my purposes I’ve decided to use a dev subdomain for the sites on my virtual server, so the website on my VM will be dev.osquest.com. I’ll add this to the local hosts file on my Mac so that it will resolve to my Ubuntu VM. I also add a fictitious domain just so I can test Apache with multiple domains; I’ll use fakedomain.ray as this domain. Because I’ll be resolving this name locally, the fact that it’s an invalid domain extension isn’t a problem.

I start terminal on my Mac and load the hosts file into the nano editor, using admin privileges:

sudo nano /etc/hosts

I want the domains to be dev.osquest.com and fakedomain.ray. Using the IP address of my VM, I add the following lines at the end of the hosts file:

(the IP address of the VM)     dev.osquest.com
(the IP address of the VM)     fakedomain.ray     www.fakedomain.ray

I add the www entry for fakedomain.ray so I can test both methods of addressing a domain.
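If you’d rather script the edit than use nano, the same entries can be appended from terminal. In this sketch I write to a temp copy rather than /etc/hosts, and 192.168.56.10 is only a stand-in for the VM’s real address:

```shell
# Append the dev entries to a hosts file. On the Mac itself this would
# be /etc/hosts, edited with sudo; 192.168.56.10 is a placeholder IP.
HOSTS=$(mktemp)
cat >> "$HOSTS" <<'EOF'
192.168.56.10     dev.osquest.com
192.168.56.10     fakedomain.ray     www.fakedomain.ray
EOF
# Show what was added
grep 'fakedomain' "$HOSTS"
```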

Once I save the file I can ping the server by name from terminal:


If a site was already set up in Apache, or the default site was enabled, I could access it through the browser. It might be necessary to clear the Mac’s DNS cache if you make multiple changes. Run dscacheutil -flushcache from terminal to clear the cache in Leopard, or lookupd -flushcache to clear the cache in Tiger. I can still access my production website from my Mac because only the dev subdomain is directed to the VM by my hosts file.

Self-Signed SSL Certificate

Because this is only a test server I’m going to set it up with a self-signed SSL certificate. With earlier versions of Ubuntu a self-signed certificate could easily be created by running sudo apache2-ssl-certificate. That script is no longer part of Ubuntu (it was dropped by Debian), so I had to use a workaround. Since I already installed SSH, the tools needed to generate a self-signed certificate are already on the server.

I’ll use make-ssl-cert to generate the certificate. By default the certificate is only good for a month, but I don’t want to generate a new certificate every month. A ten-year certificate for testing should do nicely (well, almost ten years; I’ll ignore the days added by leap years). I’ll need to edit make-ssl-cert, so I load it into nano.

sudo nano /usr/sbin/make-ssl-cert

Scroll to line 118 (at least in my copy of the file), or search for openssl req, until you see the line:

openssl req -config $TMPFILE -new -x509 -nodes -out $output -keyout $output > /dev/null 2>&1

Change it to:

openssl req -config $TMPFILE -new -x509 -nodes -out $output -keyout $output -days 3650 > /dev/null 2>&1

Note the added -days 3650 parameter, which creates a ten-year certificate. Once the modified file is saved I can create the certificate.
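As an alternative sketch, you can skip editing make-ssl-cert and call openssl directly. The -days 3650 flag gives the same ten-year lifetime; the subject line and output filenames here are examples of mine, not the values make-ssl-cert would use:

```shell
# Generate a ten-year self-signed certificate directly with openssl.
# -nodes leaves the private key unencrypted (fine for a test server);
# -subj makes the run non-interactive.
openssl req -new -x509 -nodes -days 3650 \
    -subj "/CN=dev.osquest.com" \
    -keyout apache.key -out apache.crt

# Confirm the subject and expiry date of the new certificate
openssl x509 -in apache.crt -noout -subject -enddate
```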

First I create a directory for the certificates:

sudo mkdir /etc/apache2/ssl

Then I create the certificate:

sudo /usr/sbin/make-ssl-cert /usr/share/ssl-cert/ssleay.cnf /etc/apache2/ssl/apache.pem

Enabling SSL

Next up, I need to configure the default Apache site to listen for SSL connections. If I had already configured other sites, I’d need to configure those too. This is well covered in the Virtual Hosts section of this document. I won’t repeat all the steps here, but here’s my updated virtual host file: view file (obsolete file deleted)
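Since the original file is gone, the SSL-relevant part looked roughly like this. This is a sketch using Apache 2.2-era directives; the certificate path matches the apache.pem created above (which holds both key and certificate, so no separate key directive is needed), while the DocumentRoot is just an example:

```
NameVirtualHost *:443
<VirtualHost *:443>
    ServerName dev.osquest.com
    DocumentRoot /var/www/
    # Turn on SSL for this host and point it at the self-signed cert
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/apache.pem
</VirtualHost>
```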

In my installation the default ports.conf file was already set to listen on port 443 if the ssl module is loaded, but be sure to check it (it’s in /etc/apache2):
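For reference, the stanza to look for in ports.conf looks something like this (this is the form shipped in my version of Ubuntu; yours may differ):

```
# /etc/apache2/ports.conf -- listen on 443 only when mod_ssl is loaded
<IfModule mod_ssl.c>
    Listen 443
</IfModule>
```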


And finally, I need to enable the SSL module…

sudo a2enmod ssl

and reload Apache to enable all the changes I made:

sudo /etc/init.d/apache2 force-reload

Testing & Summary

I still haven’t created the actual dev.osquest.com website, but any connections should be sent to the default website. I test an HTTP and an HTTPS connection, and I get the “It Works” page that I created for the default site.

The self-signed certificate isn’t suitable for a production environment, but it’s fine for testing. I can tell my browsers to always accept the certificate since I know how it was created. But no one else would trust it (at least they shouldn’t). The screenshot below shows the certificate as seen by Firefox.


Also, only one certificate per IP address can be used, so if I host multiple websites, all but one of the sites will generate a second error saying that the certificate wasn’t issued for the site being accessed (this assumes that one site does in fact match). I’d have to assign each site a unique IP address to get around this.

So now I can access the web server on my vm by name, I can upload files via SFTP and I can test SSL pages. I guess I’ve put it off long enough and I’ll have to start building some websites.

Additional Reading

This thread on the Ubuntu Forum has a short discussion on the dropping of the apache2-ssl-certificate script from Ubuntu along with some workarounds, including the one I used.