Ubuntu Server Project #9: SFTP, Fake DNS, and Apache SSL

Things are moving along with the Ubuntu Server Project, but there are a bunch of small tasks and configuration changes that will make life easier going forward. This article will cover installing vsftpd, setting up a self-signed SSL certificate in Apache, and configuring my local Mac to access the Ubuntu server virtual machine by name. Even though the server is a VM sitting on my Mac and not accessible from the Internet, I’ll still be treating it as if it were on the Internet and needed to be secure.

vsftpd

The installation of vsftpd is simple using aptitude: sudo aptitude install vsftpd

Since I don’t plan to use regular FTP, just SFTP, I don’t have to make any changes to the iptables firewall settings. SSH connections are already allowed through and I still want to block regular FTP connections. I also want to limit connections to just the users that are set up on the server.

I fire up Transmit (my FTP client) and set up a connection with the following settings:

Server: 10.0.1.200  (the IP address of the VM)

User Name/Password: I leave these fields blank because I’m using the SSH keys from this Mac

Port: 22222  (the SSH port I configured)

Protocol: SFTP

I don’t set up any default remote path. I connect and it defaults to the home directory of my Ubuntu ID. I try a regular FTP connection; as expected, it fails due to the firewall. Even though I’m good to go, I’m going to go through the vsftpd configuration file and make some changes as if this were a live server. I load the file into the nano editor (sudo nano /etc/vsftpd.conf) and scroll down the file. It’s well commented, although it doesn’t contain all the configuration options.

I turn off anonymous ftp by changing anonymous_enable to anonymous_enable=NO.

At the end of the file I add ssl_enable=YES to explicitly turn on SSL. Even though they’re documented as the default settings, I also add force_local_data_ssl=YES and force_local_logins_ssl=YES to the end of the file in order to force all logons and connections to use SSL. You can view the complete vsftpd file here (obsolete file removed).
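Pulled together, the relevant lines in /etc/vsftpd.conf end up looking roughly like this (just a sketch of the settings discussed above; everything else stays at the defaults):

anonymous_enable=NO
# added at the end of the file
ssl_enable=YES
force_local_data_ssl=YES
force_local_logins_ssl=YES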

Editing the Mac OS X Hosts File

Apple has a support article for editing a hosts file which you can refer to if you’re using a version of OS X prior to 10.2. For my purposes I’ve decided to use a dev subdomain for the sites on my virtual server, so the website on my VM will be dev.osquest.com. I’ll add this to the local hosts file on my Mac so that it will resolve to my Ubuntu VM. I also add a fictitious domain just so I can test Apache with multiple domains. I’ll use fakedomain.ray as this domain. Because I’ll be resolving this name locally, the fact that it’s an invalid domain extension isn’t a problem.

I start terminal on my Mac and load the hosts file into the nano editor, using admin privileges:

sudo nano /etc/hosts

I want the domains to be dev.osquest.com and fakedomain.ray. The IP address of my VM is 10.0.1.200, so I add the following lines at the end of the hosts file:

10.0.1.200     dev.osquest.com

10.0.1.200     fakedomain.ray

10.0.1.200     www.fakedomain.ray

I add the www entry for fakedomain.ray so I can test both ways of addressing a domain.

Once I save the file I can ping the server by name from terminal:

(image lost: ping output showing the server responding by name)

If a site was already set up in Apache, or the default site was enabled, I could access it through the browser. It might be necessary to clear the DNS cache of the Mac if you make multiple changes. Run dscacheutil -flushcache from terminal to clear the cache in Leopard and lookupd -flushcache to clear the cache in Tiger. I can still access my production website from my Mac because only the dev subdomain is directed to the VM by my hosts file.

Self-Signed SSL Certificate

Because this is only a test server I’m going to set it up with a self-signed SSL certificate. With earlier versions of Ubuntu a self-signed certificate could be easily created by running sudo apache2-ssl-certificate. This script is no longer part of Ubuntu (because it was dropped by Debian) so I had to use a workaround. Since I already installed SSH, the tools needed to generate a self-signed certificate are already on the server.

I’ll use make-ssl-cert to generate the certificate. By default the certificate is only good for a month but I don’t want to generate a new certificate every month. A ten year certificate for testing should do nicely (well almost 10 years, I’ll ignore the days added in leap years). I’ll need to edit make-ssl-cert so I load it into nano.

sudo nano /usr/sbin/make-ssl-cert

Scroll to line 118 (at least in my file) or search for openssl req until you see the line:

openssl req -config $TMPFILE -new -x509 -nodes -out $output -keyout $output > /dev/null 2>&1

Change it to:

openssl req -config $TMPFILE -new -x509 -nodes -out $output -keyout $output -days 3650 > /dev/null 2>&1

Note the added -days 3650 parameter which will create a 10 year certificate. Once the modified file is saved I can create the certificate.

First I create a directory for the certificates:

sudo mkdir /etc/apache2/ssl

Then I create the certificate:

sudo /usr/sbin/make-ssl-cert /usr/share/ssl-cert/ssleay.cnf /etc/apache2/ssl/apache.pem

Enabling SSL

Next up I need to configure the default Apache site to listen for SSL connections. If I had already configured other sites I’d need to configure those too. This is well covered in the Virtual Hosts section of this document. I won’t repeat all the steps here, but here’s my updated virtual host file: view file (obsolete file deleted)
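Since that file is gone, here’s a minimal sketch of what the SSL portion of a virtual host can look like with the certificate created above; the ServerName and DocumentRoot are placeholders for illustration, and the certificate path matches the one used earlier:

<VirtualHost *:443>
    ServerName dev.osquest.com
    DocumentRoot /var/www/
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/apache.pem
</VirtualHost>

Because the .pem file generated by make-ssl-cert contains both the key and the certificate, a single SSLCertificateFile line should be enough here.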

In my installation the default ports.conf file was already set to listen on port 443 if the ssl module is loaded, but be sure to check it (it’s in /etc/apache2):

(image lost: ports.conf contents)
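For reference, the relevant part of ports.conf looked roughly like this on my install (a sketch, not the full file):

Listen 80

<IfModule mod_ssl.c>
    Listen 443
</IfModule>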

And finally, I need to enable the SSL module…

sudo a2enmod ssl

and reload Apache to enable all the changes I made:

sudo /etc/init.d/apache2 force-reload

Testing & Summary

I still haven’t created the actual dev.osquest.com website, but any connections should be sent to the default website. I test an HTTP and an HTTPS connection and I get the “It Works” page that I created for the default site.

The self-signed certificate isn’t suitable for a production environment but it’s fine for testing. I can tell my browsers to always accept the certificate since I know how it was created, but no one else should trust it. The screenshot below shows the certificate as seen by Firefox.

(image lost: Firefox view of the self-signed certificate)

Also, only one certificate per IP address can be used, so if I host multiple websites all but one of the sites will generate a second error saying that the certificate wasn’t issued for the site being accessed (this assumes that one site does in fact match). I’d have to assign each site a unique IP address to get around this.

So now I can access the web server on my vm by name, I can upload files via SFTP and I can test SSL pages. I guess I’ve put it off long enough and I’ll have to start building some websites.

Additional Reading

This thread on the Ubuntu Forum has a short discussion on the dropping of the apache2-ssl-certificate script from Ubuntu along with some workarounds, including the one I used.

Ubuntu Server Project #8: Apache Configuration

Images in this 9 year old article have been lost.

The previous article in my Ubuntu Server Project series covered the installation of Apache. Now it’s time for some configuration. I’ll start off by looking around the Apache installation, then make some minor configuration changes.

The Apache config folder is /etc/apache2 which contains the following files & folders:

(image lost: directory listing of /etc/apache2; the names in blue were folders)

The sites-available folder holds what its name says: the sites that are available. But just because they’re available doesn’t mean they’re enabled. There’s one site in the sites-available folder, the default site.


To check which sites are actually enabled I view the contents of the sites-enabled folder:

(image lost: sites-enabled listing)

This folder contains symlinks back to the sites-available folder for the enabled sites. As expected, the default site is already enabled. If a domain points to the server but doesn’t have a configuration file, the first enabled site (alphabetically) will be used. So with the name 000-default this site is likely to be the one used.

The mods-available and mods-enabled folders work the same way. They contain the modules that are available and those that are enabled. These are the available modules with the default installation:

(image lost: mods-available listing)

While these are the modules that are enabled by default:

(image lost: mods-enabled listing)

There are four commands that make managing the enabled sites and modules easier than having to create the symlinks manually with ln -s (an example of the manual equivalent follows the list). They are:

a2ensite and a2dissite enable and disable a site.

a2enmod and a2dismod will enable and disable a module.
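As an example, enabling the default site by hand would be roughly equivalent to creating the symlink yourself (the stock install names the enabled link 000-default so it sorts first):

sudo ln -s /etc/apache2/sites-available/default /etc/apache2/sites-enabled/000-default

The a2ensite and a2dissite scripts just manage these links for you, which is why they’re the safer choice.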

So to disable the default site I run:

sudo a2dissite default

and get the following message:

Site default disabled; run /etc/init.d/apache2 reload to fully disable.

I gracefully reload Apache (so existing connections aren’t killed) with:

sudo apache2ctl graceful

When I try to access the site in my browser I get a 404 not found error instead of the “It Works” message. I also see that the symlink is gone from the sites-enabled folder.

To enable the site again I execute:

sudo a2ensite default

sudo apache2ctl graceful

And the “It works” message returns. Now that I’ve looked around the structure of Apache it’s time to look at the configuration.

Port Configuration

By default Apache will listen on port 80 for http and port 443 for https (ssl). These are set in /etc/apache2/ports.conf, the contents of which are:

(image lost: ports.conf contents)

There’s no need for me to change anything.

Timeout & KeepAlive Configuration

The main Apache configuration file is /etc/apache2/apache2.conf which I open in the nano editor:

sudo nano /etc/apache2/apache2.conf

I scroll down the file and look at the various parameters. The first one I change is the timeout value, the default of which is 300 seconds. I change it to 45 seconds.

Timeout 45

Next up is KeepAlives, which are on by default (KeepAlive On). This allows persistent connections for a client so that each request (image, file, etc) doesn’t require a new connection. There are some additional KeepAlive parameters.

MaxKeepAliveRequests is described as “the maximum number of requests to allow during a persistent connection. Set to 0 to allow an unlimited amount. We recommend you leave this number high, for maximum performance”. I keep the default MaxKeepAliveRequests 100.

KeepAliveTimeout is described as “the number of seconds to wait for the next request from the same client on the same connection”. The default value is a rather high 15 seconds. There’s not a lot of interactivity on my pages so I’ll lower it to 3 seconds. If no new requests come in during that time the connection will be dropped. I change this to KeepAliveTimeout 3.
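Put together, the timeout and KeepAlive portion of apache2.conf now reads:

Timeout 45
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3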

Pre-Fork MPM

During the installation I selected Pre-fork MPM (apache2-mpm-prefork) which is described in the Apache documentation. I’ll keep these settings at the default. The related settings are shown below.

(image lost: prefork MPM settings)
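The screenshot is gone, but for reference the stock prefork section of apache2.conf looked roughly like this (these values are the usual Apache 2.2 defaults, so treat them as approximate):

<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>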

ServerTokens

ServerTokens determines what information is sent in the Server response header concerning the products and modules installed. The default is Full which sends a lot of information. While it doesn’t actually make things more secure there’s no sense broadcasting the information, so I change it to ServerTokens Prod which will just include Apache in the header.

ServerSignature

By default, server generated pages such as the 404 error page include a footer with server information.

(image lost: example of the server-generated footer showing server information)

Again, it won’t make things more secure but there’s no sense providing this information. I change ServerSignature to ServerSignature Off.

Virtual Host File

Each virtual host also has a configuration file which can override the main configuration file. In this case the ServerSignature setting doesn’t take effect because it’s also set in the virtual host file. So I save the main config file and open the virtual host file.

sudo nano /etc/apache2/sites-available/default

I change the ServerSignature parameter to Off just like I did in the main file.

I do a graceful restart of Apache with sudo apache2ctl graceful and test the change. Now there’s no footer in my 404 page.


Summary

This completes the configuration of the server software that can serve as a training platform and a solid test bed for my WordPress test environment. Next on the agenda is coming up with a directory structure for my web sites and setting up the virtual hosts.

Ubuntu Server Project #7: Apache & PHP Installation

Images in this 9 year old article have been lost.

Now that MySQL is installed the only remaining server software is Apache, PHP and WordPress. This time around I’ll install Apache and PHP. The installation is quick and easy; I’ll use aptitude to install both of them. As a refresher, if you haven’t read the previous articles, I’m building a WordPress test environment on an Ubuntu Server that’s running in a VMware Fusion virtual machine.

I connect to the server using a SSH connection and mount the CD-ROM image with the command mount /cdrom/ so that the Apache and PHP software can be installed from the CD image.

First off I install Apache by running:

sudo aptitude install apache2 apache2.2-common apache2-mpm-prefork apache2-utils libexpat1 ssl-cert

Aptitude will take care of any dependencies. The installation runs without a problem. Apache 2.2.4 is the version installed.

I connect to the server from my MacBook (a different machine than the one the VM is running on) using the IP address of the Ubuntu server in my browser (http://10.0.1.200) and the following screen is displayed:

(image lost: the default index page showing the apache2-default directory)

If I click on the apache2-default link I get the screen:

(image lost: the “It works!” page)

To make this second page the default, rather than the directory listing, I open the default vhost file:

sudo nano /etc/apache2/sites-available/default

Then I search for the line RedirectMatch ^/$ /apache2-default/ and uncomment it by removing the # and saving the file.

I then reload Apache so I can test the change. To reload Apache:

sudo /etc/init.d/apache2 reload

PHP Installation

I pick what appear to be the most common PHP modules for installation and install them through aptitude, which will handle the dependencies.

sudo aptitude install php5 libapache2-mod-php5

There are plenty of other PHP modules but I’ll start with the ones I know I’ll need and add others (like for MySQL) as they’re needed. I figure this way I’ll have a better idea of exactly what software depends on what other software or modules.
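For example, when the time comes to hook PHP up to MySQL it should just be one more aptitude install followed by an Apache reload (php5-mysql is the usual package name in the Ubuntu repositories; check with aptitude search php5 if in doubt):

sudo aptitude install php5-mysql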

I reload Apache one last time and I’m done:

sudo /etc/init.d/apache2 reload

PHP 5.2.3 is the version installed.

This was a nice short article, not much more than a couple Aptitude installs. At this point I have a working LAMP server. Next up I’ll configure Apache.

Ubuntu Server Project #6: MySQL Server Installation

It’s been over a month without any activity, but the server build is moving forward again. In the previous five installments I was installing, configuring and getting comfortable with the basic Ubuntu install. The past articles can be found in the Ubuntu Server Project section on my Linux page.

Today’s task is to install MySQL, which is very straight-forward. I’ll make some guesses at a low-memory configuration but I’ll wait until the server is completely up before drilling deep into the optimizing.

I’ll be needing the original Ubuntu server CD for the installation, so I make sure that the CD drive for the VM is configured to use the Ubuntu server CD image (ISO file). After I start the VM I connect with a remote SSH terminal session and mount the CD with the command mount /cdrom/.

Since the server has been untouched for over a month I start off by making sure the installed software is up to date by executing the following commands:

sudo aptitude update

sudo aptitude safe-upgrade

Once this is done I’m ready to go. To install MySQL I use the following aptitude command:

sudo aptitude install mysql-server mysql-client

mysql-server and mysql-client are meta packages. They’ll install the latest server and client software that’s in the repository, which is currently MySQL 5.0.45. After running the command aptitude tells me the list of packages that will be installed.


Then I’m prompted to insert the CD, which is already mounted, so I hit <return> to move along.

I’m then prompted to enter a root password for MySQL. Even though this is a private server off the Internet I go ahead and enter one. There’s no confirmation prompt so I type carefully.


The installation runs and finishes a couple of minutes later without any problems. Now it’s time to configure MySQL for my small 256MB server.

Configuring MySQL for a 256MB Server

I’m not using the word optimize here for a specific reason. I won’t start optimizing until I have a WordPress site built and can better test performance. I’ll be configuring MySQL to have a small memory footprint now and I’ll optimize later.

First I check the current SQL memory usage by running top then pressing <shift>-<m> to sort by memory usage. This shows MySQL using about 6% of memory.


I open up the MySQL configuration file in the nano editor:

sudo nano /etc/mysql/my.cnf

The first thing I want to do is disable InnoDB. InnoDB is a storage engine but WordPress uses MyISAM by default so I don’t need it. Since it uses a great deal of memory I’ll turn it off by uncommenting (remove the #) the line skip-innodb.

I then search for and change the following values:

key_buffer from 16M to 16K

This is probably too small a value. But I’ll start low and raise it once all the other server software is installed and I start evaluating performance. From what I’ve read this setting is critical to performance and I’ll probably want to increase it.

max_allowed_packet from 16M to 1M

I’ll also look at increasing this once everything is installed. Memory is only allocated when needed, but I shouldn’t be sending too much data in and out of MySQL through WordPress.

thread_stack from 128K to 64K

This server is not going to have a lot of concurrent connections so we’ll start low and see how things look.

thread_cache_size from 8 to 4

This is the number of threads that are cached (after a user disconnects). New threads are only created when there are none in the cache. Again, I lowered it because this will be a lightly used server.

I then added the following two new parameters just after thread_cache_size.

sort_buffer = 64K

I’m again starting with the smallest value that I’ve seen recommended for low memory servers and I’ll work my way up.

net_buffer_length = 2K

This is the starting size for the connection and result buffers. Both shrink to this size after each SQL statement.
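Pulled together, the changed portion of the [mysqld] section of my.cnf looks roughly like this (a sketch of just the lines I touched; everything else stays at the distribution defaults):

[mysqld]
skip-innodb
key_buffer         = 16K
max_allowed_packet = 1M
thread_stack       = 64K
thread_cache_size  = 4
sort_buffer        = 64K
net_buffer_length  = 2K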

After making all the changes I save the file and restart MySQL with the command:

sudo /etc/init.d/mysql restart

Then I run top and check the memory again. Now MySQL is only using 1.9% of memory.


So I’ve dropped memory usage by 2/3. Performance tuning will wait for a later session. Apache and PHP 5 are next on the to do list.

Reference:

Optimizing Apache & MySQL for Low Memory (Part 2) at Emergent Properties

MySQL Memory Usage at MySQL.com

InnoDB article at Wikipedia

Ubuntu Server Project #5: Getting Comfortable With Ubuntu

This is a bit different than the other posts as I won’t actually be installing any major software. Instead I’ll be customizing Ubuntu to make it easier for me to use and finding programs to monitor my server.

System Information

First I’ll want some commands that tell me about the system. Since there’s only 256MB of memory allocated to this Ubuntu Server virtual machine I’ll want to keep tabs on memory usage. I can do this with the free command and use -m to have the info displayed as easy to read megabytes.

free -m

This will display the amount of memory used.

(image lost: free -m output)

The first line includes cached memory so I’m more concerned with the second line which shows I’m using 16MB and have 233MB free. The third line shows I’m not using any swap space which is nice. This will be my baseline and I can monitor it as I install software.

If I want more detailed memory usage I can use cat /proc/meminfo.

If I need a reminder of the version I’m using I can use cat /etc/issue which will display the Ubuntu version. lsb_release -a can also be used to display version information.

The top command displays information on running processes and system resources. It’s updated in real time and you can exit by typing q. Pressing <shift>-<m> while top is running will sort the processes based on memory usage.

uname -a prints the machine name and kernel information along with a few other things.

(image lost: uname -a output)

As the above output shows it was necessary for me to use a different kernel in order to run Ubuntu under Parallels.

df -h can be used to display disk usage. -h means human readable (sizes in KB/MB/GB) as opposed to blocks.

Screen

Screen is a terminal multiplexer that allows multiple sessions in one terminal window much as the console does. In addition, it provides the ability to disconnect a session and return to it later, or continue processing if a session is interrupted.

To install screen I execute:

sudo aptitude install screen

As a side note: Even though I left the Ubuntu Server CD image connected to the VM I had to mount it manually for aptitude to use it. I issued mount /cdrom to mount it.

There’s a good screen tutorial at Linux Journal so I won’t go into it here.

Build-Essentials

build-essential is an Ubuntu meta-package of programs that are frequently needed to properly install other programs, so I want to install it. I run:

sudo aptitude install build-essential

The install is problem free.

Shortcuts (Aliases)

There are some commands I’m going to be using a lot. To save time typing, especially since my typing is pretty bad, I set up some aliases. I open my bash configuration file in the nano editor so that I can add some aliases.

nano ~/.bashrc

I scroll down until I find the Alias Definitions section.

(image lost: Alias Definitions section of .bashrc)

I uncomment the last 3 lines shown above so that I can put the aliases in a file. I could add the aliases in this file but I like the idea of using a separate file just for the aliases. Remove the # to uncomment the lines. I save the file then use nano to create the ~/.bash_aliases file.

nano ~/.bash_aliases

I add the following aliases to the file:

alias free="free -m"
alias install="sudo aptitude install"
alias newalias="nano ~/.bash_aliases"
alias remove="sudo aptitude remove"
alias update="sudo aptitude update"
alias upgrade="sudo aptitude safe-upgrade"

The first one makes it slightly easier to get free memory, the third opens the alias file for editing, while the others simplify the aptitude command line. To run a command I can just type the alias, adding any necessary command-line options after it. It’s necessary to log out and log back in when making these changes since the bash configuration is only read during logon.
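As a quicker alternative to logging out, the current session can re-read the bash configuration (since .bashrc pulls in .bash_aliases, this picks up new aliases too):

source ~/.bashrc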


Well, I’ve got aliases to make my life easier and I’ve got system utilities to monitor resource usage as I install new software. Next on the agenda is the MySQL installation.

Ubuntu Server Project #4: Iptables Firewall

Continuing along the security theme set by the previous article I’ll configure some simple iptables firewall rules for my Ubuntu Server virtual machine. Iptables can be pretty complicated and I won’t attempt to go into great detail. Since this is a virtual machine only accessible from within my home network I have the luxury of being able to play without having to actually be concerned with security. So iptables will be set up for the experience and for future testing.

Iptables is installed with every Ubuntu installation so there’s nothing new to install. We just need to configure the rules that iptables needs to use. Since I’m setting up a web server I’ll create rules to allow SSH (port 22222), HTTP (port 80) and HTTPS (port 443) traffic.

I’m going to create two files that contain the iptables rules. One will be used for testing and the other will be for production. The production rules will be permanent and load during reboots. The test rules will be in file /etc/iptables.test.rules and the production rules will be in file /etc/iptables.prod.rules.

The Rules

I connect to the Ubuntu server using SSH from the terminal on my Mac. Everything done related to iptables has to be done as root so I issue the command:

sudo -i

and enter my password when prompted. Now I won’t have to use sudo as a prefix for each command.

For my first step I’ll save any existing rules to the production file using the command:

iptables-save >/etc/iptables.prod.rules

On my freshly installed Ubuntu server this generated the following file contents:

(image lost: initial contents of iptables.prod.rules)

To list the current filter rules on the screen I run iptables with the -L switch.

iptables -L

which results in the following information:

(image lost: iptables -L output showing the default empty ACCEPT chains)

What the above means is that anything from anyone on any port will be accepted. I’m not a fan of the theory that as long as nothing is running on the ports then nothing needs to be blocked. I am a fan of blocking everything except traffic which this server is intended to handle. So I’ll be setting up some rules to restrict traffic. Initially I’ll be doing this in the /etc/iptables.test.rules file. During this time I’ll keep my existing terminal connection active and actually start a second session just to be sure. This way if a test rule blocks SSH I’ll have an existing connection that I can make the change with. (OK, it’s a VM on my Mac so no second session, but if it was a remote server I’d set up the second session as a safety measure.)

I start off with some very simple rules which are based on information found in the Ubuntu Documentation Iptables HowTo. Rules are processed top to bottom and once a decision is made about a packet no more rules are processed.

A lot of traffic on the server uses the loopback interface and we want to allow it all. No reason to stop intra-server communication. So I add the lines:

-A INPUT -i lo -j ACCEPT
-A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT

The first line says to accept all traffic on the loopback interface. The second rule says to reject all traffic that uses the loopback address but isn’t on the loopback interface. -A means append the rule to the chain. INPUT is the chain to add the rule to. Valid chains are INPUT, FORWARD and OUTPUT as shown in the previous screenshots. -i means to only match if the traffic is on the specified interface. lo is the loopback interface. -j is the action to take with the packet. Valid actions are ACCEPT, REJECT (reject and notify the sender), DROP (silently ignore) and LOG. The ! in the second line means “not”, so in this case it means traffic not on the loopback interface. -d indicates the destination address, which can be an IP address or network. In this case it’s the loopback address range.

Then I’ll add a rule to continue to accept all established connections:

-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

State matches are described in greater detail at faqs.org. But this rule says to accept all traffic for an ESTABLISHED connection that has seen traffic in both directions. It will also accept traffic for new connections if it’s associated with an established connection, these are the RELATED packets.

Next I’ll allow all outbound traffic. I’ll leave restricting outbound traffic for another day.

-A OUTPUT -j ACCEPT

Now I’ll enable web traffic on the common ports of 80 for HTTP traffic and 443 for HTTPS traffic.

-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT

The -p specifies the connection protocol used, in this case tcp, and --dport indicates the destination port.

Now I’ll allow SSH traffic. Use the same port specified in the sshd_config file. In my case it was port 22222.

-A INPUT -p tcp -m state --state NEW --dport 22222 -j ACCEPT

In this rule the state parameter is used to allow the creation of a NEW connection. The previously defined rule for established connections will apply once the connection is created by this rule.

Next up is a rule to allow pings.

-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

In this rule icmp is the protocol used. A complete list of icmp types is at faqs.org, which shows 8 as an “echo request” type.

Now I’ll create a rule to log incoming packets that are denied by iptables.

-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

This rule will log denied packets, up to 5 a minute. It will prefix the log entries with “iptables denied: “. The LOG action doesn’t stop rule processing so the packets will be processed by any following rules. The reason we know these packets will be refused is because the only rules that follow will reject the packet. So if a packet has reached this rule there isn’t a chance for it to be accepted.

So the rules to deny any remaining packets are:

-A INPUT -j REJECT
-A FORWARD -j REJECT

The rules file needs to begin with *filter and end with COMMIT. The complete iptables rules file is available as a text file.
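In case that file is no longer available, here is the complete test rules file assembled from the rules above:

*filter

# Allow all loopback traffic, reject anything using a 127.0.0.0/8 address that isn't on lo
-A INPUT -i lo -j ACCEPT
-A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT

# Accept traffic for established and related connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow all outbound traffic
-A OUTPUT -j ACCEPT

# Allow web traffic
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT

# Allow SSH on the non-standard port
-A INPUT -p tcp -m state --state NEW --dport 22222 -j ACCEPT

# Allow pings
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

# Log denied packets, up to 5 a minute
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

# Reject everything that's left
-A INPUT -j REJECT
-A FORWARD -j REJECT

COMMIT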

Enforcing the Rules

I save the rules to /etc/iptables.test.rules and then run the following command to load them in:

iptables-restore </etc/iptables.test.rules

Then to see if anything actually changed I run iptables -L and compare it to the previous results. As the screenshot below shows they are different.

(image lost: iptables -L output after loading the new rules)

Now it’s time to test the critical SSH connection. I open a new terminal window and try a connection. It works and the other rules seem correct so I’m all set. If it failed I’d still have my existing connection to fix the problem (assuming the rules to allow existing connections took effect).

Now I need to make these rules permanent. First I’ll save them to my production rules file:

iptables-save >/etc/iptables.prod.rules

Now I need to make sure the rules are loaded at startup. I load the file /etc/network/interfaces in the nano editor. I add the following line at the end of the loopback section:

pre-up iptables-restore </etc/iptables.prod.rules

The screenshot below shows my updated interfaces file.

image has been lost

The final test is to restart Ubuntu server and make sure the rules are still in place.

So now I have a basic server setup and it’s running a simple firewall. I’ll probably spend a little time exploring Ubuntu before I start installing the server software.


Ubuntu Server Project #3: Networking & SSH Setup

This post is obsolete and screenshots have been removed.

This is the third installment in my Ubuntu Server Project series which documents my efforts to get a working copy of WordPress running on Ubuntu Server 7.10. It’s summarized, with links to past articles, on my Linux page.

Ok, technically security should have been set up immediately after installation so this should have been the second installment and not the third. But Ubuntu was a VM on my own desktop and wasn’t on the Internet so I wanted everything nice and up to date before proceeding.

Setting Up Networking

I could keep working with the Ubuntu 7.10 Server locally on my desktop and get right to the installation, but I want to start dealing with it as if it’s a remote server. So the first thing I’m going to do is get the IP address and MAC address for my Ubuntu server so I can connect remotely. I log onto the console and issue the command ifconfig to get the IP address along with the MAC address. The screenshot below shows the results on my Ubuntu Server with the IP and MAC addresses indicated.

(image lost: ifconfig output with the IP and MAC addresses highlighted)

It’s worth mentioning that I set up the Parallels VM to connect via a bridged network so it gets its own unique IP address rather than sharing (via NAT) with the host OS. While the IP address will probably stay the same, it’s assigned by DHCP and could change. It’s my internal DHCP server (actually my Airport Extreme) so I’m going to reserve the DHCP address for this Ubuntu Server instance. To do that I need both the IP and MAC addresses.

I’m concerned with the adapter labeled eth0. The IP address is on the second line and is labeled inet addr. The MAC address is on the first line and is indicated by HWaddr. Most home routers can do DHCP reservations although the methods vary; look for the term DHCP reservation. All you should need is the IP and MAC addresses. A note for Airport Extreme users like me: even though there’s no good reason for it, adding a DHCP reservation forces a router restart.

If you don’t want to set up a reservation you can just lookup the ip address when it changes and you can’t connect.

Installing SSH Server

I want to install an SSH server so I can securely connect to the server remotely. (Remember, I’m treating this like a remote server.) I log on to the Ubuntu console and run the following commands:

I want to make sure my package list is up to date:

sudo aptitude update

Then to install the SSH server:

sudo aptitude install ssh

Aptitude tells me:

(image lost: aptitude’s list of packages to install)

and I approve the installation, which finishes without error. The SSH server is installed and I’m done with this step. For information about aptitude see my previous article.

The whole point of SSH is security. In the next step we’ll see that our first SSH connection from a workstation says the host is unknown and provides a fingerprint. Now, this is an internal private network, the host is really a VM running on the same machine, and we’ll be connecting via IP address. But for security purposes we’ll get the “RSA key fingerprint” while we’re here. I execute the command (on the Ubuntu server):

ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub

Note that I don’t need to use sudo. As the extension .pub implies, this information is public for all to see. The response I get is:

2048 64:93:11:41:b7:31:cf:66:41:cb:7c:4f:37:3b:89:e8 /etc/ssh/ssh_host_rsa_key.pub

That long colon-delimited number is the server’s RSA key fingerprint. Whenever I attempt an SSH connection from a new machine I will be presented with that number. If it doesn’t match then I’m connecting to another machine, either by error or by mischief.

There’s also another type of key generated during the install, a DSA key, which is another form of key signature. To get this fingerprint execute:

ssh-keygen -l -f /etc/ssh/ssh_host_dsa_key.pub

From this point on I will do everything on my Mac and treat the Ubuntu Server as if it’s a remote server, although doing the server work from the Ubuntu console and the local work from terminal would be much simpler.

Setting Up SSH Public/Private Key

SSH provides secure, encrypted access to the server’s console. I’ll set up a public/private key pair for my iMac and the server; this way when I want to connect I don’t need to enter a password. Public/private keys should only be used when the local workstation is secure, since anyone who has access to the workstation can access the server.

I’m going to test the SSH connection before proceeding. I open terminal on my Mac and execute:

ssh ray@10.0.1.200

I’m told the authenticity of the host can’t be established and I’m presented with that fingerprint. It matches what I know to be the server so I type yes to continue connecting. Then I’m told Warning: Permanently added '10.0.1.200' (RSA) to the list of known hosts. This means future SSH logons from this machine will not generate the authenticity prompt. The SSH connection is working.

I logout of the connection but stay in terminal. (I could just open another terminal window, but I’m easily confused.)

First, I’ll create a folder on my local Mac to hold the keys. I execute:

mkdir ~/.ssh

This folder may already exist, and should have been created when the server was added to the known hosts list. If it does exist you’ll get an error that it can’t be created and you can move on. The ~ indicates your user home directory. The folder will be created in your home directory and the “.” means it will be hidden (at least in Finder).

Now I create a public/private key combination for my Mac by executing:

ssh-keygen -t rsa

This will generate a public/private key using rsa encryption. Two files will be created in ~/.ssh called id_rsa and id_rsa.pub. The private key is id_rsa and should never be put in any public place. The public key is id_rsa.pub. During the key creation I was asked to confirm where I wanted to put the files and if I wanted a passphrase. I accepted the default for location and hit enter for an empty passphrase.

Now I copy this to the server using the secure copy command.

scp ~/.ssh/id_rsa.pub ray@10.0.1.200:/home/ray/

This will copy the public key file to my home directory on the server. I’m prompted for a password but since scp encrypts the password it’s safe to enter it. Change the ip address to your own address and substitute your ID for ray.

Now I need to configure the public key on my Ubuntu server. Still in terminal I execute

ssh ray@10.0.1.200

and enter the password to connect to the Ubuntu server console. I’ll create a directory for the authorized public keys and move my key into it, changing the name of the file in the process.

mkdir ~/.ssh

mv ~/id_rsa.pub ~/.ssh/authorized_keys

This moves the id_rsa.pub file to the newly created .ssh directory and renames it to authorized_keys. Now I need to set the permissions for the directories.

chown -R ray:ray ~/.ssh

This changes ownership of the directory. -R means to apply recursively and I’m saying to change the owner to the user and group ray. Substitute whatever ID you created.

chmod 700 ~/.ssh

chmod 600 ~/.ssh/authorized_keys

This changes the access permissions for the directory and file. The 700 means only my ID can read, write, or execute files in the directory. The 600 means only I can read or write the file (no execute privilege).

Now I need to configure the SSH server.

Execute:

sudo nano /etc/ssh/sshd_config

The sshd_config file is loaded in the nano text editor. Scroll up and down using the arrow keys. Help is along the bottom; ^ means the control key.

Scrolling down the file I make the following changes:

port 22222

Near the top you’ll see Port 22. For security purposes it’s good to change this port number, since it makes it a little harder for people to find the SSH server. You need to pick a port that’s above 1024 and not being used on your system. Port numbers in the range 1024 to 49151 may be registered and used by specific applications. Port numbers between 49152 and 65535 are dynamic and aren’t reserved for any particular use. You can pick any port above 1024 as long as it won’t be used by something else on your server. A list of registered ports is maintained by IANA. I picked 22222 because it’s easy to remember and not currently registered to anyone.

PermitRootLogin no

This means the root user can’t log in through ssh. This is a bit redundant with Ubuntu since the root user can’t logon in a typical installation.

AuthorizedKeysFile %h/.ssh/authorized_keys

I just needed to uncomment this by removing the # at the beginning of the line. Notice it points to the public key file we created (%h is expanded to the user’s home directory).

PasswordAuthentication yes

I uncomment this so that I can log on with a password in addition to keys. The key will be used if available; if not, there will be a password prompt. If all your PCs are secure and can use public/private keys you can set this to no, which means the keys must always be used. Just don’t lose the keys.

X11Forwarding no

There’s no GUI on this server so I turned this off.

UsePAM no

I’m not using the PAM module.

I added the following new lines at the end of the file.

UseDNS no

I’ve seen there were some past issues resolved with this setting and I don’t need DNS lookups for my clients.

AllowUsers ray

This specifies which users are allowed to connect via SSH. Separate multiple users with spaces.
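For reference, here’s the full set of sshd_config changes and additions from above in one place:

Port 22222
PermitRootLogin no
AuthorizedKeysFile %h/.ssh/authorized_keys
PasswordAuthentication yes
X11Forwarding no
UsePAM no

# added at the end of the file
UseDNS no
AllowUsers ray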

I write the file with ^O and then exit with ^X. (^X will prompt to save but I’m paranoid and save first anyway).

Finally I need to restart SSH so I enter:

sudo /etc/init.d/ssh restart

Then I logout and login again. If everything is set up right I shouldn’t be prompted for a password, and I’m not. The proper ssh command (from OS X terminal) with the port change is:

ssh -p 22222 ray@10.0.1.200

If you want to enable the dsa key instead, or create the dsa keys in addition to the rsa keys you can repeat the process, substituting dsa for rsa. Instead of the command mv ~/id_rsa.pub ~/.ssh/authorized_keys you will need to concatenate the new file with the authorized_keys file. Use the following command to do this after copying id_dsa.pub to your home directory.

cat ~/id_dsa.pub ~/.ssh/authorized_keys >~/.ssh/newkeys

You can chain multiple key files together in one command. Then copy the newkeys file over the authorized_keys file:

cp ~/.ssh/newkeys ~/.ssh/authorized_keys

To delete the id_dsa.pub file from your home directory after it’s concatenated to authorized_keys run

rm ~/id_dsa.pub

I can repeat the public/private key generation from my other computers and use the above concatenation command to add the public keys to the authorized public keys list or stick to passwords since I won’t be using those computers very often.

So the server is up and running and we can securely connect. Next up I’ll get a basic firewall going and then I’ll finally be ready to install some software.

Additional Reference

OpenSSH Quick Reference (PDF)

SSH Host Key Protection – An article From Security Focus that describes the use of SSH and provides some background.

OpenSSH.com is the OpenSSH project website which has a OpenSSH FAQ.

Ubuntu Server Project #2: Updating the Install and the Basics


This is the second installment in my Ubuntu Server Project series which documents my efforts to get a working copy of WordPress running on Ubuntu Server 7.10. It’s summarized, with links to past articles, on my Linux page or go to the previous article about installing Ubuntu Server 7.10.

My experience with the *nix command line is limited, as there’s always been a GUI. I think the most I did was over 10 years ago when I did some work on HP-UX. So I’ll be starting with the very basics, and will probably get some things wrong.

First up I’ll be needing some command line basics.

Getting Help

The commands I’ll be using have man (think manual) pages on the system (at least the ones I’ll be using at first will). So first up I’ll need to know how to use man. The syntax of man couldn’t be simpler, it’s:

man command

where command is the Ubuntu command for which you want the manual page.

I’ll be using aptitude to update my Ubuntu install so I issue the command:

man aptitude

and the man page for aptitude is loaded. To navigate use the <spacebar> to move forward and the <b> key to move back. Hit the <q> key to exit. Man also has a bunch of switches to search and use numerous other features but I don’t need those now.

You can also get help for most commands by typing the command followed by the -h parameter. The text will probably be more than a screen can handle and some of it will scroll off the top. To scroll up use <shift>-<pageup> and use <shift>-<pagedown> to head back down. I’ve read that shift-uparrow and shift-downarrow can be used but they don’t work for me; it could be a Mac/Parallels thing rather than bad info. To get back to the command prompt, release the shift key and either start typing your command or just hit <return> (be sure to delete anything that was typed).

Virtual Consoles

Ubuntu, and Linux in general, has virtual consoles even when at the command prompt. There are 6 of them. To switch virtual consoles type the keys <alt>-<Fn> where n is a number 1-6. For Mac users the alt key is the option key. Also, for fellow users of the new Mac keyboards the function keys default to their special features (Dashboard, Spaces, volume, etc…) so you’ll need to hold the <fn> key too. Or, you can do like I did and go into the keyboard section of System Preferences and enable standard function keys.

Then you’ll have to use <fn> for the special functions, but not for apps or for switching consoles.

With virtual consoles I can use one for the man page and another for the actual commands. In addition, each console requires its own logon so different IDs could be used. Long commands can run in one console while I work in another.

Aptitude vs. Apt-Get

Ubuntu uses Debian’s Advanced Packaging Tool (apt). I came across two commands for managing this from the command line. They are aptitude and apt-get. They seemed similar but different so I figured I needed to pick one and stay with it. I decided to go with aptitude. I did read that mixing in apt-get after using aptitude could cause problems with aptitude because aptitude wouldn’t know about all the dependencies.

Aaron Toponce has a recent article with well laid out logic for aptitude which is based on this older explanation of aptitude. But there does seem to be a minor religious war over the best package management system.

Since I’m starting with a fresh system it seems aptitude is the way to go. It just made sense. Besides, if I get tired of the command line aptitude has a curses interface (menu system).

Apt-get does have super cow powers while aptitude does not, which is the only reason I considered using apt-get.

Updating Ubuntu Server 7.10 – It’s Why We’re Here

The whole goal here was to update my original Ubuntu installation and now I’m finally ready.

I’m logged onto the console with my ID and I need to enter two commands. I’ll be starting each command by specifying sudo which will run the command as the superuser. I’m using the default configuration so I’ll be asked to authenticate with my password which is the id/password created during the Ubuntu Server installation.

The two commands are:

sudo aptitude update

As I mentioned, sudo means run as superuser, aptitude is the package manager I’m using and update is the action that aptitude will perform. Update tells aptitude to get a list of new/upgradable packages.

sudo aptitude safe-upgrade

The safe-upgrade action tells aptitude to upgrade packages to their latest version. Packages will not be removed unless they are unused. Packages which are not currently installed will not be installed. If it’s necessary to remove or install one package in order to upgrade another, this action may not be able to handle it. Full-upgrade can be used in this situation. Aaron Toponce has an article describing the difference between safe-upgrade and full-upgrade. As the name implies, safe is more conservative. If it fails to update a package I can do further research and make a decision. Full-upgrade was formerly called dist-upgrade.

I issue the update command and the package info is quickly updated. The safe-upgrade command upgraded 16 packages without error.

I then re-issued each command to make sure there weren’t any further updates. There weren’t so I’m done. I saved a snapshot in Parallels and shut things down.

Summing Up

Even though this was the basics it does cover the things I had to learn to get going. Rather than following a book and having it set the agenda I figure I’ll learn as I go. Good idea or not?

If you think you’d prefer apt-get due to the super cow powers type apt-get moo to see if you want the feature.