Ubuntu Home Server: The OS Install and RAID Configuration

The first step for the Ubuntu Home Server was to examine the RAID options that were available with my existing hardware and software. Once that was done I built the server and have moved over 7 TB of data to it so far.

Cooler Master HAF 932 Case

As I previously mentioned, I’ve decided to use Ubuntu for my Home Server now that Windows Home Server will lose the biggest reason I loved it. Thanks to some well-placed vacation I had five days to explore the possibilities. By Tuesday the Ubuntu Home Server held all my files and was serving them. The WHS still handled backups, but that’s all. Since then I’ve settled on a configuration that seems to work for me and have moved 7 TB of data to the server so far.

I used Ubuntu Server 10.04.1, which is the Long Term Support version and will be fully supported until April 2015. This means I won’t be forced into an upgrade by an obsolete OS anytime soon, and since I’m looking for stability over cutting-edge technology, I probably won’t go looking for a full OS upgrade either.

I’m using Samba to serve files to my Windows and OS X computers. This seems to be the weakest link in the Ubuntu Home Server chain. But since I’m only serving files within my house I’m avoiding most of the complexity.

I will lose the ability to do PC backups directly to the server, unless I find some replacement software. I’ll also lose the ability to run offsite backups to KeepVault from the server. So for now a scaled down WHS v1 will remain to handle these backup functions.

RAID Configuration

The RAID configuration was the nut I had to crack first. Linux doesn’t work well with the fake RAID offered by my motherboard controllers (based on my research, not experience), so I didn’t consider it. First off, I broke the mirror that was set in the BIOS for the OS drives. From now on, everything would be set up in Ubuntu.

The first time I installed Ubuntu Server it said that some SATA RAID configurations had been found and asked if I wanted to activate the RAID. I answered “No”. I did several installs and was only prompted the first time, so it must have seen traces of the old mirror on the drives.
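If you’d rather clean those traces off than answer “No” on every install, the leftover metadata can be wiped by hand. This is a sketch, not something I actually ran; the device names are assumptions for wherever the old BIOS mirror lived, and these commands destroy the RAID metadata, so be careful:

    # List any BIOS fake-RAID metadata that dmraid can still see
    sudo dmraid -r

    # Erase the fake-RAID metadata from the old mirror members
    sudo dmraid -rE /dev/sda
    sudo dmraid -rE /dev/sdb

    # Any leftover Linux software-RAID superblocks can be cleared too
    sudo mdadm --zero-superblock /dev/sda1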

I spent some time testing Ubuntu’s software RAID using mdadm and didn’t have any problems. I did a completely fresh install between each RAID configuration and deleted the partitions each time. Everything was done using the standard Ubuntu Server 10.04.1 DVD; I didn’t have to use any additional drivers. I did apply the updates and patches released since the DVD, but stuck with the pre-configured Ubuntu repositories.
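For checking on the arrays between tests I mostly relied on two commands. Assuming the data array came up as /dev/md1 (yours may be numbered differently):

    # One summary block per md device, including any sync/rebuild progress
    cat /proc/mdstat

    # Detailed view of a single array: state, member disks, spares
    sudo mdadm --detail /dev/md1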

After I got comfortable with the software RAID I decided to try LVM (Logical Volume Manager) so that I could easily expand the volume in the future. This was easier than I thought, but I’m getting ahead of myself.

In all my configurations the two 320 GB drives were mirrored to handle the root partition (the OS), so as I talk about the configurations I tried I’ll be ignoring this mirror.

I configured all the arrays during the installation phase. This seemed the easiest thing to do since it handled all the mount points.

I have two RAID controllers: an Intel controller on the motherboard with 6 ports and a 3Ware controller with 4 ports. (The 320 GB drives are on the motherboard’s 2-port Gigabyte controller, which is a third controller.) All the drives I’m using are 2 TB drives, but the 3Ware had the same model Hitachi drives, while the Intel had two Samsung, two Western Digital and two Hitachi. The Hitachis are 7200 RPM drives while the others are 5400 RPM drives. I was concerned the mixed drives would cause a problem, but so far it’s been OK, and I haven’t found any information saying it would be. Still, if I had enough similar drives I would have used them. On the other hand, if I have to buy enough drives to match them up I’ll have an insurmountable budget problem.

My first configuration had two RAID 5 arrays, one for the drives on the Intel controller and one for the 3Ware controller drives. This made sense to me since the performance within each array would be the same. I lost two drives for data protection, one in each array. I wasn’t using LVM at this time, and I mounted the 6 TB (usable) 3Ware array as /home and the 10 TB (usable) Intel array as /shares. The problem with this is I couldn’t come up with a good way to split the files. I’d either waste space or run out of room in the future. This is when I started looking at LVM.

But rather than using LVM to combine the two different arrays into one volume group, I re-installed and configured one big RAID 5 array and mounted it as /home. This seemed to work well and I only lost one drive to data protection. But the problem with this was the time it took to resync/rebuild the array. I didn’t actually wait for it to finish, but it was slow, and if the pace stayed consistent I figured it would take over two days. With 10 drives in the array I figured it was possible I’d lose a second drive before the array could be rebuilt. Unlikely, but possible, and Mr. Murphy likes to visit.

So my third and final install was a single RAID 6 array with one hot spare. RAID 6 provides dual parity, so in effect I lose two drives to data protection. It’s possible that writing the second set of parity will slow write performance, but it may not be any worse than RAID 5. With the hot spare I lose another drive, so I give up the equivalent of three drives to data protection. But the hot spare can take over as soon as the first drive fails without requiring me to do anything, and if a second drive fails before the rebuild finishes I’m still covered. Since I used duplication for everything on WHS this is still an improvement. Plus I can add more drives without losing any more of them to data protection.
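I built the array in the installer, but for reference, the equivalent mdadm command looks something like this. The device names (/dev/sdc1 through /dev/sdl1) are assumptions for a ten-disk setup, not what I actually typed:

    # 10 disks total: 9 active RAID 6 members plus 1 hot spare
    sudo mdadm --create /dev/md1 --level=6 \
        --raid-devices=9 --spare-devices=1 \
        /dev/sd[c-l]1

With dual parity across the nine active members that works out to seven drives of capacity, or about 14 TB usable, which matches the three-drive cost described above.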

Installation

The installation is well documented in the Ubuntu docs so I won’t repeat it all. The Advanced Installation procedures provide all the information needed to configure software RAID. I did enable booting while degraded for the OS mirror. Also, it wasn’t obvious to me from the instructions, but the LVM configuration happens in the same session as the RAID configuration, and I couldn’t finish saving the RAID configuration until the LVM volume was configured (well, I could have, if I’d wanted to skip LVM entirely).
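The boot-while-degraded choice ends up in a config file, so it can be changed later without reinstalling. A sketch of what I believe it looks like on 10.04 (check your own file rather than trusting this):

    # Re-ask the mdadm questions, including boot-while-degraded
    sudo dpkg-reconfigure mdadm

    # Or look at the setting directly
    cat /etc/initramfs-tools/conf.d/mdadm
    # BOOT_DEGRADED=true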

In my configuration all ten 2 TB data drives are set up as a single RAID 6 array with one hot spare. That RAID array is then configured as a single logical volume, which is mounted as /home.
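The installer did all of this for me, but the same layering from the command line would look roughly like the sketch below. The names (/dev/md1, volume group vg_data, logical volume lv_home) are my own illustration, not what the installer generates:

    # Make the RAID array an LVM physical volume
    sudo pvcreate /dev/md1

    # Put it in a volume group, then carve one logical volume from all of it
    sudo vgcreate vg_data /dev/md1
    sudo lvcreate -l 100%FREE -n lv_home vg_data

    # Create the filesystem and mount it as /home
    sudo mkfs.ext4 /dev/vg_data/lv_home
    sudo mount /dev/vg_data/lv_home /home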

I used LVM up front as I think it will give me more flexibility in adding drives: I can add capacity without being forced to grow the existing array. I may never need the flexibility, but it doesn’t seem to add any complexity.
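That flexibility is the whole point of the extra layer. If I outgrow the array, the expansion I have in mind would go something like this (a hypothetical second array, /dev/md2, built from new drives, with the names carried over from the sketch above):

    # Turn the new array into a physical volume and add it to the group
    sudo pvcreate /dev/md2
    sudo vgextend vg_data /dev/md2

    # Grow the logical volume into the new space, then grow the filesystem
    sudo lvextend -l +100%FREE /dev/vg_data/lv_home
    sudo resize2fs /dev/vg_data/lv_home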

I let the Ubuntu installer install Samba and OpenSSH. I’ve never installed Samba before, so I don’t have any preconceived notions about how I want it set up. I do plan to install a full LAMP stack, but since I want that done my way I’ll do it manually.

I’m still playing around with Samba so I’ll leave my discussion of it for another time (besides, this is long enough already).

Post Installation

The initial build (aka sync) of the array takes a long time. Mine took roughly two days. The array can be used during that time, and in fact I had my file copies running almost continuously for the first day. The performance I saw seemed fine (actually faster than my WHS), although the copies probably extended the sync time. Also be aware that the sync time doesn’t depend on the amount of real data: it builds parity for every block, whether or not it holds a file. It’s not like WHS file duplication, which just copies a file to a second drive.
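The sync progress and an estimated finish time show up in /proc/mdstat, and the kernel’s rebuild speed caps are tunable if you’d rather trade some responsiveness for a faster build. The speed number below is just an example:

    # Refresh the sync progress every 30 seconds
    watch -n 30 cat /proc/mdstat

    # The kernel-wide rebuild speed limits, in KB/s per device
    cat /proc/sys/dev/raid/speed_limit_min
    cat /proc/sys/dev/raid/speed_limit_max

    # Raise the floor to push the rebuild harder (not persistent across reboots)
    echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_min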

Ubuntu (actually, I think any Debian-based Linux) schedules a RAID check to run on the first Sunday of every month. I noticed it running and, based on its scheduled start time, it took about a day and a half. It didn’t have a noticeable impact on performance (based on my usage of the server, not any actual benchmarks). But since this is a home server, Sunday is actually a heavy usage day for me, so I’ve removed the task from the cron schedule by commenting out the line in /etc/cron.d/mdadm (adding a # at the beginning). I didn’t want to delete it since it’s something I may still want, just on a different schedule.
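For reference, the relevant part of /etc/cron.d/mdadm looks roughly like this after my edit (the exact line varies between releases, so check your own file rather than copying this):

    # Runs checkarray at 00:57 on the first Sunday of each month.
    # Commented out so the monthly check doesn't run; remove the # to restore it.
    #57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet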

Open Questions & Resources

The screenshots below show today’s disk usage and RAID status:

[Screenshot: filesystem usage (df)]

[Screenshot: RAID status (/proc/mdstat)]

I still need to go through the process of expanding the volume. I also want to go through the process of replacing a drive in the RAID array. I did disconnect and reconnect a drive to test the RAID, but it was the same hardware.
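When I do test a real replacement, the sequence I expect to follow is the standard mdadm one. A sketch, assuming the failed member is /dev/sdf1 in /dev/md1:

    # Mark the drive failed (if the kernel hasn't already) and pull it out
    sudo mdadm /dev/md1 --fail /dev/sdf1
    sudo mdadm /dev/md1 --remove /dev/sdf1

    # After physically swapping the disk, partition it to match and add it back;
    # the array then rebuilds onto the new member (or the hot spare already has)
    sudo mdadm /dev/md1 --add /dev/sdf1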

I’ve had some issues with Samba (my use of it is limited to file sharing). For one, it seems the only way to stop and restart it is to kill the processes and then restart them (which seems risky) or to reboot; none of the procedures I’ve found work for me. Luckily, now that it’s configured I don’t need to make changes, but in the beginning I was rebooting each time my config changed (including adding shares).
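For the record, here’s what I understand should work on 10.04, where smbd and nmbd are upstart jobs, along with the blunt fallback I described (neither has been reliable for me, so treat this as a sketch):

    # The documented way
    sudo service smbd restart
    sudo service nmbd restart

    # The blunt fallback: kill the daemons, then start them again
    sudo killall smbd nmbd
    sudo service smbd start
    sudo service nmbd start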

I’ve also had an occasional problem with Samba when something goes wrong at a PC while it’s copying files to the server. For example, a copy was in progress when I lost power and my unprotected (no UPS) iMac died mid-copy. I had to connect to the server from a terminal and use sudo to delete the directory that was being copied to, since my own ID didn’t have permissions. (I could have just modified the permissions, but the copy was bad so I deleted it.)

Other than that I’m happy with performance. Knock wood, cross-fingers, wave rubber chicken, etc…

My biggest problem is that the amount of data on the server has gotten ahead of my testing and ability to recover quickly if there’s a problem or if I decide an Ubuntu Home Server isn’t the way to go. I no longer have the ability (or hardware) to keep all the files on another server that I can quickly switch to. So I’m more committed to this path than I was to Vail.

Some additional resources I found helpful:

This is another Ubuntu software RAID article, based on Ubuntu 9.10. It includes some screenshots so you’ll have an idea of what to expect.

Still on my todo list is a review of the disk SMART tools discussed here.

While not mentioned in this article, I use a CyberPower UPS on the server and their Linux software is here.

2 thoughts on “Ubuntu Home Server: The OS Install and RAID Configuration”


  1. I'm running a similar setup. I use SpiderOak for offsite backups and it works great. I have a mix of Macs and Win7 PCs at home, so I use the built-in Windows backup (Professional only) and I've got the Ubuntu server acting as a Time Machine device, which handles the Mac backups. Try doing all that with Windows Home Server!!!

    Oh, and on top of that the server is running ZoneMinder so I have a full CCTV system too 🙂

    1. @Steve, first I've heard of SpiderOak and it looks pretty cool. It's on my short list to take a look at. Your setup sounds sweet.
      Thanks, Ray
