As I mentioned in my last Trail Log, the latch on the drive bay holding my OS mirror broke, and a drive popped out. The mirror worked as expected and things kept running.
I was able to replace the cage and rebuild the mirror. But this raised the question: what if the controller had failed, or the OS had become corrupt? I'd yet to test a server restore. It's a long weekend and now is as good a time as any to test. If I need to reinstall and manually recreate users and shares, I have the time. Recreating shares and users would never be fun, but at least it would be at a time of my choosing.
No sense tempting fate, so I did check to make sure that the recent backups completed without error, a luxury an unexpected failure won't allow. To simulate the failure I'd delete the mirror and recreate it from scratch, including an initialization. While it's the same hardware, it would be as bad as starting with a new controller.
This is how I have the server backup configured:
The backup goes to an external USB drive and runs at noon and 11PM. The backup has been running reliably and not reporting any errors. So I was ready to check the reliability of those reports.
The restore wasn’t straight-forward but it wasn’t too complicated and it did work. After the restore I had a working server with all my shares and users.
Some tips from my experience:
I needed to recreate my install configuration. When I installed, I only had the system drive connected. When I tried the restore with all drives connected (excluding all but the system mirror as a restore location), the restore process stopped, saying there were no suitable drives for the restore. Removing power from all but the OS drives solved the problem. I suspect this is because, with all the drives connected, the OS mirror wasn't seen as the primary drive (just like during an install with all the drives connected).
The repair process didn't always find the system backup on the external drive on the first scan. Sometimes I had to force a rescan before it was found. If the scan finished quickly I knew it hadn't looked hard enough and told it to look again. Annoying, but it just took persistence.
I still needed to load the drivers for the OS RAID controller before it could be found. So any drivers needed for the original install will be needed for the restore, although once the restore is done, whatever drivers were installed on the server will be used.
The restore itself was quick, taking less than 15 minutes once the extra drives were disconnected.
The first reboot after the restore failed with a bootmgr not found error. But it’s been fine since then.
The times displayed for the image backups (indicating when each backup was made) were GMT -8, which is not the same as my server (GMT -5), so the times appeared a bit off until I noticed the offset and realized why. (Redmond-centric, I guess.)
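The offset arithmetic is easy to sanity-check. A quick sketch (the timestamps here are made up for illustration):

```python
from datetime import datetime, timezone, timedelta

# The restore UI appears to display backup times in GMT-8 (Redmond time).
redmond = timezone(timedelta(hours=-8))
# My server runs at GMT-5.
local = timezone(timedelta(hours=-5))

# A backup displayed as 8:00 PM in GMT-8...
shown = datetime(2011, 5, 28, 20, 0, tzinfo=redmond)
# ...actually ran at 11:00 PM local time, matching my backup schedule.
actual = shown.astimezone(local)
print(actual.strftime("%H:%M"))  # 23:00
```

So a three-hour discrepancy in the displayed times is exactly what the GMT -8 versus GMT -5 difference predicts.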
The restore is back to when the backup was made, so any data that changed since then (such as for add-ins) will be lost and have to be recreated.
So, for that last bullet point: Cloudberry saves its information to the C: drive so after the restore I did a repository sync to make sure it was all up to date. But the restore itself worked fine. All my backup plans and repositories were still configured.
I don't have any other add-ins, but any that maintain data on drive C: would need to be refreshed.
The bottom line: I'm happy to know the server backup actually works. Not that I doubted Microsoft (no, really), but it's nice to know it works with my hardware and my configuration.
Newegg has the HP MicroServer on sale through May 31st. Promo code SEVMEM15 will take another 15% off the $300 price. I think these servers are a great value where you value space and economics over CPU power. I’ve already run through several OS installs on the one I have and fell in love with this little box.
The Memorial Day holiday is upon us here in the U.S., signaling the start of summer. The local forecast promises the weather will be perfect for barbeques and picnics. Between flipping burgers and enjoying the great weather there might be some time for tech. If so, I’ll continue on with these recent projects.
Windows Home Server 2011
It seems like forever, but it’s only been since April when I upgraded to WHS 2011 Gold and overall I like it. I’m still using it primarily as a file server.
Performance has been good (not great). I’ve noticed occasional slowness but I haven’t found a cause. It may be my PC or network and unrelated to the server. I need to spend some time benchmarking/testing the drives but so far the problems vanish before I get annoyed enough to dedicate time to the problem.
Ubuntu was faster, but I think that's due more to configuration, and I could change WHS 2011 to match but may not want to. On my Ubuntu Home Server the drives were all in one big RAID 5 array. So despite the data-protection overhead, reads and writes were spread across all the spindles. In WHS 2011 most of my drives stand alone and contain the entire contents of a share. So while I might occasionally do concurrent copies from different drives, most of the time it's one spindle handling all the disk I/O.
I’m not sure I’m put out enough to rebuild using RAID 5. The RAID would be through the WHS OS so would be comparable in configuration to my Ubuntu setup.
I have had one low tech problem. The other day I walked into the room to see one of the two 2.5” OS drives had popped out. It was part of a mirror so the server was still running. In this case the mirror saved me, but also caused the problem. Somehow the cage latch broke so the drive popped out. The RAID kept things going but I’ll need to replace the drive cage (with a different model).
Project Web Server 2.0
While hopefully unnoticeable, this website is running on a new server. Still a VPS at Linode, but new.
I was running an old version of Ubuntu that was out of support and I’m paranoid about security updates so it was time to do something. I reviewed my OS options and decided to go with Ubuntu 10.04.2.
So I fired up a new VPS and copied my server over. This did require some downtime, the first in over 300 days. The server is small, so it only took about 5 minutes to clone and get it back online. Then I did the OS upgrade on the new server. Once I handled the inevitable upgrade issues, I changed the DNS to point to the new server.
I messed up the DNS change by forgetting to click a button to actually push out the change for this site. So when I shut down the old server the site went down again for a couple minutes. Once I had the old server running again and realized what I did (or didn't do), I clicked the button and waited another couple days to make sure the DNS change propagated everywhere before shutting down the original server.
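How long "a couple days" needs to be comes down to the record's TTL: a resolver that cached the old record just before the change can keep serving it until the TTL expires. A back-of-the-envelope sketch (the 24-hour TTL is an assumption for illustration, not my actual zone setting):

```python
from datetime import datetime, timedelta

# Assumed TTL on the A record; check your own zone for the real value.
ttl = timedelta(hours=24)

change_pushed = datetime(2011, 5, 27, 9, 0)  # when the DNS change went out
# The worst-case cache expires one full TTL after the change.
safe_to_retire = change_pushed + ttl

print(safe_to_retire)  # 2011-05-28 09:00:00
```

Waiting a couple of days gives comfortable margin over a one-day TTL, plus slack for the occasional resolver that ignores TTLs.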
Now that that’s sorted out I’m rebuilding the “old” server again. Maybe it’s my Windows background, but I always prefer clean installs over upgrades so I want to do that. Plus there’s some plumbing changes I want to make and it will be safer to do them before the server has any sites on it.
The latest addition to the home data center was the HP MicroServer. It's become my favorite piece of hardware, ever. I don't think it's a secret that I write articles in advance and schedule them to be published. By the time my HP MicroServer trials article was published, I had a second one on order, this time with a plan.
As for the first one, it's still running Microsoft Small Business Server 2011 Essentials. It's also still taking its first backup from my Windows Home Server. It's taking a long time, but then I'm doing two backups totaling over 10 TB. One is to the Drobo and the other is to the 9 TB scary RAID drive so I can test that out with a bunch of files.
The slow file copies bring up…
I think the slowness is due to backups just being slower in general (due to error checking, file tracking and so on), but my home network needs work, as the two servers are two switch hops apart despite being in the same room. So I also ordered a 24-port switch to consolidate everything into one switch. Twenty-four ports is clearly more than I need, but at $140 the well-reviewed D-Link DGS-1024D was a better value than smaller switches. If the "Green" moniker is more than marketing, it may use less power than my current daisy chain of switches. I'm hoping that the single switch will reduce latency and improve performance. But if nothing else, it will reduce the power cable tangle.
I’m going to be looking at the second HP MicroServer as a router/firewall. My network is small so I think the server will be able to handle it. The main problem will be the number of network ports (1 for management, 1 for the cable modem, 1 for the LAN). I could probably live without the dedicated management NIC but it’s preferred. Also, today I have cable & DSL which would require a second WAN connection, but I’m hoping to drop the DSL once I get a better handle on usage. Four ports is doable, but would be a hassle so I’ll hold off on that until I do some more testing.
Windows Home Server V1 Countdown
My Version 1 Windows Home Server still lives, but the end does approach. Currently it lives as a backup server. It wakes up in the morning just before the WHS 2011 backups run and then goes back to sleep when they’re done.
But once the backups are moved over to the MicroServer I'll shut this box down for a cleaning, testing and rebuild. While the server has been reliable for a while, every time I open it up for a change I seem to have a different problem when I'm done. So the last bad hard drive is still in there. It might be a bad port or cable, but I've been afraid to open it up again. I'm leaning toward resurrecting Ubuntu Home Server on this rather than keeping WHS v1.
The June Quest
I've got three pretty significant projects underway. The HP MicroServer project will be a lot of fun and something I could get lost in for days.
There’s the network rebuild project. The new switch should be easy and clean up my shelf space if nothing else.
But then there’s the router/firewall which I’ll dub MicroRouter since it will be on a MicroServer. I had trouble getting my DSL to work with Untangle but I don’t want to spend a lot of time fighting that. I’m going to try other software (and Untangle again). While I may dump DSL, the slower DSL is suitable for testing and I can use it while everything else stays online and segregated from my testing.
Then there's the web server rebuild. Another fun project, but one that suffers from scope creep as I try different things. The longer I take, the more it costs me, since I'm running two servers, so there's an incentive to focus and get it done.
I bought the HP MicroServer after reading some reviews and listening to a podcast that intrigued me, but without any real plans for it. This server seems perfect as a test box to try things out. Small and low-powered, but with potential.
I spent time this past week trying different OS’s and configurations and ended up with a decision. At least one for the next couple of months.
As I researched the server I noticed there were a number of people doing (or planning) significant upgrades such as adding RAID controllers and other hardware. While I certainly understand the desire to do something "because I can", for me any significant upgrades or modifications negate the benefits of this box. I did max out the memory, since I typically install the maximum memory, or as much as I can afford. You can never have too much RAM (imo). I knew I'd be adding hard drives, but I don't want to buy any other hardware for this.
I ended up using 3 TB drives in the final configuration, although that was a last-minute change when they went on sale. Most of the testing was done with 2 TB drives.
The server does support RAID through the motherboard, but it's "fake RAID" rather than true hardware RAID. Only RAID 0 and 1 are supported. RAID 0 is striping and is done for speed, while RAID 1 is mirroring. Hot swapping is not supported. Having been burned by motherboard RAID too often, I'm not even trying the on-board RAID.
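A toy sketch of the difference between the two levels (just the block-layout idea, nothing like real controller behavior):

```python
# Write a sequence of blocks to two hypothetical drives.
blocks = ["b0", "b1", "b2", "b3"]

# RAID 0 (striping): alternate blocks across the drives.
# Reads and writes hit both spindles, but there's no redundancy.
stripe_a, stripe_b = blocks[0::2], blocks[1::2]

# RAID 1 (mirroring): every block goes to both drives.
# Either drive alone holds everything, at the cost of half the space.
mirror_a, mirror_b = list(blocks), list(blocks)

print(stripe_a, stripe_b)  # ['b0', 'b2'] ['b1', 'b3']
# Lose one striped drive and half of every file is gone;
# lose one mirrored drive and nothing is lost.
```

That asymmetry is exactly why my OS mirror survived a drive popping out, and why a striped array never would.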
All the Microsoft software was from current Technet downloads while the Linux software was the latest release of that version from their website.
My first install was Citrix XenServer. I added three 2 TB drives to max out the internal drive bays. The installation went without incident and XenServer was quickly running. By default Xen set up a separate local repository on each of the 2 TB drives.
I installed a Windows 7 VM just to make sure I could and it booted fine.
Since the box is a low-power server, it's not really something I'd expect to work well with numerous VMs. I could probably keep a few test VMs to fire up as needed, but that's all I'd expect. I installed Xen first since I knew I'd be moving on from it. I wouldn't rule out using Xen to make this a low-end test platform for a VM or three, but it's not my first choice.
I moved on to the next OS which was…
Windows Server 2008 R2
Again, the Windows Server 2008 install went without incident. No special drivers needed during the install or post-install. HP does have recommended Windows drivers on their site (actually, they link to the AMD website). I stuck with the drivers bundled with the OS.
Once I got all the updates installed I used the software RAID in Windows Server 2008 R2 to create a RAID 5 array with the three 2 TB drives.
Everything appeared to work fine. While I didn’t benchmark, file copies and server response was acceptable.
Then it was on to…
I wasn't able to install CentOS 5.6; the installer told me it needed some drivers. Since I had already eliminated CentOS from consideration for my web server, I immediately moved on to…
Ubuntu 10.04.2 LTS
This install went fine. I configured the three 2 TB drives as a RAID 5 array and configured for LVM during the installation.
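RAID 5's single-drive fault tolerance comes from parity. A toy XOR sketch with three "drives" (real arrays rotate parity across the disks; this just shows the recovery idea):

```python
# One stripe: two data blocks plus an XOR parity block.
d0, d1 = 0b1011_0010, 0b0101_1100
parity = d0 ^ d1  # written to the third drive

# If the drive holding d1 dies, XOR what's left to rebuild it.
recovered = d0 ^ parity
assert recovered == d1  # the lost block is back
```

It's also why RAID 5 costs one drive's worth of capacity: three 2 TB drives yield 4 TB usable, versus the full 6 TB a stripe would give.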
All seemed fine and performance was acceptable. I decided to try hooking my Drobo up to the server using drobo-utils. It did work as expected; the problem is that expectations are low, since there are limitations with the Drobo under Linux. The Drobo formatting was painfully slow, and I moved on to Windows Small Business Server 2011 Essentials before it finished. Check out drobo-utils if you want to run a Drobo under Linux.
Windows Small Business Server 2011 Essentials
I saved this for last as it's the one I expected to keep on the server. Since it's based on Windows Server 2008 R2, which had already installed fine, I didn't expect any problems.
This is where I changed up the configuration a bit. The HP specs list an 8 TB disk max (4 x 2 TB). I'm using the original 160 GB drive for the OS and then three 3 TB drives in the internal bays. I haven't had any problems with them so far.
I'm not a fan of RAID 0 (striping), but to test the limits I used the three 3 TB drives to set up a 9 TB RAID 0 array (software RAID through SBS 2011). I consider its nickname appropriate – scary RAID. So far it seems fine; I've been reading from and writing to it fairly regularly without any errors. It appears the specs were based on the math at the time they were written rather than any actual hardware/BIOS limitation, four bays with 2 TB drives being the max at the time. I should mention I upgraded the BIOS to the latest version before I started all this, but didn't see any mention of drive support.
I also moved my Drobo from my Mac Mini to this box, connecting via USB. The Drobo has never been fast, even on FireWire and it will be even slower on USB. But it will give me a good sized file repository for local (as in in-home) backups. Not something I need speed for. The Drobo dashboard does install fine on SBS 2011.
I haven’t spent any real time with SBS 2011 itself, concentrating on testing the hardware for now. I did set up backup for one test virtual machine. It seems to work fine but I’ve yet to do a restore.
Plans for the HP MicroServer
I’m currently backing up my Windows Home Server 2011 to both the Drobo and scary RAID, mainly as a way to load up a lot of files and stress the server.
I plan to rebuild/rework my current WHS v1 box which now serves as a backup destination for my WHS 2011 files. So I’ll be using the Microserver as a backup destination during this rebuild. Whether SBS 2011 sticks around after that remains to be seen.
It does have a better-than-even chance of sticking around. I have the Office 365 beta (which I've yet to use) and it's supposed to integrate with SBS. So I'll want to try that out, if only out of curiosity.
I’ll probably change the RAID 0 array to a RAID 5 array, although having that 9 TB of space is sweet. Less sweet would be losing that entire 9 TB when just one of those drives fails.
I'm trying to avoid spending more money on this box, but if I do end up using it for something that requires reliability, I'll consider buying a 160 GB drive to match the one it has and mirror the system drive, and then mirror the remaining two 3 TB drives. As an alternative I could skip the OS mirror and just rely on backups should the OS fail. This would significantly increase my space for data.
It should be noted that 3 TB drives will have a problem with the built-in backup software as there’s a 2 TB limit for backups. Because of the way I do backups this isn’t a concern for me.
Overall I'm happy with the flexibility of the box. I never expected performance and I'm not considering using it for anything CPU intensive.
Other than CentOS, every install was straightforward and free of hardware issues. A Google search shows that people are using CentOS on this hardware, so there's nothing inherently wrong. HP has a CentOS support pack and the answer is probably in there, but I didn't pursue it in the interest of time.
I'm not sure I would change anything with the HP MicroServer. There are a few things that might be nice to have, but I wouldn't pay more for them. It's kind of nice to have the limitations of the box to keep the usual, inevitable project creep in check.
I recently wrote about the most expensive tech podcast that I listen to; here are the other tech podcasts I regularly listen to. While some of these have both video and audio versions, I stick with audio since I listen in the car or while working at my desk.
The previously mentioned BYOB podcast is one that I pay attention to when I listen and frequently replay it or take notes. It’s also one of the few podcasts where the back catalog is worth revisiting. The only other podcast in my list with that distinction is Security Now!.
Security Now! is on Leo Laporte’s TWIT network and is hosted by Steve Gibson along with Leo. See GRC.com for more info about Steve. I’ve been listening to the weekly podcast since the beginning and the podcast has changed with the times.
These days every other show is a listener Q&A. The shows begin with a review of the week’s security news and exploits. Then Steve dives into a topic in detail. Steve does a good job of explaining technical topics. Not everything is strictly a security topic. For example, there was a series of shows about the foundations of computers.
Steve also provides a transcript of all shows which is useful to find information from past shows.
The Home Server Show podcast is naturally about Windows Home Server, but does include more general home server topics. I've been listening since the early days of the show, early enough to have been able to catch up on past shows.
The show is weekly and runs an hour or longer. They cover Windows Home Server news at the beginning of each show, then dive into a topic or interview. They also cover WHS-related areas such as Media Center and how a WHS fits into the home. They've recently added an "off-topic" section where the hosts talk about non-WHS stuff.
While it’s certainly WHS focused I like the show because it’s not laser focused on Microsoft WHS. Much of the discussion can be applied to non-Microsoft solutions. Examples include discussions on RAID, DVD ripping and Drobo.
The Home Server show spawned the BYOB podcast and there’s some crossover between the two of them. BYOB generally spends more time getting into the technical discussions.
The only remaining Mac podcast I listen to is The Maccast, hosted by Adam Christianson and "about all things Macintosh". Although these days a better tagline would probably be "about all things Apple".
It’s a weekly podcast that’s usually about an hour. Adam provides a mixture of news and tips.
Windows Weekly is a weekly (duh) podcast with Leo and Paul Thurrott. While it is Windows-centric, Paul's topics cover technology in general. He usually relates them to Windows users, but topics can include Apple hardware, Android phones, non-Windows tablets and more.
I’ve been looking for a Linux podcast to add to my rotation. Linux Outlaws is a recent addition to my list. I’ve only listened to a couple shows so far but it’s been interesting and I’ll probably stay subscribed and go back through older shows to see if any seem interesting. The format is news, reviews and interviews. The show has the explicit tag in iTunes due to language.
Despite the show graphics and website having a western theme (as in cowboys) the hosts are from the UK and Germany.
The above podcasts are the ones I always listen to. The rest of these podcasts, while tech related, are in my “fluff” category. I may listen to each show but much of the time it will be background noise while I do other work.
I listen to two daily tech news shows – Buzz Out Loud (BOL) and Tech News Today (TNT). BOL is from CNET while TNT is from TWiT and is hosted by a BOL alumnus. Both are less than an hour, depending on the news that day. There's some overlap and I should probably pick one, but I can't decide. I'd say BOL mixes in more light news and tries to blend entertainment with the news.
Leo Laporte's TWiT network has numerous podcasts and I listen to a few. They are spotty, in my opinion. Leo is one of the hosts on most of the shows I listen to. While they are good, he has a tendency to go off topic (he even has a "rat hole" jingle). Extremely annoying is his tendency to cut off his co-hosts (sometimes it seems like he's not paying attention), not letting them finish a thought I was interested in. So when I run short of time, these are the podcasts most likely to be ignored.
MacBreak Weekly comes and goes from my playlist. These days I can easily overdose on Mac news and rumors. Add to that Leo's tendency to go off topic and I often skip this one. The MacBreak format of a bunch of people sitting around having an unscripted discussion can be interesting to listen to, but sometimes I just don't care. They do have "picks" at the end of each show, but beyond the picks I rarely learn anything from the show.
This Week in Tech has the same format as MacBreak Weekly, but covers all of tech. Considering the other shows, this one frequently drops from my playlist if I don't get to it before the next episode is released.
FLOSS Weekly is another TWiT show, but without Leo. They cover a different Free Libre Open Source Software (FLOSS) topic each week. Many topics aren't about software or projects I'll ever have a chance to use, but I like listening to the discussion.
Tekzilla is one of the few video podcasts I watch. It's tech focused and covers a little news and a lot of tips and reviews. They have a short daily tip, but primarily produce two shows a week of 30-45 minutes each.
That’s about it for tech podcasts. I’ve been cutting back the ones in my feed and this is what’s left. Any that you’d recommend I should listen to?
As I previously mentioned, it’s time for some significant web site and server updates. I already looked at other web hosts and decided to stay right where I am, with Linode. The next step is to decide what operating system I want. With Linode, they provide a virtual private server (vps) and a selection of ready-made base OS images.
I'm currently on Ubuntu 9.10 and need to move off of it since it's reached end of life. The obvious choice is simply to upgrade to the next version of Ubuntu. But let's not stick to the obvious.
This is my web server so stability, reliability and security take center stage. Ease of use goes a long way to achieving those goals but isn’t a requirement in itself. Ease of use is often the enemy of security and reliability.
CentOS has come to my attention via a FLOSS Weekly podcast and jumped to the top of my list for consideration. CentOS describes itself as:
CentOS is an Enterprise-class Linux Distribution derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor. CentOS conforms fully with the upstream vendors redistribution policy and aims to be 100% binary compatible. (CentOS mainly changes packages to remove upstream vendor branding and artwork.) CentOS is free.
That “North American Enterprise Linux vendor” is Red Hat.
CentOS 5.6 is the latest release and CentOS 5.x has an end-of-life (EOL) date of March 31, 2014. So I’d have about 2 1/2 years before being forced into another upgrade.
CentOS 6 should be out in early June, which would give it a distant EOL date. But reading the CentOS forums would make anyone question its future. There seems to be a lot of unrest in the CentOS community. Most of the complaints are about delays with CentOS 6 and the lack of openness about them. I don't know enough about CentOS to judge whether the complaints have merit or are the ramblings of impatient children. It's not like CentOS 5 is broken; in fact, they had just released CentOS 5.6 to keep up with Red Hat's 5.x releases. While there's no denying this had a negative effect on my choice, it wasn't a huge one, as I suspect it's something that will blow over. Worst case, since so many web hosts use CentOS, it would be a long slow death if the comments are true, and it would be years before I'm affected.
Benefits of CentOS include reliability and stability. The cost of that stability is that the software typically doesn't get the latest Linux technologies until the next major release, which could be years away. For a server OS this trade-off is clearly a benefit; I'm not looking for bleeding edge in my web server.
Another intangible benefit is the experience I would get using a Linux distro popular with business.
Ubuntu 11.04 was just released. I had thought that all x.04 releases were long-term support (LTS) releases, but found this wasn't the case, as 11.04 EOLs in October 2012. Rather, an LTS version is released every two years. I don't want to go through another upgrade in a year, so Ubuntu 11.04 is out of consideration.
Ubuntu Server 10.04 LTS is supported through April 2015 which means it will outlive CentOS by over a year. The longer I can put off a forced upgrade the more I like it.
Ubuntu has served me well so far and I’m familiar with it, so that’s a plus. Documentation is plentiful due to Ubuntu’s popularity. More importantly, I’ve found more Ubuntu specific documentation for things I want to do than I’ve found CentOS specific docs.
None of the other OS’s jumped out as having a reason to consider them.
It seems that Ubuntu 10.04 is the right choice. It's a newer version of what I have now, so the migration/upgrade should be easier. Plus it's got the longest life of what's available today. I also can't overstate the benefit of having more Ubuntu 10.04-specific documentation.
On the other hand, CentOS offers security, stability and reliability, which are strong reasons. Another draw is that I'd have something new to play with and would get experience with an enterprise OS.
In the end I decided to stick with the logical choice and go with Ubuntu 10.04 LTS.
I’ve already cloned my current server. I then did an in-place upgrade of the clone to get to Ubuntu 10.04 LTS. I changed DNS to redirect the sites to the new server, but if I find problems I can easily switch the DNS back. This provided a quick way to get off the obsolete Ubuntu and if you’re reading this it worked. I’ll start building the new server once the DNS has time to fully propagate and I know all is well.
I’ll soon start building the new server. I’ll do some more research and testing of CentOS but will rebuild the original server on 10.04 LTS. Existing documentation and having more packages already in the repository tipped the scales in Ubuntu’s favor.
Why build a new Ubuntu 10.04 server if the clone is already upgraded to 10.04? I want to have a nice clean server for the future. I’ve always been of the opinion that flattening the OS every couple of years is a good thing.
There’s still a chance my urge to try something new will have me try CentOS over the logical choice of Ubuntu 10.04. Should I give CentOS a deeper look?
This is the second article in the series about my latest web server project. Find the other articles under the Web Server 2.0 tag.
One of the podcasts I listen to regularly is the BYOB podcast (BYOB = Build Your Own Box). The podcast is free, but there’s rarely a show that goes by that doesn’t trigger some techno-lust which will consume my time or money. True to form recent shows have triggered one project and one hardware purchase. The project has been consuming my time, while the hardware consumed a couple hundred dollars along with my time.
The favorable coverage on the podcast tipped the balance and I bought an HP MicroServer. It was already on my radar as a nice compact, low-power server. I've installed Windows Server 2008 R2 and Citrix XenServer on it successfully. I'm still deciding what I want to do with it. I'm leaning toward installing Ubuntu Server on it and using it as a test & file server. I haven't tested it myself, but from what I've read it will support 3 TB drives.
I’m still debating how I want to use this box. At least for awhile I’ll be testing different OS’s and hardware on it. So deciding what to do with it has become a mini project in itself.
The bigger project is what the BYOB guys called a “Super Router”. I’ve been keeping track of possible home network changes for awhile, and it got more intense when I bumped against my bandwidth cap and added DSL to my cable ISP. When they talked about the super router on episode 34 a light bulb went on and I slapped my forehead. It really was the solution I wanted.
I ended up rebuilding my old PC, now that all its parts were in the parts bin since everything had been upgraded. All I had to buy was a couple of new network cards, so at least the expense here is mostly time, not money.
I did install Citrix XenServer on the box and created a pfSense virtual machine. Unfortunately I couldn't get pfSense to work with my DSL. It did work with the cable connection, but I really want to use the slower DSL for testing. So I'll eventually return to the problem and try to get it to work. But at least I know I have the necessary hardware.
I want to keep it virtual so I can easily swap configurations and test different software. For this I want to use bare metal virtualization software as I would expect better performance and reliability.
The router project will keep me busy for awhile. Besides pfSense I want to look at alternatives. Luckily I have a several months before my DSL promotional pricing ends. I should get this running and get a better handle on my bandwidth usage before regular pricing kicks in. While the DSL problem is frustrating (it should work!) and may be something obvious I’m missing it’s an otherwise enjoyable project. I can easily get lost in the settings and testing for hours.
As for the HP MicroServer, it's going to be a nice compact test box that doesn't take much shelf space. While I could add external drives, that defeats the purpose of it being compact. I broke with tradition by not having a specific purpose in mind when I bought this. Despite that, I don't have any buyer's remorse.
Hopefully the BYOB guys will stick to general tech discussions until I get these projects done. If they can't do that, then they can stick to graphics cards. I'm not a gamer and can't get excited about graphics cards, so it's a safe topic. Of course, after their latest episode I'm now fighting an urge to upgrade my SSD.
I recently went through and evaluated various hosting options for my sites. I’ve got no complaints with my current host, Linode. As the screenshot up above shows, the server has been running for nearly a year. They’ve had a couple of network issues over the past year, but I honestly couldn’t remember if they affected me, so I checked Site Uptime: the last time it showed the site down was Aug 7, 2010. I’ve had more problems related to my own home router or ISP. So why the review? It’s time for some fairly significant upgrades, which will be made easier by building a new server and then migrating the data (at least that’s my current thinking), so now’s a good time for a move.
What I Have Now
Linode provides a bare-bones VPS (virtual private server). They provide the base OS; I do everything else. This provides great freedom and I’ve learned a lot, but the downside is that I have to do everything myself, including all troubleshooting. There’s no control panel; everything is done from the command line. I could install a control panel if I wanted to. There are several open source options available, so the financial cost could be low. While a control panel could help me get some things up and running quicker and more reliably, I’d yet to really explore those options until now.
Since I have no complaints about Linode, from either a performance or cost perspective, I won’t be looking at other bare-bones options.
What I Looked For
I like having the VPS, so I’d be sticking with that. The main thing I’d want is a host that would take some of the work off my plate and allow me to do things quicker. So I looked for managed VPS providers, or those that offered a control panel in their price. I found that many managed VPS services also required a control panel, typically either cPanel or Parallels Plesk. This control panel requirement was especially true of the lower cost managed hosts.
The cost of a managed VPS is significantly more than I’m paying now, at least among vendors that seemed reliable. Moving to one of these would at least double my cost, so there would need to be a significant benefit to me.
I narrowed my selection down to two: Knownhost and Servint. I signed up with Knownhost because they were cheaper and provided a 30-day money back guarantee. They had things set up quickly, within 9 hours of my registering (they say between 12 and 24 hours depending on what you read). I picked cPanel as my control panel mainly because it’s more common and I used it back when I was on shared hosting. They were even quicker in processing my request to cancel my account and request a refund (within 30 minutes of my submitting the ticket to cancel).
Why’d I cancel? It’s not because I disliked Knownhost itself, and the fact they made it easy to leave is a bonus in my book. I left because I decided that a control panel and a managed server weren’t for me. I don’t spend all that much time doing maintenance and have automated many of the routine tasks. In the end, there’s little real benefit to me and a significantly greater cost. I like doing things myself but didn’t like having to figure out how cPanel was doing things. If I change my mind about that I’d be more than happy to return to Knownhost.
As a disclaimer I should mention that I looked at the server for only a day before deciding to cancel. I’m sure there were ways around my issues, but having to figure them out defeated my time-saving goal. Most of my issues were with the way the control panel sets things up. I never took advantage of the managed service to do any of my installs.
If I split up my sites into separate accounts they actually became harder to upgrade. I keep WordPress up to date using Subversion and already have the scripts to do it quickly. I could do the same on the new server, but that raises the question: why use a control panel? And generally speaking, the “managed” part of the offering doesn’t apply to things not installed through the panel.
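For the curious, here’s a hedged sketch of the kind of upgrade script I mean. It assumes each site is an svn checkout of WordPress core, so upgrading is just an `svn switch` to the new version tag. The site paths and version number are made-up examples, not my actual setup.

```python
# Hypothetical sketch: upgrade several svn-based WordPress checkouts by
# switching each working copy to a new core tag. Paths/version are examples.
import subprocess

WP_TAG_URL = "https://core.svn.wordpress.org/tags/{version}/"

def upgrade_commands(site_dirs, version, dry_run=True):
    """Build (and optionally run) the svn switch command for each site."""
    commands = []
    for site in site_dirs:
        cmd = ["svn", "switch", WP_TAG_URL.format(version=version), site]
        commands.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)  # actually perform the switch
    return commands

# Dry run: just show what would be executed for two example sites.
for cmd in upgrade_commands(["/var/www/site1", "/var/www/site2"], "3.1.3"):
    print(" ".join(cmd))
```

With a loop like this, adding a site to the list is the whole cost of “managing” it — which is why the panel’s per-account separation felt like a step backward.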
I also had issues updating plugins through the built-in WordPress update feature; I was prompted for FTP information. I suppose I could have provided it, but I don’t really like entering FTP account info into a web page (or even having regular FTP enabled, although SFTP may work). This appears to be because Apache runs under one ID (root on that server) while the files being updated are in a directory owned by another user. Again, I could probably work around this, but the more workarounds I use the less beneficial the control panel and managed service is.
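The underlying check is roughly this: before writing files directly, WordPress creates a temporary file and compares its owner to the web server process; if they don’t match, it falls back to asking for FTP credentials. A rough Python analog of that ownership test (a sketch of the idea, not WordPress’s actual code):

```python
# Rough analog of the ownership check behind the FTP prompt: files we
# create in the target directory should be owned by the process itself.
# If Apache runs as one user but the site files belong to another (as on
# that server), this returns False and a direct write isn't safe.
import os
import tempfile

def can_write_directly(target_dir):
    """Return True if files created in target_dir end up owned by us."""
    fd, path = tempfile.mkstemp(dir=target_dir)
    try:
        return os.stat(path).st_uid == os.getuid()
    finally:
        os.close(fd)
        os.remove(path)
```

The clean fix is aligning ownership between the web server and the site files; forcing direct writes without fixing ownership just trades one workaround for another.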
The bottom line is I decided the added cost, both financial and the learning curve, didn’t justify moving from what I already consider a solid host. I’ll be staying with Linode.
An added benefit of Linode (and probably many other VPS providers) is that I can set up a new server to do my build and testing. I can then either change DNS to the new server when it’s done, or clone the disk to the old server and reboot with that disk. Another option would be to change the IP address. Changing DNS would be easier, as it would eliminate downtime (at least in theory, if I don’t screw it up), although I’d probably avoid any site changes for a couple of days while the DNS fully propagates.
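During that propagation window, a quick check like the one below could tell me whether a given resolver is handing out the new address yet. The hostname and IP here are placeholders, and this only sees propagation from one vantage point — a sketch, not a monitoring tool.

```python
# Quick DNS sanity check during a migration: does the hostname currently
# resolve to the new server's IPv4 address from this machine's resolver?
import socket

def points_at(hostname, expected_ip):
    """True if hostname resolves to expected_ip (IPv4 A records only)."""
    addrs = {info[4][0]
             for info in socket.getaddrinfo(hostname, None, socket.AF_INET)}
    return expected_ip in addrs

# Example (placeholder values): points_at("example.com", "203.0.113.10")
```

Running this from a couple of different networks (home, a remote shell) gives a rough sense of how far the change has spread before flipping anything else.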
I went into this figuring the added cost probably wouldn’t be worth it, so part of me feels I wasted a few hours. On the other hand, I got this post out of it, scratched an itch, and put to rest any consideration of moving hosts.
Earlier this morning LastPass announced that they noticed some anomalies in the network traffic to one of their servers. And…
… it’s prudent to assume where there’s smoke there could be fire.
I’ve been a longtime LastPass user and fan. While I’d rather this not have happened at all, I’m an even bigger fan now. I like paranoid people protecting my stuff. I also think some of the stuff they do is pretty cool and shows a serious commitment to security. They monitor traffic in their network and noticed some abnormal traffic that they couldn’t track down.
Unfortunately their response caused the real problems. They began forcing password changes which caused a heavy load on their servers (which was probably already heightened once the news hit) and things began to grind to a halt. It appears password changes could take an hour or more to take effect, making it appear data was lost (since it wasn’t being decrypted with the right password).
I have to admit, I didn’t have any problems the few times I used LastPass during the day. By the time I got home they had changed things: instead of forcing a password change, I could choose not to change my password, or temporarily postpone the change and only allow logons from personal computers. I chose the permanent postponement. So did I permanently postpone the change?
The worst-case risk is that someone got the password hashes (the actual passwords aren’t saved or known to LastPass) and the salt used to hash them; LastPass needs to keep the salt in order to log us on. With both of those items a dictionary attack could be launched to find the password. Only passwords that matched the dictionary could be broken. I’m protected by two things:
My password is a long string of symbols, numbers, and both cases of letters. Not likely to match any dictionary.
I use a YubiKey for two-factor authentication. If my password is cracked it’s useless without the YubiKey.
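To make the worst case concrete, here’s a toy illustration of that dictionary attack: with the salt known, the attacker hashes each candidate word and compares against the stolen hash. Only passwords that appear in the wordlist fall; a long random string never matches. (LastPass’s actual scheme differs — this is just the idea.)

```python
# Toy dictionary attack against a stolen salted hash. A weak password in
# the wordlist is recovered; a long random password is not.
import hashlib

def salted_hash(password, salt):
    return hashlib.sha256((salt + password).encode()).hexdigest()

def dictionary_attack(stolen_hash, salt, wordlist):
    """Return the cracked password, or None if no dictionary word matches."""
    for word in wordlist:
        if salted_hash(word, salt) == stolen_hash:
            return word
    return None

words = ["password", "letmein", "dragon", "monkey"]
weak = dictionary_attack(salted_hash("letmein", "s4lt"), "s4lt", words)
strong = dictionary_attack(salted_hash("kT9#vQ2!xWp$", "s4lt"), "s4lt", words)
```

Here `weak` comes back as "letmein" while `strong` is None — which is exactly why the long random master password matters even if the hashes leak.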
Still, once things die down and their performance returns to normal I’ll go ahead and change my password. Can’t be too cautious. And the LastPass folks get that: they’re changing their hashing algorithm in a way that makes brute-force attacks take unreasonably long to execute.
Unlike other recent breaches in the news, this possible attack hasn’t lessened my trust in LastPass. It’s only increased it because they take their responsibility seriously.