Path Finder 5 Released

Cocoatech has released a new version of my favorite OS X file manager. This is a paid upgrade ($20) from earlier versions, and $40 for new customers. I installed the latest version through Path Finder’s own software update feature, which installed the thirty-day evaluation version. It’s pretty much a given that I’ll buy the upgrade, but it’s nice to be able to check it out first. Path Finder 5 requires OS X 10.5 or later; Tiger users need to stay with PF 4.

I’ve just installed the upgrade so I can’t say how well the features are implemented, but things I’m looking forward to are:

Dual Pane File Viewer – The drop stack is handy, but sometimes I want to compare two directories, and now I can do it side by side rather than opening a second browser window.

Network Share Browsing – It’s nice to be able to browse network shares in Path Finder itself. It was annoying to have to switch back to Finder (or, if it could be done in the previous version, it was well hidden). Screen sharing can also be started directly from Path Finder. (It wasn’t obvious to me at first – under “Shared” there are two entries/icons for each device that supports both file and screen sharing: one for screen sharing, the other for files.)

Cut/Paste of files in the same browser window – I used to swear every time I tried to do this, but since I’m now out of the habit I wonder if I’ll even notice.

There are a bunch of other smaller changes and Leopard optimizations, along with Leopard eye candy and a Cover Flow view.

I’ll be interested to see how people react to this being a paid upgrade. From what I can see there are some nice additions here, but I’m not sure the visible feature list justifies a full version bump. Requiring OS X 10.5 probably forces the version bump on its own, but is it worth paying for? Well, yes, at least for me. I’m not sure when version 4 came out, but version 4.01 was released in January 2006, so it’s been over two years. I started using it in January 2007 and every upgrade since then has been free, so I have no problem spending another $20 at this point for an updated version. It does appear this version took a lot of code rewriting for Leopard optimization and its new file-sharing abilities. No word (that I could find) about a free upgrade for recent buyers.

On the other hand, twenty bucks is twenty bucks. Since I have a 30-day grace period I’ll probably check out ForkLift, which has a 15-day evaluation version.

Hey! I’m a Rackspace Customer

Rackspace, a managed hosting provider, has acquired two companies that I use. I recently moved this site to Slicehost, and now Slicehost is a subsidiary of Rackspace. Then I saw that Jungle Disk is also now a subsidiary of Rackspace.

Jungle Disk is an online storage and backup solution that’s cross-platform. Although I bought their software long ago I hadn’t been using it until I recently started testing version 2 of their software.

While press releases and company comments about this type of thing are always positive, there don’t appear to be any warning bells I should worry about. It appears to be business as usual, but with greater ability to expand and integrate their services.

Drobo Wrap-Up

It’s time to wrap up my Drobo thoughts and observations after having my Drobo for a couple of months. I first wrote about the Drobo at the end of August. At that time I was seeing erratic performance and was less than enthused about the Drobo. Then at the end of September there was a new Drobo firmware update and I saw better and more consistent performance. (At least for a while – more on that later.)

So now it’s time to wrap this up as much as possible. I’ll start off by bringing those of you who read the earlier articles up to date, and then I’ll cover my recommendations and conclusions. Let me start by getting one thing out of the way – the Drobo was never a speed demon, so adjectives like “good”, “better” or “faster” are relative terms; I’m not putting a disclaimer at every speed reference. Even if the Drobo performed at the best speeds I’ve seen published it would still be slower than my Firewire 800 connected Western Digital MyBook drive. As it is now, I’ve connected the MyBook via Firewire 400 so the Drobo and MyBook each have their own dedicated port, and the MyBook gets performance better than the Drobo and at nearly the same level (better writes, slightly worse reads) as the best Drobo speeds I’ve seen published.

Getting Up To Date

Nothing much changed after the firmware upgrade until the weekend of the 4th. At that point, for no apparent reason, the read speed fell through the floor. I found this interesting because Joe commented on a previous post that he was still seeing poor read speeds after the firmware upgrade, although his write speeds were OK. My read speeds were even worse: I was seeing speeds under 4 MB/sec for my set of test files (mostly files of 1GB), and speeds less than 1 MB/sec in many cases. Write speeds were also down – better than reads, but erratic and around 10 MB/sec. I did the usual things like rebooting my iMac and the Drobo itself. I did the approved shutdowns by putting the Drobo in standby. I also let the drive sit unused (but running) for nearly a day in case there was some sort of maintenance going on. (Easy enough to do since I was traveling.) But read performance was still terrible. My X-Bench benchmarks were also way down (this was after the reboots):

X-Bench results for poor Drobo performance

At this point I was pretty frustrated with the Drobo and considered selling it for whatever I could get. I didn’t feel it was a hardware problem, so sending it back for a refurb didn’t seem worth the effort. Instead I decided to make one last attempt. Since it was formatted under the previous firmware I decided to do a hard reset, destroying all the data, and reformat the hard drives. This would remove all traces of the old firmware. I also removed one of the hard drives (leaving three) since it would be a while before I needed it in the Drobo. (The reset article I link to is public at this time, but the support forums and many articles are typically behind an “owner registration required” wall.)

This resulted in my Drobo returning to the speeds I was seeing right after the upgrade. Here’s the post-reset X-Bench results:

X-Bench results after the reset

This was an overall improvement of over 75%. Copying the files back was done at an average write speed of 23MB/sec (various size files including thousands of small ones).

But the problems didn’t end there. A few days later, while I was testing running iTunes from my Drobo, the Drobo did a spontaneous reboot and my iTunes library was corrupted. Luckily my backup was the original location, so I just switched back. Since I was testing I was doing a lot of copies to max out the drive, but it had only been copying for a few minutes. I guess it’s a plus that I couldn’t reproduce the problem after the reboot.

But that put me off using the Drobo to actually open files, so I moved my iPhoto library off of it. Now I just use the Drobo for backups.

Partitioning

One area worth mentioning is partitioning. If I remember correctly, the wizard defaults to a 2TB partition during setup. The problem with this is that once you get above 2TB of usable space you’ll have to deal with multiple partitions. While multiple partitions aren’t a problem in themselves, I find it annoying to have an arbitrary boundary which may require me to reorganize files. Drobo does say computer startup speed is affected by larger partitions. Even if I considered that a problem, I haven’t found it to be the case on my iMac with a 16TB Drobo partition – it’s nowhere near the sixteen minutes Drobo says to expect, really just a couple of minutes.

During the original install (well, my latest install) I picked 16 TB for the Drobo but broke that into three partitions: one 500 GB partition for Time Machine, one 500 GB partition for my iMac system disk backup image, and the rest as a 15 TB partition. My reasons are:

The 16 TB total allows me to keep adding more or larger drives without having to re-partition. Because the Drobo doesn’t pre-allocate space (it only allocates space actually occupied by files) I don’t lose any space by having the partitions. For example, my 500 GB Time Machine partition only uses 83 GB because that’s all the files need at this time.

I want the partition for Time Machine because I don’t want it to grow forever. So Time Machine sees the 500 GB partition as a limit and will go through its file deletion routine when the files reach that limit.

I use SuperDuper to clone my system disk, so having a partition dedicated to that makes it easier to use. If I thought I’d be replacing my current 500GB system disk I would have made the Drobo partition bigger. Since the space wouldn’t be used if I didn’t need it, I probably should have made this bigger to allow for future Macs which may have larger drives. If you’re curious, I haven’t been able to boot to that partition; so far the boot manager appears before the Drobo spins up. If needed, I would copy the files to the replacement disk in my system or to another external drive.

I currently have three 1 TB drives in the Drobo. If I add or upgrade drives I don’t need to repartition. I can go up to the predicted 4 TB drives before running into any sort of boundary.

The Drobo Dashboard will report what the real available space is. Most modern operating systems should report an out-of-space problem without losing any files if the space really does run out. There have also been reports that performance drops drastically when usage tops 90%.

Managing Disks

Because of the way Drobo works you lose an amount of space equal to your largest drive. Drobo classifies it as either protecting data or available for expansion but the reason is irrelevant. This is something you should think about when planning your drives.

If you fill all four bays with drives of the same size (say 1TB drives) and you want to expand your capacity you must replace two of the drives. Replacing just one doesn’t increase your available space.

If you use drives of different sizes you can start by replacing your smaller drives first. How much you gain depends upon your specific drive configuration but you can use the Drobolator to calculate the space.

In my case I used three 1 TB drives, leaving one bay empty. When it comes time to add space I’ll add a 1.5 TB drive (or whatever is a cost-effective larger drive at the time). I’ll only gain 1 TB and will be ignoring 500GB, but then when I need more space I’ll only be pulling out one drive. Since hard drive prices drop continuously, waiting until a drive is actually needed is preferred. It’s possible that when I need it, the 1.5 TB drive will cost less than a 1 TB drive costs today.
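As a sanity check on that math, Drobo’s usable space can be approximated as the total of all drives minus the largest one (this is just the rule of thumb – the Drobolator handles the exact mixed-size cases):

```shell
# Rule-of-thumb Drobo capacity: total of all drives minus the largest drive.
# Current setup: three 1 TB drives. Planned: add a 1.5 TB drive to the empty bay.
awk 'BEGIN { printf "now:  %.1f TB usable\n", (1 + 1 + 1)       - 1   }'
awk 'BEGIN { printf "then: %.1f TB usable\n", (1 + 1 + 1 + 1.5) - 1.5 }'
```

The difference between the two results is the 1 TB gain mentioned above: the extra 500GB of the new drive goes toward protection rather than capacity.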

Conclusion

On the positive side, I’m keeping my Drobo. I’ve never lost any data saved to it (if you consider the open iTunes library as not saved). On the negative side, I’m using it exclusively for backups and will no longer try to run software off it directly – basically the same situation I was in when the Windows Home Server had the data corruption bug. The speed and reliability just haven’t been there for me to run apps off of it directly.

I like the expandability and the fact that the drives don’t have to be similar (although my WHS doesn’t need similar drives either). I also like that it connects to my Mac directly.

If I had to do it again I’d give more consideration to buying a NAS, buying a second Windows Home Server (WHS) – which is about the same price – expanding my current WHS (which hasn’t gone well so far), or (more likely) just holding off and sticking with plain old external drives for a while. I realize the Drobo gets lots of good press and lots of people probably use it without a problem. But given my own experience I can’t recommend it, even though I love the concept and look forward to a stable Drobo.

None of the links in this article are affiliate links, but if you want to make a purchase (despite my experience) and support the Quest you can click on the Amazon ad below.

The OS Quest Trail Log #35: Vacation Edition

This Trail Log comes as I’m wrapping up a perfect two week vacation. Now I know “staycations” are all the rage due to the economy and gas prices, but to me there’s no better vacation than staying home and deciding what to do when I wake up in the morning (or afternoon, or evening). The weather was great which allowed me to spend a lot of time on the patio with my laptop and a beer. Naturally a significant part of my time was spent on the quest so I’ll dive right in.

New Webhost: Slicehost

The big news, at least for me, is I’ve moved the site to a new webhost. I’m now on Slicehost, having moved off Bluehost. This is a significant change for me as I now have root access to the server (well, not a full server, but a virtual private server) rather than being in a shared hosting environment. The cost is significantly higher in percentage terms, but I consider it a better value and still a reasonable price. Since I now have a production server getting real traffic, I’ll be able to expand and continue my Ubuntu Server Project articles.

With Slicehost I’m completely responsible for my server, so if there’s a problem it’s up to me to fix it. Because of that I was hesitant to move things over, and I kept delaying the cutover until I could learn more and do more testing. (A process without end.) Then my Bluehost server had a seven-hour outage (having had intermittent very short outages for a couple of weeks). This prodded me into making the change, especially since I’d just done a full set of backups in preparation for another test. It was time to bite the bullet. My experience and the process used are covered here.

I’ve since had to do some tuning, and I’ll provide a more detailed review in the future, but suffice to say I really like Slicehost and managing my own server. My own experience is that the website is more responsive than at Bluehost, but if you find otherwise add a comment or shoot an e-mail to the address found on the About page.

Drobo

My Drobo adventure continues. The latest firmware had improved my performance but just as my vacation began it dropped drastically and performance was terrible. I should probably do a more complete post to wrap up my Drobo experience but for now I’ll just say I’m disappointed. It’s a device I bought because I expected simplicity yet I’ve expended considerable time and effort working with it. I ended up doing another complete reset and formatting which restored performance to previous levels. The one “change” here is that the Drobo had been partitioned and formatted using the older firmware. Now it’s all Firmware 1.2.4.

But that wasn’t my only problem. After the reset/format I was testing running iTunes off the Drobo. The Drobo did a spontaneous reboot and corrupted my iTunes database. It’s in the same UPS as my iMac so it wasn’t a power problem. Since I was testing I had the original files serving as my backup and simply switched back.

I still have the Drobo but I’m using it simply to hold backup files. No data files are ever opened directly on it. I’m considering selling it, but before I do I’d want to go through their tech support to eliminate hardware as the cause. I don’t think it’s hardware and I don’t want to deal with tech support at this time. It seems the problems always occur when I’m busy and just want things to work. While resetting the Drobo took nearly a day, it was very little of my actual time – it’s not like I needed to sit there while files copied. It’s now been stable for about 10 days.

Windows Home Server

I gave in to the urges and purchased one of the new Seagate 1.5TB hard drives for my Windows Home Server. It continues to run quietly and without complaint while serving up files.

I had a bit of good news/bad news on the WHS drive expansion front. I’d previously mentioned my failed attempt to add an eSATA enclosure to my WHS and that I’d have to send it back. There were only two things that I wasn’t able to eliminate as the cause of the problem: one was the external cage and the other was the Windows Home Server itself (or the SATA controller/external port in it). I sent the enclosure back for a replacement. After sending it back I was unexpectedly able to borrow another external enclosure. I saw the same file corruption with the borrowed enclosure, and since that enclosure worked for my friend, the WHS was likely the problem – meaning the replacement I’d be getting would be useless for me unless I got the WHS fixed. The good news comes in because NewEgg was out of stock and couldn’t send a replacement, so it’s refunding my money and I avoid the restocking fee. This gives me more freedom to decide what to do.

I really should deal with this before the warranty expires in December although it’s not a high priority in the grand scheme of things. I’ll open it up and check cables and connectors sometime soon. Maybe I knocked something loose when I upgraded memory. Being without the WHS while it’s out for repair would certainly suck. Sending it back could be a hassle. While the memory upgrade no longer voids the warranty I’m still going to want the original memory back in there. I don’t think I can trust HP (or any vendor) to send it back, especially if they go the refurb route.

This also made me think about the cost of backups and redundancy.

The Cost of Backups and Redundancy

As I was thinking about the hassle of sending my Windows Home Server out for repair (and considering living with the problem in order to avoid that hassle) I started to think about what would happen in the event of a complete failure. I have backups of all the files, so it’s not the loss of files that worries me, but rather that, like sending the server out for repair, there will be a time when it’s unavailable. My initial reaction to being without it for several days (a week or more?) was not good. But then I started to think about the cost of preventing that extended outage.

In addition to being used to back up files, it’s the primary home for my video collection. I’ve been moving my DVDs to files so I can watch them wherever and whenever I want, and the WHS is where they live. If the WHS fails I’m back to pulling out DVDs (which are now packed away in boxes), which isn’t really a huge problem, although I find it easier to find something to watch by flipping through the menus than by looking at DVDs on a shelf. I would lose the sync with my Apple TV, which is nice since it makes it easy to go through a season of TV episodes as it keeps track of what I watched. Still, not a huge deal. Also, as a couch potato it’s nice to not have to get up and swap DVDs.

But let’s say I found the temporary loss unacceptable, how would I avoid it?

Well, I could buy another EX470 and keep it as a spare. When the first one failed I could move the drives. That *should* work just fine. But that would be ~$500 for hardware just sitting there waiting for a failure.

As long as the EX470 is still being manufactured I could just buy a replacement when mine fails. Same cost as having the spare; I just don’t spend the money until it’s needed. I’m not sure how tied the install is to the hardware, but it’s Windows, so I’m assuming moving to different hardware won’t be smooth. The price for waiting in this case is the delivery time (for internet orders) or paying bust-out retail if I find it in a brick and mortar store.

Assuming I wanted to spend now I might be better off building my own WHS (replacing the HP). That way if a component failed I’d be able to just replace that component. Even with overnight shipping this probably costs less than a non-warranty repair. But this goes against the whole ease of use thing I look for from WHS. Building one would be fun, but doing it at this point is not a realistic recovery solution.

I have to say, I don’t find any of these worth the cost when compared to one or two weeks without the server. But it did start me thinking about reliability and redundancy now that more of my home life relies on computers. It also puts the server loss in perspective. It’s one thing to bake the cost of redundancy and recovery into a business when you can relate it to the potential dollars (or customers) lost. It’s another thing to compare the cost to inconvenience or the loss of leisure activities.

My Windows Home Server will fail if I keep it long enough. When it does, I’ll regret not spending more to prevent the outage, but I can look back at this post and remind myself that it wouldn’t have been worth the cost. My solution? In a couple of years something better will come along and make me want to replace it. Until then, fingers stay crossed.

That’s it for this edition of the Trail Log. Happy trails! (sorry – that’s the post vacation beer, maybe Google needs to come out with blog goggles.)

Resurrected Websites

I don’t have any sort of links page, or sidebar links, on this site (something I should remedy), so I just wanted to give notice to a couple of resurrected sites.

RUCYRIOS.COM is a website by a friend of mine. While I use WordPress to keep site setup and maintenance to a minimum, he’s coded his HTML and JavaScript by hand. The site has art and photos, humor, and commentary, along with a unique interface.

Big Waste of Time & Energy is a site I tried a while back but ended up killing off. It’s back as my blog about anything except tech.

Attack of The Hard Drives Redux

While the mess of hard drives has been cleaned up on my desk my hard drive addiction has only grown worse. I currently have one 1.5TB drive, nine 1TB drives, eight 500GB drives, a 160GB drive and a 320GB drive that I consider “in use”.

My iMac has an internal 500GB drive, along with my 500GB Western Digital My Book and my Drobo with three 1TB drives in it. Because of my initial performance problems with the Drobo it’s still pretty light on files and used only for backups. This places little stress on the drives, so performance hasn’t been a serious problem, and I want the extra protection the Drobo offers. The Drobo has a 500GB partition that contains a backup copy of my system disk that’s updated by SuperDuper every night. Another 500GB partition is used by Time Machine. I exclude large files like video and my entire iTunes library, so Time Machine doesn’t use a lot of disk space. The WD My Book contains the virtual machines for Parallels. The VMs would fit on my internal drive or Drobo, but I see better performance when they have their own drive.

My Windows Home Server has one 1.5TB drive and five 1TB drives. Two of the 1TB drives are in an external USB enclosure while the rest of the drives are internal. My attempt to add an eSata enclosure ended up being a disaster so I’m still using USB. The main use of the WHS is as a home for my Video files, currently taking up 2.7TB. I also throw my software application source files out there along with backups of my photos, music library and virtual machines. I don’t turn on file duplication anymore since everything is backed up.

Six of those 500GB drives and one of the 1TB drives are for backing up my video library and storing it offsite. I started off using older drives but had to start buying drives. I use ChronoSync to copy the files to the drives and store them outside my house as an offsite backup. Since the video files don’t change once they’re created, this works fairly well. Despite the cost of the hard drives, this is actually a pretty cost-effective way to get offsite storage for terabytes of data. I save the ChronoSync scripts for each drive, and every couple of months or so I’ll spend less than half an hour updating the older drives to catch what few changes there have been. This also allows all the drives to spin up and run for a while, which needs to be done to avoid problems that may occur if they sit unused for too long. Even if I had the bandwidth to use Amazon S3, storing 465GB (the effective size of a 500GB drive) on S3 would cost $70/mth, which is the approximate price of a 500GB drive these days. The alternative is no backup of the files and spending time recreating the videos from the source when a drive fails – assuming the source is still good at that time.
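The arithmetic behind that $70/mth figure, assuming S3’s storage rate at the time of roughly $0.15 per GB-month (transfer fees excluded):

```shell
# Monthly S3 storage cost for one drive's worth of data at ~$0.15/GB-month
# (the rate is an assumption based on S3 pricing at the time of writing):
awk 'BEGIN { printf "S3: $%.2f/month for 465 GB\n", 465 * 0.15 }'
```

So a single month of S3 storage costs about as much as the 500GB drive itself, which then holds the data indefinitely.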

The remaining drives? A machine with an internal 160GB drive and an external 320GB drive serves as a media computer in the bedroom.

The 1TB drives are all Western Digital Caviar Green drives (WD10EACS). Some customer reviews indicate quality problems with these drives, but I haven’t had any. Most of the reported problems are drives that are DOA or fail soon after delivery, and I’m not sure the DOA rate is any worse than other drives. The drives run about 6°C cooler than the original drives. They’re also nice and quiet. I haven’t done any power measurements, but the fact that they’re cooler indicates less energy used and also means the fans need to run less. They’re in both my Windows Home Server and Drobo. The specs aren’t the fastest for drives in their price range, but they perform fine for what I need – mostly media streaming and backups.

So the mess on the desk is gone, but the hard drives keep multiplying.

Seagate 1.5 TB Hard Drive Added to Windows Home Server

I’d recently noticed the new Seagate 1.5TB hard drives at Newegg had competitive prices. The time came for me to buy a new hard drive for my offsite backups (I back up video files to hard drives and store them elsewhere; despite the apparent cost it’s actually cost effective since HDD prices are way down). Since the cost per GB of the 1.5TB drive was comparable to smaller drives, I decided to get one and use the drive it will replace as my backup drive.

I ran the remove disk wizard for the 1TB drive in the top bay of my HP MediaSmart EX475 Windows Home Server as I’ve done before. Since the disk was 97% full and the files were being copied to a disk connected via USB the wizard was relatively slow and took about 10 hours to move all the files and free up the disk.

Then it was a simple matter of powering off the server to be safe, and replacing the drive. The Seagate 1.5TB powered up just fine. It wasn’t noticeably louder than the Western Digital Green drive it replaced. The temperature is typically 38°C, compared to the Western Digital Green drives that run around 34°C. The drives that HP delivered with the server (and that I no longer use) ran as hot as 45°C.

I can’t really speak to performance; it’s one of four internal and two external drives. All I can say is there hasn’t been any noticeable change in performance.

Due to differences between manufacturer and OS math, along with a little overhead, there was 1,397GB of space after WHS formatted the drive which is 50% more than the 1TB drives.
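The math behind that number: drive makers count in decimal bytes, while the OS reports binary GB, so before any formatting overhead the conversion works out like this:

```shell
# "1.5 TB" on the box is 1.5 × 10^12 decimal bytes; Windows Home Server
# reports capacity in binary GB (2^30 bytes each):
awk 'BEGIN { printf "%.0f GB\n", 1.5e12 / (1024 * 1024 * 1024) }'   # → 1397 GB
```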

I’m a little hesitant to go with the latest hard drive capacities before they’ve had a chance to prove themselves. But the Seagate drives have a 5-year warranty so I decided to give them a try. It’s been less than two weeks, but no complaints so far.

Moving WordPress & Mint to a New Host

As part of the move to a new hosting provider I had to move my WordPress install to the new host. This proved easier than I expected and was problem free. My site was hosted on a shared server with Bluehost and I moved to a Virtual Private Server (VPS) at Slicehost. My site is registered with a 3rd party registrar but the DNS was managed with Bluehost. If you’re moving to a shared host or your current host is also your registrar you may have to handle DNS differently than I did, but the rest of the procedures should work just fine.

Prepare the New Host

My old site was already on the latest version of WordPress with updated plug-ins so to prepare the new site I just:

  1. Configured Apache to serve osquest.com and created the basic directory structure.
  2. Copied the WordPress install files to the root directory of my new site (they were in the root of my old site – the root of the site, not the root of the server).
  3. Copied the wp-content and Mint directories (and all their subdirectories) to my new site. This includes the templates and plug-ins along with the images I’ve uploaded.
  4. Since the DNS entry still points to the old host, edited the hosts file on the server so it points the domain to its own IP address. (See the Editing Hosts File section below for more information.) This may be done differently depending on your host. Apache probably would have looked to itself for calls to the domain once I was in the website, but I wanted to be sure it worked for all software and plug-ins.
  5. Changed the hosts file on my local computer so it goes to the new host’s IP address for the domain. Since I run virtual machines on my Mac, I made the change in one VM so I could access my new site through it while still getting to the old site from the Mac itself.
  6. Installed WordPress to create the basic structure. Instructions may vary by host; I used the standard WordPress install instructions. It isn’t necessary to duplicate the database name or user ID used on the old site – I used different database and database user names without a problem. I went through the basic settings and made them match my old blog (this may have been unnecessary since I’d be restoring the DB, but I wasn’t sure).
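The hosts-file edits in steps 4 and 5 boil down to a single line. A sketch, using a placeholder IP (203.0.113.10 – substitute your slice’s real address) and demonstrated against a scratch copy rather than the real /etc/hosts, which needs root:

```shell
# Map the domain to the new server's address. On the real server (step 4)
# and local client (step 5) this line goes into /etc/hosts via sudo;
# shown here on a scratch copy of the file:
cp /etc/hosts /tmp/hosts.demo
echo "203.0.113.10  osquest.com  www.osquest.com" >> /tmp/hosts.demo
tail -1 /tmp/hosts.demo
```

On Windows VMs the equivalent file is typically C:\Windows\System32\drivers\etc\hosts.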

Backup the Old Site’s Database

I used the WordPress Database Backup plug-in to do the backup. After the database is backed up any changes made to the old site will be lost. This includes:

  • New Comments (unless you use a third-party commenting service)
  • New posts (easy enough to prevent)
  • Modified posts (also easy to prevent)
  • I use Mint for website statistics. Since these stats are kept in my WordPress database, I’ll lose any data collected between the backup and when the new site is active. I could minimize this by doing the Mint backup and restore later in the process, but I decided to take the lazy route and do it with the rest of the WP database.

I’d had problems using older versions of the WP Database Backup Plug-in in the past so I haven’t been using it. The main problem is that my host would complain about CPU usage and cut me off. But this was a much newer version than I used in the past. Plus I figured I could backup a few tables at a time if I needed to. I wanted to avoid using the export command to get just the posts and comments.

PHP does have a file size limit on restoring the SQL data that I’ll hit in the restore section. Because the new host is totally under my control I’ll be changing this limit. If you’re moving to a shared host you may not be able to change this limit which was 2MB for me on Bluehost. If this is the case you may need to break up the tables into multiple backups.

I excluded spam and post revisions from the backup – I don’t want them and there’s no need to take them with me. I also didn’t take all the tables associated with plug-ins; some were for plug-ins I no longer use. In the case of the search tables, I still use the plug-in but didn’t care about the search stats, and it was easy to rebuild the index after the move. I included the Mint tables but not the Mint backup tables.

WP Backup Core tables screen shot WP Backup Optional tables

When I clicked the Backup Now button I saved an 8MB file (compressed) to my hard drive. I extracted the file so I’d know the actual backup size, which was 12MB.

Restoring to the New Host

For this part I use the virtual machine with the edited hosts file, so that my domain name connects to the new host. I’ll be using phpMyAdmin to do the DB restore.

First I configure PHP to handle my 12MB file. Actually, since I’m the only server user I just configure it to accept a 25MB file. This is set in the php.ini file. To locate the php.ini file on your server you can create a file called phpinfo.php (or anything you want as long as the extension is .php) with the following contents:

<?php
phpinfo();
?>

Copy this file to your web server and load it in a browser. Among the data reported will be the location of php.ini. Delete the phpinfo.php file from the server when you're done so information about your server can't be easily found.
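
If you have shell access to the server, the PHP command line can report the same thing. One caveat: the CLI sometimes reads a different php.ini than Apache's PHP module does, so treat this as a hint and confirm with phpinfo() if in doubt:

```shell
# Show which php.ini (and any extra .ini files) the PHP CLI loaded
php --ini
```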

I open the php.ini file on my new server and change it to use the following values:

post_max_size = 25M

upload_max_filesize = 25M

I also change max_execution_time to 120 as a safety measure, but I return it to the default of 30 when the import is done.

After making these changes I reload Apache so they take effect.
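
The exact reload command depends on the distribution. On a Debian/Ubuntu slice it's along these lines (assumes sudo rights; the paths are illustrative):

```shell
# Reload Apache so the php.ini changes take effect
sudo /etc/init.d/apache2 reload

# or, on systems that ship apachectl:
sudo apachectl graceful
```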

I then load phpMyAdmin in my browser, select my WordPress database, and select the import tab.

[Screenshot: phpMyAdmin import screen]

I browse to the backup file I previously downloaded to my PC as the file to import, leave the rest as-is, and click “Go”. The import finishes in about a minute.
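
If the backup were too big to upload through phpMyAdmin even with the raised limits, the mysql command line would sidestep the PHP limits entirely. A sketch, with placeholder names ('wpuser' and 'wpdb' stand in for the real DB user and database):

```shell
# Import the backup directly; you'll be prompted for the DB password
mysql -u wpuser -p wpdb < wordpress-backup.sql
```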

At this point my WordPress IDs and passwords match my old blog, not what I used during the new install. I log on to my WordPress admin panel and check things out. My plug-ins are all active and seem to be working just fine. All my posts and comments appear to be there, as the total counts match the old site.

I had changed some of the privacy and discussion settings during my site testing (to not ping other sites or search engines). These weren't overwritten by the restore, so I switched them back. Everything else seems fine with no additional work or tweaking needed, so I move on to updating my DNS to make the new site active. (I still haven't finished the Mint move; I'm saving that for later since I'm not sure what will happen when it wants to validate my license.)

Changing DNS

I go to my domain registrar and point the domain to the Slicehost name servers, replacing the Bluehost name servers. I also set up the DNS records on the Slicehost side. How this is done will vary by registrar and hosting provider; Slicehost provides excellent DNS administration documentation. I don't delete the DNS records from Bluehost at this time. Even after the DNS name server change replicates, I'll still be able to access the old Bluehost site using an entry in my local hosts file.

It will take several hours for the name server change to replicate (up to two days is given as the typical delay). Even after I see the change on my end, other people may still be sent to the old Bluehost site for those couple of days. They'll still see a working site, since it's still active on the Bluehost server.
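
One way to check propagation from your end is dig (nslookup on Windows). The domain here is mine; substitute your own:

```shell
# Which name servers does the world currently see for the domain?
dig +short NS osquest.com

# And which IP address does the domain resolve to?
dig +short osquest.com
```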

Telling Mint It’s Moved

I made this change right after updating the DNS name server record, so the change hadn't replicated anywhere yet. License validation didn't appear to have a problem with that, so I could have done it even earlier. To be safe I probably should have waited until the name server update had replicated, but I didn't and it worked.

I'd already moved the Mint files themselves, so all I needed to do was edit the db.php file to connect to the new database, since I'd changed the DB name and the DB user name. After that I visited mint/?moved in my browser to let Mint know it had moved. Everything worked fine after that, and it started collecting statistics when I tested. Because DNS hadn't replicated yet, it was several hours before I started getting any live statistics.

The Mint forum has a sticky posting on how to move Mint to a new host.

I’m Done

At this point the move is complete; I just need to wait for the DNS name server change to replicate. Until then I can use my VM for testing, because its hosts file directs me to the new host. It took about six hours for me to see the change, and I saw visitors coming into the old site for about another six hours.

Once the DNS name server change replicated I edited the hosts file in my VM to point to my old Bluehost site so I could access it if needed. So far I haven’t deleted any files or modified any configurations for the Bluehost site.

The last section just provides more details on using the hosts file to pick which host I want to connect to without regard to the DNS settings.

Editing Host Files – Faking DNS

I was able to use hosts files on my PC (actually a virtual machine) and my web server to direct my domain to a different server than the “real” DNS entry would send it to. This allowed me to access the new site for testing before changing the DNS name servers themselves, and still connect to my old site after the DNS changes propagated. This is easy enough to do. On Windows the hosts file is in \Windows\System32\drivers\etc; on my server it was in /etc. Both are plain text files with the same format, and in both cases (I'm using Vista in my VM) an administrator needs to edit them. I got lazy and changed permissions in Vista so I could edit the file with my regular ID.

The Windows hosts file contains some sample entries that can be used as a template. For my website I added these entries to the hosts files on both my PC and the new server:

67.207.132.52 osquest.com

67.207.132.52 www.osquest.com

I added entries for both the root domain and the www sub-domain.
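
To confirm the hosts file entry is actually being used, a quick ping should report the address from the hosts file rather than the live DNS answer:

```shell
# Should report 67.207.132.52 if the hosts entry is in effect
ping -c 1 osquest.com      # on Windows: ping -n 1 osquest.com
```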

If you're moving to a shared host, you'll have to check with your hosting provider to see if you can set up your domain before the name servers are updated. I didn't modify (delete) my domain settings on Bluehost after I moved the site, and I was still able to access the old site with the hosts file pointing to it. If I were setting up a new site on Bluehost I would have had to go through a verification process, but it does not appear they require that the DNS servers be updated first.

Other hosting providers may simply require you to wait until the DNS servers are updated. If that's the case, you'll have some downtime between the DNS change and when you get the new site up.

Summing Up

All-in-all it was a painless move, since the backup & restore worked just fine. The key points are to make sure all the software is up to date and to maintain the same directory structure (if at all possible). The other potential problem is the size of the backup file you'll be importing, so be sure to check for this limit on your new host before you begin. Both the phpMyAdmin and WordPress import screens will show you the maximum file size.