Caching WordPress Pages with mod_expires

The final step in my WordPress/Apache optimization was to look at mod_expires. The Apache mod_expires module tells a browser how long it can cache a page. With expiration enabled a browser will load a page from its local cache rather than requesting it from the server again, at least until the page expires.

On my site the pages are relatively static; they may change when a comment is added but that’s about it. The pictures and graphics will almost never change. So I’ll give regular pages a fairly short cache time (5 minutes) and a much longer time for the graphics (30 days). My style sheets also change infrequently so I’ll make those expire after a day.

Unlike the mod_deflate settings, it’s very likely that I’ll want different settings for different sites. So I’ll be setting this in the site file, rather than server-wide like I did for mod_deflate.

I open the site file to edit: sudo nano /etc/apache2/sites-available/sitefile.com

I added the following lines in the <Directory> section of the configuration to apply the settings to my main directory and all sub-directories. I only cache my regular port 80 files; if I access the admin panel over SSL those pages won’t be cached.

<IfModule mod_expires.c>

ExpiresActive On

ExpiresDefault "access plus 5 minutes"

ExpiresByType image/gif A2592000

ExpiresByType image/jpg A2592000

ExpiresByType image/png A2592000

ExpiresByType text/css A86400

</IfModule>

 

ExpiresActive turns on the Expires headers. The next line sets a default expiration time for pages, as the name implies. In this case I use the more readable “access plus” syntax. Any page that doesn’t have an expiration specifically defined will expire five minutes after the browser first loads (accesses) it. The next four lines define expiration times for four specific file types, three image types along with CSS files. These directives use the shorthand syntax where A means “accessed” and is followed by the number of seconds to keep the document in cache.
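The two syntaxes are interchangeable. If I wanted everything in the readable form, the image and CSS lines could just as well be written like this (a sketch with the same values as above, since 2592000 seconds is 30 days and 86400 is one day):

ExpiresByType image/gif "access plus 30 days"
ExpiresByType image/jpg "access plus 30 days"
ExpiresByType image/png "access plus 30 days"
ExpiresByType text/css "access plus 1 day"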

If the expires module isn’t already active you’ll need to enable it with sudo a2enmod expires. Then reload the Apache configuration to enable the new settings.
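On my Ubuntu server that works out to something like the following (the reload command may differ slightly depending on your setup):

sudo a2enmod expires
sudo /etc/init.d/apache2 reload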

I looked at my site access logs when loading, and then reloading a page to see the “304” response codes logged on the reloads. I also used the Firefox add-on Live HTTP Headers to look at what the server was sending down. I could see the proper cache settings so I knew all was well.
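The headers can also be checked from the command line with curl; the domain and image path here are just placeholders for your own site:

curl -sI http://www.example.com/wp-content/uploads/logo.png | grep -iE "expires|cache-control"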

There’s no guarantee the browser will use the cache settings, but if it does then it could save some bandwidth, time and server resources by loading the file or graphic from the local cache on page reloads.

Apache mod_deflate with WordPress

To continue along with my Apache experimentation I decided to enable Apache mod_deflate on my server. All I run is WordPress and I probably won’t gain much over enabling compression in WP Super Cache. But at least this way I won’t be limited to the plugin and WordPress. I’m running Apache 2 on Ubuntu 8.10 Server and the configuration was a breeze.

Why use compression? To save bandwidth for myself and visitors, and less download time means faster performance.

The /etc/apache2/mods-enabled/deflate.conf file contains the line:

AddOutputFilterByType DEFLATE text/html text/plain text/xml

This compresses the text file types but leaves the others alone. I’ll probably add text/css to compress style sheets, but for now I’ll keep it simple. This covers almost everything my site serves except for graphics, which are already compressed.
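If I do add CSS later the line would simply grow another type; something like this (a sketch, not what I’m running today):

AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css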

Older browsers may have a problem with compression, even though browsers are supposed to let the server know whether or not they can handle it. A small number of my visitors (about half of one percent) report using one of these older browsers, so I’ll add an exclusion for them to the configuration. I considered ignoring the issue but since the official documentation had the syntax I used it.

Open the deflate.conf: sudo nano /etc/apache2/mods-available/deflate.conf

Add the three “BrowserMatch” lines to the file so it looks as follows and save it.

<IfModule mod_deflate.c>

          AddOutputFilterByType DEFLATE text/html text/plain text/xml

          BrowserMatch ^Mozilla/4 gzip-only-text/html

          BrowserMatch ^Mozilla/4\.0[678] no-gzip

          BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

</IfModule>

The only thing left to do was enable mod_deflate by running sudo a2enmod deflate. I can then test the compression at IsMyBlogWorking.com. Compression using mod_deflate was comparable to the results I saw with compression enabled in WP Super Cache (about 72%). It’s working fine with WordPress and compressing the pages being served.
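Compression can also be spot-checked with curl by asking for gzip and looking at the response headers (the domain is a placeholder); a working setup should report Content-Encoding: gzip:

curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://www.example.com/ | grep -i content-encoding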

More information about mod_deflate can be found at G-Loaded.

Setting Up SPF and Sender ID in Google Apps

I use Google Apps for Your Domain for my email, both my personal email and as email for the websites I run. I decided it was finally time to set up Sender Policy Framework (SPF) records and Sender ID. For the differences between SPF and Sender ID you can read this. While they aren’t the same, the similarities in syntax make the steps for setting up each identical for our purposes.

What is SPF? From the OpenSPF website:

Even more precisely, SPFv1 allows the owner of a domain to specify their mail sending policy, e.g. which mail servers they use to send mail from their domain. The technology requires two sides to play together: (1) the domain owner publishes this information in an SPF record in the domain’s DNS zone, and when someone else’s mail server receives a message claiming to come from that domain, then (2) the receiving server can check whether the message complies with the domain’s stated policy. If, e.g., the message comes from an unknown server, it can be considered a fake.

What is Sender ID? From Microsoft’s Sender ID page:

The Sender ID Framework is an e-mail authentication technology protocol that helps address the problem of spoofing and phishing by verifying the domain name from which e-mail messages are sent

It’s important to note that while I have my own domains none of my servers send email, everything I send is from an email client. I don’t need to configure any other servers, just Google’s. So I can use Google’s instructions as the starting point for setting up the records. The important piece is: v=spf1 include:aspmx.googlemail.com ~all.

Google recommends using ~all, which indicates a “soft fail” if the sender doesn’t match the record. This means the receiving service should apply extra scrutiny but not reject the email immediately. It’s up to the receiving service what that extra scrutiny is, and some of my reading indicated some services (like Hotmail) are prone to reject soft fails. The most logical reason I read was that if someone isn’t confident enough in their settings to use a hard fail, then the receiving service isn’t likely to trust anything other than a pass result. So I’ll be configuring a hard fail, which is -all (hard fail is a dash, soft fail is a tilde). I did use the soft fail during testing and you may want to do the same.

The Sender ID record is the same except for the policy statement at the beginning.

[Update July 14, 2012: As Terry pointed out in a comment, Google’s SPF record has changed to “v=spf1 include:_spf.google.com -all”.]

My SPF record will be:

v=spf1 include:aspmx.googlemail.com -all

While my Sender ID record will be:

spf2.0/pra include:aspmx.googlemail.com -all

[Update July 14, 2012: It seems Sender ID is rarely used, mainly by Microsoft. The record listed here will be redirected but work, despite being technically wrong. See this.]

All that’s left is to add the records for the domain. The method varies by registrar. The SPF and Sender ID records get added as TXT records. Most of the domains I have in GAFYD use Slicehost DNS, and they already have a good write-up on how to set up SPF records at Slicehost. I’ve added the procedures for some other registrars that I have access to.

After the SPF and Sender ID records have been added and allowed time to propagate you can use one of the testing tools to validate the records. I used the tester supplied by Port25 and sent an email to check-auth [at] verifier.port25.com. A response is returned with the results of the tests.
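A quick sanity check from the command line is to query the TXT records directly and make sure the published text matches what you intended (the domain is a placeholder for your own):

dig +short TXT example.com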

These procedures assume GAFYD is already configured to send and deliver mail for you. Google provides good documentation on how to do this and I wrote up how I set up Google Apps for My Domain back in August of 2007.

Adding SPF and Sender ID at GoDaddy

  1. Fire up Domain Manager and go to “Total DNS Control” for your domain.
  2. Click the “Add New SPF Record” button under the TXT section.
  3. Select “an ISP or other mail provider” and click OK
  4. Click the Outsourced tab
  5. Type aspmx.googlemail.com into the text box for domains. Click “Exclude all hosts not specified here” for a hard fail (-all). Click OK.
  6. You’ll be asked to confirm the record that was generated. It should look like the SPF record I have above. Click OK to save the record.
  7. Now click the “Add New TXT Record” button to begin adding the Sender ID record.
  8. Type “@” (no quotes) into the TXT Name field.
  9. Type (or paste) the Sender ID record into the “TXT Value” field.
  10. Change the TTL from the default of 1 hour if you want; keeping the value low is helpful while testing. Click “OK” to save the record.
  11. Wait for the change to propagate. In my case I could test after a few minutes, but in some cases it can take a while.

Adding SPF and Sender ID at Bluehost

Bluehost automatically adds SPF records that point to their servers but use the ?all mechanism. From Bluehost help:

We do allow customers to request custom TXT entries in order to help fight against spam.

So it appears you’ll have to open a support ticket and have them add the records. (I did not do this so I can’t confirm they’ll do it or if it works properly.)

Adding SPF and Sender ID at NameCheap and NameCheap FreeDNS

I believe these procedures should work but don’t have an email account that I can test with. FreeDNS is a service provided by NameCheap that allows you to manage DNS for domains registered elsewhere.

  1. Go to “Manage Domains” and either select “Your Domains” or “FreeDNS –> Hosted Domains” depending on which service you use. Then click on the domain name in the list. If the domain is registered at NameCheap you’ll need to select “All Host Records” from the left menu bar. For FreeDNS you’ll already see the All Host Records screen. From this point on the process is the same.
  2. Enter the information as shown below. The record is partially obscured due to its length, but it’s the same SPF and Sender ID records we’ve been using.

[Screenshot: NameCheap host records screen with the SPF TXT record entered]

Once you save the settings you’re done.

Adding SPF and Sender ID at Enom

I believe these procedures should work but don’t have an email account that I can test with.

Enom provides an “Add SRV or SPF Record” button, but I found that using it only allows the addition of one TXT record for the @ host. Both records could be added by simply typing them on the main screen. Use “@” as the host name (no quotes).

[Screenshot: Enom host records screen with the SPF and Sender ID TXT records entered]

You’re done once you click Save.

SPF and Sender ID at 1 & 1

It doesn’t appear SPF or Sender ID can be used for domains registered at 1 & 1. The DNS configuration is very limited and I found the following in their FAQ under “What is an SPF record?”

There is currently no implementation of these policies planned for 1&1 domains.

If you need SPF on a domain registered at 1 & 1 it appears you’ll either need to transfer it or use a third party DNS service.

SPF and Sender ID at Moniker

I believe these procedures should work but don’t have an email account that I can test with.

  1. Log on and go to “My Domains”. Check the box next to the domain you want to manage and click the “IP” tab.
  2. Click on the domain name.
  3. Under “Add Zone Records” select TXT as the record type, enter @ as the host name, put in the SPF or Sender ID record for the address, then click Add. Do this for both the SPF and Sender ID records.

Most hosts should use a process similar to one of the above.

I’d been holding off implementing SPF because I thought it would be a pain and cause problems. While looking into it I saw that Sender ID was easily implemented at the same time. In fact, because Sender ID will fall back to the spf1 record if no spf2 record exists, it’s recommended that Sender ID be implemented at the same time (even if it’s only a record to say it’s not set up), because the spf1 record can cause problems with Sender ID. I previously linked to a detailed description of the differences which includes an explanation of why this is the case.

It’s also recommended that SPF records be added to domains that don’t send email. These records should indicate that the domain doesn’t send email in order to avoid it being spoofed by spammers.
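For a domain that never sends mail the records are short; they look something like this, simply failing everything:

v=spf1 -all
spf2.0/pra -all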

SPF and Sender ID are complicated items but are easy to implement for someone like me who just uses GAFYD with desktop (or web) email clients.

MySQL / WordPress Database Backup

While my server image gets backed up daily by Slicehost, and my web files get backed up daily by me, there’s a potential gap in my WordPress database backup strategy. I back the database up whenever I think of it, but since it’s a manual process it’s done less and less frequently. If my database gets corrupted my only option is to restore the last server image unless I happened to do a WordPress backup recently. While pretty quick, that’s also a bit drastic.

So it’s time to remedy the situation. The WordPress codex has instructions for various ways to back up a WordPress database; I decided to go with a variation of “Using Straight MySQL Commands”. It’s the easiest and most reliable way for me to do the backup.

I’ll be setting up the backup so it runs daily via a cron job, then as part of my regularly scheduled file backup it gets copied down to my Mac so it’s off the server. Once it’s on my Mac the versions get managed by my backup software in the event I need to go back a day or two.

I’m running MySQL 5 on Ubuntu 8.10 Server; here’s how I set it up:

Enable Cron

I haven’t been running cron jobs for my user and I wanted the SQL backup to run under my user ID. So the first thing I had to do was enable user-level cron for my user ID. I do this by creating a cron.allow file and adding my ID.

sudo nano /etc/cron.allow

I type my ID on the first line and save the file.

Setup the MySQL Logon

Rather than specify the user name and password on the command line I’ll set up a configuration file to handle all the MySQL logons. While the user name and password will be in clear text I’ll secure the file so only my ID can access it.

sudo nano /home/demo/.my.cnf

In these examples “demo” is my home directory; substitute your own. Note the leading ‘.’ in the file name. I add the following lines to the file. Substitute your ID/password as appropriate. Note that this is the MySQL ID and password, not your Ubuntu ID.

[client]

user=root

password=idpassword

Save the file.
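Since the file holds the MySQL password in clear text I also lock it down so only my user can read it (the path matches the example above):

chmod 600 /home/demo/.my.cnf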

Create the Bash Script and Supporting Directories

I’ll create a directory to hold all my backups and then a bash script to run the backup.

mkdir /home/demo/backup

Then I’ll lock down the security on the directory so only I can access or read it.

chmod 750 /home/demo/backup

Now I’ll create the bash script file.

Open the editor and create the file: nano /home/demo/mysqlbackup.sh

Type the following lines into the file (the second line wraps in this display but is typed all on one line)
#!/bin/sh

mysqldump --add-drop-table -h localhost --all-databases | bzip2 -c >/home/demo/backup/mysqlbackup.sql.bz2

chmod 750 /home/demo/backup/mysqlbackup.sql.bz2

The first line just says the file is a shell script.

The second line does all the real work. It runs the application mysqldump. The --add-drop-table parameter was recommended by WordPress. This adds a DROP TABLE statement before each table in the backup. During the import (restore) it will drop the table if it already exists so that you don’t have to delete it yourself.

I’m running everything on the server so the host parameter is localhost, and --all-databases will dump all databases to one file, as the name implies.

Everything is piped to bzip2 so that the output file is compressed. The output file is specified after the >.

It’s worth noting that compressing the file is CPU intensive and the server will take a brief 100% hit during the backup. My backup takes a couple seconds and results in a 4MB file. It’s 27MB uncompressed and the CPU rarely tops 30% when doing an uncompressed backup. But I copy the file to my PC so the compression is well worth the one or two second hit.

The chmod line sets security on the backup file so that only I can read it. The file contains things like passwords and I’m paranoid, so I take the extra step of setting the permissions explicitly. The default profile for my ID is to create files that are readable by the world, and even if the directory is blocked to others it’s no real effort to add this line.
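The script itself also needs to be executable before it can be run directly (and I keep it private for the same reason):

chmod 700 /home/demo/mysqlbackup.sh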

Test the script by running: /home/demo/mysqlbackup.sh

The backup should run and create the file in the backup directory. Once it does, all that’s left is to schedule the cron job.
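Restoring is essentially the reverse; something like this would feed the compressed dump back into MySQL using the same .my.cnf credentials (a sketch, so test it before you rely on it):

bunzip2 -c /home/demo/backup/mysqlbackup.sql.bz2 | mysql -h localhost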

Schedule the Cron Job

Run: crontab -e

This will open the crontab file in your default editor (nano in my case). You can find more info on creating the cron file in the Ubuntu documentation. In my case I added the following line to the crontab file:

0 5 * * * /home/demo/mysqlbackup.sh

This will run the backup at 5am every morning. (My server is set to UTC). When you save the file it will be checked for errors and if none are found you’re ready to go.

I’d previously written about how I schedule website backups with Transmit. While that article is almost two years old I do the same process today. After being copied to my Mac the file is then backed up by my regular backup software which handles keeping a few recent versions of the file around.

Now I can sleep better tonight knowing I have a backup of my MySQL databases.

Favorite WordPress Plugins: WP Super Cache

There are a few WordPress plugins that I just can’t live without. I like (and need) them so much that I’ve contributed to the plugin authors to encourage them to keep developing them. There are only a few of these, and the first I’ll write about is WP Super Cache. As the name implies it’s a caching plugin for WordPress. The plugin setup may be a little more complicated than other plugins you’ve installed, depending on the security settings of your webhost. The included readme, along with the website, does an excellent job of explaining the installation process so I won’t repeat it here. I will mention that the administration page for WP Super Cache will report any problems it finds and offer suggestions for resolution. For example, if it can’t write to the .htaccess file it will tell you so.

I still have an account with Bluehost, so I went in and was able to activate the plugin entirely through the WordPress admin panel; I didn’t need to change any file permissions or create directories myself. The .htaccess file could also be modified by the plugin. (Of course, there are those of us who’d prefer the web server didn’t have the access necessary to do such things, but that’s another story.) So it was no more difficult than a regular plugin.

WP Super Cache is based on the WP-Cache plugin and has a “half on” mode where it duplicates the functionality of the legacy WP-Cache plugin. This may avoid conflicts with some plugins or other blog features.

One potential caveat is that if your WordPress site has a lot of dynamic data it won’t be updated very often, since visitors will be served the cached pages. I use the plugin on this site. While I’ve used it with compression enabled and haven’t had any problems, I’ve since switched to using mod_deflate and have disabled compression in the plugin. A tool that tests compression reported compression in excess of 70% for pages on this site when I had it enabled.

There was also a noticeable improvement in performance when I browsed to cached pages. This site only gets about 325 page views a day so the server isn’t under a lot of stress, but there was a noticeable drop in memory and CPU usage once I implemented caching. CPU usage seemed to increase a bit again when I turned on compression through the plugin, but CPU usage has never been a problem with this site.

Cached pages aren’t served to logged-in users or those who have left comments, so if your site is mostly registered users or commenters then caching may not be for you, but it’s a big help for the rest of us.

The OS Quest Trail Log #39: Long Lingering Mistake Edition

Today wasn’t a good day for site uptime stats. After about two months of continuous server uptime there were a couple of planned outages and one unplanned one. I decided to upgrade from Ubuntu 8.04 to Ubuntu 8.10. Yeah, I know 9.04 was just released, but I don’t want to be that bleeding edge on my server, and 8.10 had some features that would make my life a bit easier. Besides, the path to 9.04 goes through 8.10 anyway.

The upgrade itself was relatively painless and the downtime was limited to about 10 minutes, plus a second quick reboot later on. The real problem came later and was actually unrelated to the upgrade, although I spent a lot of time thinking it was, which extended the outage.

Ubuntu 8.04 to 8.10 Upgrade

The Ubuntu upgrade was simple. I’m running Ubuntu 8.04 Server, which is a Long Term Support (LTS) edition. So the steps were:

  1. Make sure update-manager-core is installed: sudo aptitude install update-manager-core
  2. Edit the update manager configuration to allow upgrading beyond the LTS edition (LTS editions default to only offering upgrades to the next LTS, since the point is a long-term install with minimal changes). So I run sudo nano /etc/update-manager/release-upgrades and change the “Prompt” setting to Prompt=normal (see the sketch after this list).
  3. Then I run the upgrade: sudo do-release-upgrade
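After the edit, the relevant part of /etc/update-manager/release-upgrades looks something like this (the file also contains comments explaining the options):

[DEFAULT]
Prompt=normal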

I follow the instructions as the install progresses. The upgrade completes and I reboot about 40 minutes after I issued the command. Apache and my sites were down for about 10 minutes of that time. I was asked a couple questions during the install and accepted the defaults. Basically I didn’t replace any configuration files with those included in the new installs.

Everything was fine after the reboot.

Because I run on a virtual private server (VPS) the kernel wasn’t upgraded, so that comes next.

Kernel Upgrade

This was the easy part. All I had to do was open a support ticket with Slicehost and ask them to upgrade the kernel. They got back to me within minutes letting me know the kernel version and that there would be a reboot. I confirmed it was OK and the work was done a few minutes later. It was done in less than 20 minutes after I submitted the request and the down time was about a minute for the reboot.

Everything was fine after the reboot.

And Then The Problems…

I’ve been tweaking Apache to see what the performance impact would be. I made a change and restarted Apache, and it didn’t start. Naturally I looked at the just-changed settings and backed them off. It still wouldn’t start. So then I started looking for something that might have changed due to the upgrade, despite everything working after the reboots.

Finally I checked the Apache log and noticed errors that “rotatelogs” couldn’t be found. I use it to manage the logs and it’s worked in the past. And it was still there. Then I noticed the leading “/” was missing from the rotatelogs path, and sure enough, it wasn’t in the configuration file either. Stick it in and all is well.
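For reference, the log directive in question looks something like this (the paths here are just an illustration; rotatelogs may live elsewhere on your system). The mistake was the missing leading slash on the rotatelogs program path:

CustomLog "|/usr/sbin/rotatelogs /var/log/apache2/access.log.%Y%m%d 86400" combined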

I’d rarely restarted Apache in the past; the reboots were the first restarts since the server was built. I’d do reloads, but no restarts. This time I used the “restart” command and got the error. I figure the context was OK on the reboots, so the lack of an explicit path went unnoticed all this time.

A relatively easy fix, but about 20 minutes of down time before I found it. Problems are supposed to be caused by the last thing changed, not long-lingering mistakes just waiting to explode on the scene with the right trigger.

Apache Modules Needed For A WordPress Site

Continuing along my recent WordPress theme, but veering into Apache server territory, I took a look at the Apache modules that are required for my WordPress site. I’m running WordPress 2.7.1 on Apache 2. It’s a pretty basic setup, just WordPress and some plugins, not heavy on file downloads or streaming. I’m hoping to save some memory on my server so I’ll disable the Apache modules that I don’t need. This only applies if you control your own server; if not, you can save some pain and move along now.

Modules Needed by WordPress and Apache

(If Apache failed to start without a module I considered it required, rather than trying to modify my config to remove the dependency.)

  • alias
  • authz_host
  • dir
  • mime
  • php5 (may be different if you’re using php 4)
  • rewrite
  • setenvif

Modules Needed By WordPress Optional Features or Plugins

The WP Super Cache Plugin uses the following modules. If they aren’t available the plugin will still run but will be in “half mode” with limited features.

  • expires
  • headers

Since I have WordPress Administration over SSL along with mod_deflate enabled I also kept the following modules:

  • ssl
  • deflate

If you have other plugins there may be additional modules that you need. There’s the usual disclaimer that I’m sharing what works for me, your mileage may vary.
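On Ubuntu the enabled modules are just symlinks, so reviewing and trimming them only takes a couple of commands; the module name below is just an example of something you might not need, not a recommendation:

ls /etc/apache2/mods-enabled
sudo a2dismod autoindex
sudo /etc/init.d/apache2 restart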

Windows Live Writer WordPress Theme Detection Error

Might as well continue the WordPress theme with a quick post. I use Windows Live Writer for most of my posts. I’ve been setting up a new site, but when I went to add it to Windows Live Writer it gave me an error that it couldn’t detect the theme’s style. I could see the temporary post appear and disappear so I knew it had access. I was also able to post.

Looking in the Windows Live Writer log (available through Help –> About) indicated it was timing out. With my recent changes, such as enabling SSL, I thought I’d introduced an error, so I spent some time researching that and checking plugins. Long story short, this is the first site I’ve set up that’s configured to use a static home page rather than posts, and it seems Windows Live Writer doesn’t like this. I switched the main page to posts and the detection worked fine. I switched back to a static page after that and everything stayed fine.

On the same Windows Live Writer topic:

I use a child theme for most of my sites. I’ve found that Windows Live Writer can only read the style sheets in the child theme directory and not the ones in the parent theme directory. If you use child themes you’ll know it and that statement will make sense. If you’ve no idea whether your theme is a child theme then it probably isn’t. I use Thematic as my theme framework.

No Longer A Mozy User

I’ve been a Mozy fan and user for a couple of years and had a paid subscription, at least until I recently cancelled it. At some point my backups stopped working, and my tech support experience didn’t go well. While working on the issue I was backing things up to Amazon S3 (via Jungle Disk). After all the files were backed up to S3 and the Mozy issue remained unresolved, I cancelled the subscription.

I will admit I didn’t dedicate a great deal of time to the problem, although I did run through all the steps requested by Mozy support. Backups need to be unobtrusive, simple and not time consuming. This had become none of those.

I knew things were bad when the tech support e-mail said the problem was my Mac going to sleep during the backup and that I should just keep restarting the backup. The logs I sent proved this wasn’t the case (my Mac is set to never sleep) and the error occurred as soon as Mozy attempted to send the first file to the server. I’d already done most of the steps in the e-mail (such as uninstalling/re-installing).

The final step was to change the backup set to just one file. I saved this for last since if it worked it would remove all my other files from the server (although the historical copies should remain for 30 days). This was a bit weird: it did delete about 80% of the files, although 10GB or so remained up there, which indicated to me that something was out of sync. Also, the one file selected for backup wasn’t backed up and I received the same error.

For laughs I installed Mozy one last time and used my free account. The backup worked fine.

I want a backup I can trust and I’d lost confidence in Mozy, so the path of least resistance and greatest confidence was to move on. Amazon S3 is more expensive (the break-even point with Mozy is around 30GB, although since S3 charges for more than just space used the exact amount varies).

So far no other backup service has the features I want for the low cost of Mozy, so for now I’ll be sticking with Amazon S3 and spending a bit more. In future posts I’ll write about Jungle Disk and the other backup solutions I looked at. But since I’ve been so pro-Mozy in the past I wanted to go on record that I no longer have a subscription.

WordPress Administration Over SSL

Since this is my third straight WordPress related post it’s probably obvious that I spent some time digging into WordPress this weekend. This feature (WordPress Administration over SSL) has been in WordPress awhile and was available via plugins for some time before that. Administration over SSL encrypts the traffic between the browser and the server so no one can look in on your traffic. In the case of WordPress this means no one can pluck your password off the network. Without SSL your password is in clear text and can be read by someone who’s able to intercept (“sniff”) the traffic.

WordPress can encrypt either just the login or the entire admin session. SSL can be slow and put more strain on the server, so you may not want to use it all the time. Of course, your web server must be set up to enable SSL. SSL does require a certificate on the server, and certificates can cost money. But if all you want to do is use SSL for yourself, a self-signed certificate can be used. Self-signed certificates aren’t suitable for e-commerce or public sites but they’re enough for what I need. The browser will balk at a self-signed certificate, but most modern browsers will allow you to add it to the trusted certificates list and silently connect in the future.

I use a virtual private server (VPS) so I control everything from the OS on up and won’t have any trouble using self-signed certificate. I can’t say what other hosts will allow, you may need to buy a certificate from them and you may need to request SSL be enabled for your domain.
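Generating a self-signed certificate is a one-liner with OpenSSL; the file names and where you store them are up to you, so treat this as a sketch:

openssl req -new -x509 -nodes -days 365 -keyout server.key -out server.crt

Apache then points at the pair with the SSLCertificateKeyFile and SSLCertificateFile directives in the SSL virtual host, with SSLEngine on to turn SSL on.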

Once SSL is enabled and the self-signed (or real) certificate is installed, you can enable WordPress administration over SSL by adding one of the following two lines to your wp-config.php file:

To use SSL on logon only use: define('FORCE_SSL_LOGIN', true);

For SSL on logon and the entire Admin session use: define('FORCE_SSL_ADMIN', true);

Be sure to add it before the require_once(ABSPATH . 'wp-settings.php'); statement. I hastily pasted it at the end of the file and SSL Admin didn’t work for WordPress. Let’s not mention how long it took me to find the problem.
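In context, the tail end of wp-config.php would look something like this (just a sketch; your file will have more above it):

define('FORCE_SSL_ADMIN', true);

require_once(ABSPATH . 'wp-settings.php');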

The URL should switch to https:// when you access /wp-admin and your browser should indicate it has a secure connection (such as a padlock in the status bar).

I have SSL enabled for the full admin session. I didn’t do any official benchmarks, but performance does seem a little slower at times. That could be because I’m expecting it and paying more attention. CPU usage also seemed briefly higher when I was running an SSL session, but again, it’s been a while since I paid attention. Neither the performance nor the CPU usage was unacceptable, and it wouldn’t have raised an alarm or been noticed if I wasn’t watching.

The WordPress codex provides details about SSL Administration.