Site Refresh

Our site hasn't changed very much over the last 4 years, but the business has changed a lot. The biggest change was the (uneventful and long overdue) upgrade to Drupal 6 a few months ago.

During the last week or so the site has been updated and refocused. The major changes include:

This also signals our return to regular blogging. There are a few posts in the pipeline, so expect a good mix of Drupal and sysadmin posts in the coming weeks.

As always, feedback is welcome.

eBook Review: Theming Drupal: A First Timer’s Guide

My experience theming Drupal, like most of my coding skills, has been developed by digging up useful resources online and some trial and error. I have an interest in graphic design, but never really studied it. I can turn out sites which look good, but my "designs" don't have the polish of a professionally designed site. I own quite a few (dead tree) books on development and project management. Generally I like to read when I am sick of sitting in front of a screen. The only ebooks I consider reading are short ones.

Emma Jane Hogbin offered her Drupal theming ebook Theming Drupal: A First Timer’s Guide to her mailing list subscribers for free. I am not a big fan of vendor mailing lists; most of the time I scan the messages and hit delete before the bottom. In Emma's case, rumour has it that it is really worthwhile to subscribe to her list - especially if you are a designer interested in theming Drupal. Emma also offered free copies of her ebook to those who begged, so I subscribed and I begged.

The first thing I noticed about the book was the ducks on the front cover - I'm a sucker for cute animal pics. The ebook is derived from Emma's training courses and the book she co-authored with Konstantin Kaefer, Front End Drupal. Readers are assumed to have some experience with HTML, CSS and PHP. The book is pitched at designers and programmers who want to get into building themes for Drupal.

The reader is walked through building a complete Drupal theme. The writing is detailed and includes loads of references for obtaining additional information. It covers building a page theme, content type specific theming and the various base themes available for Drupal. The book is a very useful resource for anyone working on a Drupal theme.

Although I have themed quite a few Drupal sites, Emma's guide taught me a few things. The book is a good read for anyone who wants to improve their knowledge of Drupal theming. Now to finish reading Front End Drupal ...

Flight Report MEL > SYD > SFO on United

To get to DrupalCon, I flew with United on UA840, then on UA870, on Wednesday. I went with them for two reasons: they were cheap and I would earn miles on Thai. I was a little disappointed that my budget didn't stretch to Air New Zealand, as I was looking forward to flying with them again after an excellent experience in January. In the end, though, I was really impressed with United.

At check-in I used the business counter, one of the benefits of Gold status. The agent was really friendly and answered my questions about security requirements when flying to the US. When I asked about being moved up to Economy Plus using my Thai Gold status the agent checked and gave me an aisle seat. I was looking forward to the extra 5 inches of leg room.

Next was off to the Air NZ lounge for some pancakes for breakfast - I love that machine. The Air NZ staff were friendly as always.

Boarding was delayed by 10 minutes or so, but staff kept people updated. Take off was really delayed, with not much explanation. The snack was OK - pretzels and a juice - but don't expect much more on such a short flight.

In Sydney I had a light lunch in the Air NZ lounge, then off to boarding for San Francisco. There was a queue for economy boarding, while the premium queue was empty. One of the benefits of Star Alliance Gold status is supposed to be priority boarding, but it seems United only offers this to their own elites.

The inflight entertainment on United's 747s is awful; they only offer a shared screen and the radio options are really limited. Good thing I packed some books and my laptop. Dinner was pretty bad: beans, peas, corn and stale mushrooms in a tasteless sauce that was supposed to be a curry, served with rice. The baked beans for breakfast were OK. Through trial and error I have learnt that AVML (Asian/Indian Vegetarian) meals are usually the best vego option, but I will be changing my selection for my return flight.

What really impressed me was the staff. Like Air NZ's flight attendants, they seem like real people - they engage with the passengers and treat them as individuals. My ability to open the economy red wine bottles became a bit of a running joke with one flight attendant. After chatting with staff about Napa Valley reds, my glasses of wine started to come from the front of the plane, which was very nice. Tom, the economy purser, was happy to have a chat with me about United, which inspired me to write this post. When my laptop battery died the flight attendants let me charge it in business class; unfortunately I had checked my AU > US power adaptor, so I couldn't take them up on it. I didn't really sleep on the flight, but I enjoyed just about every minute of it, thanks to the good service. The spare seat next to me also gave me some extra room to spread out, which is always handy in economy.

United's planes might be old and the entertainment just as dated, but the staff make up for that. I hope the people who went out of their way to make my flight as enjoyable and comfortable as possible get the recognition they deserve, even if they did bend the rules a little.

First Impressions Motorola Dext and Drupal Editor for Android

Today I purchased a Motorola Dext (aka Cliq) from Optus. Overall I like it. It feels more polished than the Nokia N97 which I bought last year. The range of apps is good. Even though the phone only ships with Android 1.6, 2.1 for the Dext is due in Q3 2010.

The apps seem to run nice and fast. The responsive touch screen is bright and clear. I am yet to try to make a call on it from home, but the 3G data seems as fast as my Telstra 3G service, so the signal should be ok.

The keyboard is very functional, albeit cramped for my fat thumbs. The home screen is a little cluttered for my liking too, but it won't take much to clean that up. I will miss my Funambol sync, which is only available for Android 2.x.

I started writing this post using the Drupal Editor for Android app, which is pretty nice. The GPL app uses XML-RPC to talk to Drupal core's Blog API module. Overall it feels like a stripped down version of Bilbo/Blogilo. Drupal Editor is an example of an app which does one thing and does it simply but well. The only thing I haven't liked about it happened while originally writing this post: I bumped the save button and published an incomplete and poorly written post. Next time I will untick the publish checkbox until I am ready to really publish it.
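
Under the hood these are plain XML-RPC calls against the site's xmlrpc.php endpoint. As a rough sketch of the kind of request involved (I haven't captured the app's actual traffic, and the site URL, credentials and "blog" content type below are placeholders), a post can be created with curl like this:

curl -s http://example.com/xmlrpc.php -H 'Content-Type: text/xml' --data-binary @- <<'EOF'
<?xml version="1.0"?>
<methodCall>
  <methodName>metaWeblog.newPost</methodName>
  <params>
    <param><value><string>blog</string></value></param>
    <param><value><string>username</string></value></param>
    <param><value><string>password</string></value></param>
    <param><value><struct>
      <member><name>title</name><value><string>Test post</string></value></member>
      <member><name>description</name><value><string>Body text goes here</string></value></member>
    </struct></value></param>
    <param><value><boolean>0</boolean></value></param>
  </params>
</methodCall>
EOF

The final boolean is the publish flag - leaving it at 0 is the command line equivalent of unticking that publish checkbox.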

I would still like an HTC Desire, but Telstra is only offering them on a $65 plan with no value. The Nokia N900 was off my list due to the USB port of death and Nokia's spam policies. The Nexus One was on the list too, but a local warranty was a consideration.

ACMA Investigates Nokia for SMS Spam

The ongoing saga of Nokia's txt spam continues.

The bad news is that I received another txt from Nokia today. This comes after being told by Nokia that I would no longer receive any txts from them. The message reads:

Tip: Use less battery power and help conserve energy with a few helpful tips from Nokia. Vist to learn more.

I have an energy saving tip for Nokia: stop sending txt messages I don't want, then I won't waste energy trying to make them stop.

Now for the good news. A little while ago ACMA told me that they were preparing to launch a formal investigation into Nokia's SMS messages under the Spam Act. It's now official. Late last Friday I received the following email from the ACMA Investigator handling the matter:

Dear Mr Hall

I write with reference to your complaint #XXX concerning allegations of breaches of the Spam Act 2003 (Spam Act). The Australian Communications and Media Authority (ACMA) has now commenced an investigation into Nokia Australia Pty Ltd about potential contraventions of the Spam Act.

During the course of this investigation, the ACMA may require you to provide more information about any dealings you have had with Nokia Australia Pty Ltd and potentially complete a witness statement. The Anti-Spam Team will contact you in due course if this is required. The Anti-Spam Team does not provide your personal information to the business apart from the electronic account information (mobile telephone number) you have already provided.

As I am sure you can appreciate, the ACMA is not able to disclose details of the investigation with you, but will advise you when an outcome has been determined. On behalf of the ACMA, I appreciate your assistance in this investigation and thank you for your cooperation.

Information about the Spam Act is available on our website at Please contact us if you have any queries.

I wonder how much energy Nokia will put into defending itself to ACMA.

Watch this space for more news.

X Mail Headers on identi.ca

A while ago I submitted a patch for StatusNet, the code that powers identi.ca and other microblogging platforms. The patch added X headers to email notification messages, to make them easier to filter. The headers look something like this:

X-StatusNet-MessageType: subscribe
X-StatusNet-TargetUser: skwashd
X-StatusNet-SourceUser: username

The patch was included in the 0.9.1 release of StatusNet, and is now running on identi.ca. I think this is the highest traffic site running any of my code. I am pretty excited about it.

This is one of the reasons why I love free software cloud services. Instead of just being a passive consumer of the service, you can actively contribute to its development.

You can run it too by downloading StatusNet 0.9.1. Enjoy.

Now to set up some mail filters in Zimbra ...
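
Zimbra filters are Sieve rules under the hood, so something along these lines should do the trick (the folder name is just an example):

require ["fileinto"];
# file StatusNet subscription notices into their own folder
if header :is "X-StatusNet-MessageType" "subscribe" {
    fileinto "identica/subscriptions";
}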

Tricks to Running HAProxy on pfSense Embedded

HAProxy is available as an addon module for pfSense 1.2.3. This makes it really easy to have pfSense control the gateway and load balancing. There are a couple of tricks to getting it all up and running.

Although everything looked good in the webgui, HAProxy just wouldn't start. After logging in it seemed that there were two problems: firstly, as mentioned in the forums, the IP addresses must be interface or CARP addresses, not Virtual IPs, for HAProxy to work; and secondly, the file descriptor limits have to be increased. To increase the file descriptor limits, run the following commands from a shell on pfSense:

# remount the CF card read-write (embedded pfSense only)
mount -o rw /dev/ufs/pfsense1  /
# persist the new limits across reboots
echo '' >> /etc/sysctl.conf
echo '# File descriptor limits for HAProxy' >> /etc/sysctl.conf
echo 'kern.maxfiles=2000011' >> /etc/sysctl.conf
echo 'kern.maxfilesperproc=2000011' >> /etc/sysctl.conf
# apply the new limits immediately
sysctl kern.maxfiles=2000011
sysctl kern.maxfilesperproc=2000011
# return the CF card to read-only
mount -o ro /dev/ufs/pfsense1  /

The mount commands are only needed when running on embedded pfSense: they make the CF card writeable while we make the changes, then read only again once we are done. The echo commands add the new limits to /etc/sysctl.conf so the settings persist across reboots, and the sysctl commands apply them immediately.
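
To confirm the new limits are active and will survive a reboot, something like the following should be enough (assuming the values used above):

# current values
sysctl kern.maxfiles kern.maxfilesperproc
# persisted values
grep maxfiles /etc/sysctl.conf

Once both report 2000011, HAProxy should be able to start.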

I haven't tested whether the file descriptor issue affects the non embedded version of pfSense - feel free to let me (and others) know via the comments.

Making it Easier to Spawn php-cgi on Debian and Ubuntu

Apache is a great web server, but sometimes I need something a bit more lightweight. I already have a bunch of sites using lighttpd, but I'm planning on switching them to nginx. Both nginx and lighttpd use FastCGI for running PHP. Getting FastCGI up and running on Ubuntu (or Debian) involves a bit of manual work, which can slow down deployment.

The normal process to get nginx and php-cgi up and running is to install the spawn-fcgi package, create a shell script such as /usr/local/bin/php-fastcgi to launch it, then write a custom init script; after making both of these executable you need to run update-rc.d, and finally you should be right to go. Each of these manual steps increases the likelihood of mistakes being made.
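
For the record, the manual approach looks something like this - the wrapper script contents and paths are indicative only, not a drop-in recipe:

sudo apt-get install php5-cgi spawn-fcgi
# wrapper script that the init script would call
sudo tee /usr/local/bin/php-fastcgi <<'EOF'
#!/bin/sh
exec /usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -P /var/run/php-fastcgi.pid \
  -u www-data -g www-data -- /usr/bin/php5-cgi
EOF
sudo chmod +x /usr/local/bin/php-fastcgi
# plus a hand written /etc/init.d/php-fastcgi script, then:
sudo update-rc.d php-fastcgi defaults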

Instead, I created a deb that contains a configurable init script. It is pretty simple: the init script calls spawn-fcgi with the appropriate arguments. All of the configuration is handled in /etc/default/php-fastcgi (there is a sample file after the list below). The main config options are:

  • ENABLED - Enable (or disable) the init script. default: 0 (disabled)
  • ADDRESS - The IP address to bind to. default:
  • PORT - The TCP Port to bind to. default: 9000
  • USER - The user the php scripts will be executed as. default: www-data
  • GROUP - The group the php scripts will be executed as. default: www-data
  • PHP_CGI - The full path to the php-cgi binary. default: /usr/bin/php5-cgi
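
A minimal /etc/default/php-fastcgi might look something like this (the address and port shown are examples, not necessarily the packaged defaults):

# /etc/default/php-fastcgi - example values only
ENABLED=1
ADDRESS=127.0.0.1
PORT=9000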

The last 3 variables are not in the defaults file as I didn't think many users would want or need to change them; feel free to add them if you need to.

Once you have set ENABLED to 1, launch the init script by executing sudo /etc/init.d/php-fastcgi start. To check that it is running, run sudo netstat -nplt | grep 9000 and you should see /usr/bin/php5-cgi listed. Now you can continue to configure your webserver.

The package depends on php5-cgi and spawn-fcgi, which is available in Debian testing/squeeze and unstable/sid, along with Ubuntu karmic and lucid. For earlier versions of Ubuntu you can change the dependency in debian/control from spawn-fcgi to lighttpd, then disable lighttpd once it is installed, just to get spawn-fcgi. I haven't tested this approach and wouldn't recommend it.

You can grab the binary package and install it using dpkg, or build it yourself from the source tarball.

For more information on setting up nginx with php-cgi I recommend the Linode howto - just skip the "Configure spawn-fcgi" step :)

Solr Replication, Load Balancing, haproxy and Drupal

I use Apache Solr for search on several projects, including a few using Drupal. Solr has built in support for replication and load balancing; unfortunately the load balancing is done on the client side and works best with a persistent connection, which doesn't make a lot of sense for PHP based webapps. In the case of Drupal, there has been a long discussion on a patch in the issue queue to enable Solr's native load balancing, but things seem to have stalled.

In one instance I have Solr replicating from the master to a slave, with the plan to add additional slaves if the load justifies it. In order to get Drupal to write to the master and read from either node I needed a proxy or load balancer. In my case the best lightweight HTTP load balancer that would easily run on the web heads was haproxy. I could have run Varnish in front of Solr and had it do the load balancing, but that seemed like overkill at this stage.

Now when an update request hits haproxy it is directed to the master, while reads are balanced between the two nodes. To get this setup running on Ubuntu 9.10 with haproxy 1.3.18, I used the following /etc/haproxy/haproxy.cfg on each of the web heads (the server and syslog addresses shown are placeholders - point them at your own hosts):

global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    nbproc 4
    user haproxy
    group haproxy

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    maxconn 2000
    balance roundrobin
    stats enable
    stats uri /haproxy?stats

frontend solr_lb
    bind localhost:8080
    acl master_methods method POST DELETE PUT
    use_backend master_backend if master_methods
    default_backend read_backends

# writes go to the master only
backend master_backend
    server solr-a 192.168.1.10:8983 weight 1 maxconn 512 check  # master (example address)

# the slave on its own, handy if you later split read traffic further
backend slave_backend
    server solr-b 192.168.1.11:8983 weight 1 maxconn 512 check  # slave (example address)

# reads are balanced across both nodes
backend read_backends
    server solr-a 192.168.1.10:8983 weight 1 maxconn 512 check
    server solr-b 192.168.1.11:8983 weight 1 maxconn 512 check

To ensure the configuration is working properly, run

wget http://localhost:8080/solr -O -

on each of the web heads. If you get a connection refused message, haproxy may not be running. If you get a 503 error, make sure Solr (Jetty or Tomcat) is running on the Solr nodes. If you get some HTML output which mentions Solr, then it should be working properly.

For Drupal's apachesolr module to use this configuration, simply set the hostname to localhost and the port to 8080 in the module configuration page. Rebuild your search index and you should be right to go.

If you had a lot of index updates you could consider making the master write only and having two read only slaves - just change the server addresses in read_backends to point at the right hosts.
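
For example, with two read only slaves the read_backends section might end up looking something like this (again, the addresses are placeholders):

backend read_backends
    server solr-b 192.168.1.11:8983 weight 1 maxconn 512 check  # slave 1 (example address)
    server solr-c 192.168.1.12:8983 weight 1 maxconn 512 check  # slave 2 (example address)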

For more information on Solr replication refer to the Solr wiki; for more information on configuring haproxy refer to the manual. Thanks to Joe William and his blog post on load balancing CouchDB using haproxy, which helped me get the configuration I needed once I decided what I wanted.

Check Drupal Module Status Using Bash

When you run a lot of Drupal sites it can be annoying to keep track of all of the modules contained in a platform and ensure all of them are up to date. One option is to set up a dummy site with all the modules installed and email notifications enabled. This is OK, but then you need to make sure you enable the additional modules every time you add something to your platform.

I wanted to be able to check the status of all of the modules in a given platform from the command line. I started scratching the itch by writing a simple shell script that uses the Drupal updates server to check the status of all the modules. I kept on polishing it until I was happy with it; there are some bits which are a little ugly, but that is mostly due to the limitations of bash. If I had to rewrite it I would do it in PHP or some other language which understands arrays/lists and has HTTP client and XML libraries.

The script supports excluding modules using an extended grep regular expression pattern and nominating a major version of Drupal. When there is a version mismatch it will be shown in bold red, while modules where the versions match will be shown in green. The script filters out all dev and alpha releases - after all, it is designed for checking production sites. Adding support for per module update servers should be pretty easy to do, but I don't have any modules to test this with.
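
The core of the approach is nothing fancy: ask the Drupal release history feed about each module and compare the answer with what is installed. A stripped down sketch of the idea (the module name and core version are placeholders, and unlike the real script this doesn't filter out dev or alpha releases):

# fetch the release history for a module and print the most recently listed version
wget -q -O - "http://updates.drupal.org/release-history/views/6.x" \
  | grep '<version>' | head -n 1 \
  | sed -e 's/<[^>]*>//g' -e 's/^[[:space:]]*//'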

To use the script, download it and save it somewhere handy, such as ~/bin/, then make it executable with chmod +x. Now it is ready to run - point it at your Drupal platform (for example /path/to/drupal) and wait for the output.