
Fixing Zimbra's Broken debs

As much as I love Zimbra, I find their Debian packaging frustrating. Why do they insist on shipping half-broken debs? I can excuse VMware for being too lazy to provide proper descriptions for their packages, although the generic "Best email money can buy" text seems a little lame. Failing to populate the "Provides" field is brain dead. It makes it impossible to install mailx on a server running Zimbra without pulling in another MTA.

I've created a simple workaround deb which provides mail-transport-agent and depends on zimbra-mta. The deb also symlinks the zimbra sendmail binary to /usr/sbin/sendmail - where it belongs. Now mailx and other tools which depend on mail-transport-agent can be installed. The package should work with both Debian and Ubuntu.
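For the curious, the guts of the package is a control file along these lines (a sketch from memory, not the exact contents of the deb):

```
Package: zimbra-mta-workaround
Version: 1.0
Architecture: all
Depends: zimbra-mta
Provides: mail-transport-agent
Description: Make zimbra-mta satisfy mail-transport-agent dependencies
```

The postinst script then creates the symlink, something like ln -s /opt/zimbra/postfix/sbin/sendmail /usr/sbin/sendmail - check the path on your install, as it varies between Zimbra versions.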

The source is available on GitHub, or you can download a prebuilt, platform independent deb from GitHub's download manager. The package is released under the terms of the WTFPLv2.

I hope that Zimbra builds better debs and makes this package obsolete.

$100 Drupal Site Series: Part 4 - Platforms

So far in this series we have covered a potential target market and business plan, resources and infrastructure, and the tools required to deliver Drupal sites with a sale price of $100 per site. In this post I'll cover some of the considerations when building Drupal platforms or distributions.

The sites which customers deploy will need to be based on a custom Drupal distribution or "distro". The distro should be modular and primarily driven by Features.

Customers shouldn't have to know anything about administering Drupal when they first buy their site. A customer should be able to turn functionality on and off as they want, through a simple user interface.

Features

The platform should contain a good collection of Features. The following list is an example of what you might offer customers:

  • Contact Form
  • Image Gallery
  • Products
  • Services
  • "Static" Pages
  • Blog
  • News
  • Mailing Lists
  • Social Network Integration
  • Office / Store Locations
  • Staff Profiles

When developing your list of things to include in the site, think in terms of functionality a small business would want, not what modules you should be using. The list of modules should be derived from the functionality, not the other way around.

As the features included in the platform will be modular and generically useful, you should consider releasing them publicly, via your own features server or on drupal.org as full modules.

On top of the features listed above you will probably need to include some custom glue code to enhance the user experience. In my first post in this series I discussed the target audience not having high level computer skills, so the user interface should take this into account. Some of the language might need to be changed, form options modified to use sane defaults, and some options might even be hidden from the user.

Security

As each server may have hundreds or even thousands of sites running on it, security will be an important consideration. As with all servers, you should ensure it is properly locked down and only running the services you need. Apache should be configured to block access to everything except index.php, the relevant client side files (images, CSS, JS) and the files directory. At the Drupal level you should make sure that things like the PHP module aren't enabled and that secure coding practices are adhered to. The account given to your customer shouldn't be user 1; it should be user 2, with restricted permissions that only give them access to what they need.
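As a rough sketch of the Apache side (Apache 2.2 syntax; the paths and file extensions are examples, adjust them to match your platform):

```
# Deny everything by default, then allow only the entry point
# and client side assets.
<Directory /var/www/platform>
    Order deny,allow
    Deny from all
    <FilesMatch "^index\.php$|\.(css|js|png|jpe?g|gif|ico)$">
        Allow from all
    </FilesMatch>
</Directory>

# The files directory holds uploaded content, never executable code.
<Directory /var/www/platform/sites/default/files>
    <FilesMatch "\.php$">
        Deny from all
    </FilesMatch>
</Directory>
```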

I strongly recommend that you read Cracking Drupal by Greg Knaddison.

Sales and Support

In order to attract customers you will need a site to promote the service and allow customers to sign up and hand over their credit card details. Drupal now offers three ecommerce projects (Drupal e-Commerce, Drupal Commerce and Ubercart), so you should investigate which of these best suits your needs. The sales system will need some custom code to hook into Aegir, which will be managing the actual site deployments. The sales and support platform/s should be managed in a similar manner to the customer sites.

Once you have paying customers, you will also need to provide them with some resources, such as detailed documentation, video walkthroughs, forums and possibly a ticketing system. These can either be part of the sales site or live on a separate site. In the next instalment I'll cover support in more detail.

Deploying Platforms

We need to keep the whole process very automated; CPU cycles are a lot cheaper than workers. Building and deploying platforms should involve no more than a few clicks or tweaking a configuration file. For example, platforms could be built as Debian (or Ubuntu) packages (aka debs) using an automated build process which runs a test suite against the code base before building the deb. The debs could then be deployed using puppet, with a post installation script notifying Aegir that the platform has been installed successfully. The whole process should involve very little human interaction. Migrating client sites to upgraded platforms could also be automated, using a simple script which adds a migrate task for each site.
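The puppet side of the deployment could be as simple as something like this (a sketch only - the package name and notification script are made up for the example):

```
# Keep the platform package current from the private apt repo.
package { 'acme-drupal-platform':
  ensure => latest,
}

# Tell Aegir about the new platform whenever the package changes.
exec { 'notify-aegir':
  command     => '/usr/local/bin/notify-aegir',
  refreshonly => true,
  subscribe   => Package['acme-drupal-platform'],
}
```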

What's Next?

Now that we have the service almost ready to go, we should look into how we are going to get customers to part with their cash and how we will support them once they have paid.

Making it Easier to Spawn php-cgi on Debian and Ubuntu

Apache is a great web server, but sometimes I need something a bit more lightweight. I already have a bunch of sites using lighttpd, but I'm planning on switching them to nginx. Both nginx and lighttpd use FastCGI for running php. Getting FastCGI up and running on Ubuntu (or Debian) involves a bit of manual work which can slow down deployment.

The normal process to get nginx and php-cgi up and running is to install the spawn-fcgi package, create a shell script such as /usr/local/bin/php-fastcgi to launch it, then write a custom init script. After making both of these executable you need to run update-rc.d, and finally you should be right to go. Each of these manual steps increases the likelihood of mistakes being made.

Instead, I created a deb which contains a configurable init script. It is pretty simple: the init script calls spawn-fcgi with the appropriate arguments. All of the configuration is handled in /etc/default/php-fastcgi. The main config options are:

  • ENABLED - Enable (or disable) the init script. default: 0 (disabled)
  • ADDRESS - The IP address to bind to. default: 127.0.0.1
  • PORT - The TCP Port to bind to. default: 9000
  • USER - The user the php scripts will be executed as. default: www-data
  • GROUP - The group the php scripts will be executed as. default: www-data
  • PHP_CGI - The full path to the php-cgi binary. default: /usr/bin/php5-cgi

The last 3 variables are not in the defaults file as I didn't think many users would want or need to change them; feel free to add them in if you do.
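A typical /etc/default/php-fastcgi would end up looking something like this:

```
# Set to 1 to allow the init script to start php-cgi
ENABLED=1

# The address and port php-cgi will listen on
ADDRESS=127.0.0.1
PORT=9000

# USER, GROUP and PHP_CGI can be added here too if you
# need to override the built in defaults.
```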

Once you have set ENABLED to 1, launch the init script by executing sudo /etc/init.d/php-fastcgi start. To check that it is running, run sudo netstat -nplt | grep 9000 and you should see /usr/bin/php5-cgi listed. Now you can continue configuring your webserver.

The package depends on php5-cgi and spawn-fcgi, which is available in Debian testing/squeeze and unstable/sid, along with Ubuntu karmic and lucid. For earlier versions of Ubuntu you can change the dependency in debian/control from spawn-fcgi to lighttpd, then disable lighttpd once it is installed, leaving just spawn-fcgi. I haven't tested this approach and wouldn't recommend it.

You can grab the binary package (http://davehall.com.au/sites/davehall.com.au/files/php-fastcgi_0.1-1_all.deb) and install it using dpkg, or build it yourself from the source tarball.

For more information on setting up nginx using php-cgi I recommend the Linode howto - just skip the "Configure spawn-fcgi" step :)

Packaging Doctrine for Debian and Ubuntu

I have been indoctrinated into the "everything on production machines should be packaged" school of thought. Rather than bang on about that, I intend to keep this post relatively short and simply announce that I have created Debian (and Ubuntu) packages for Doctrine, the ORM for PHP.

The packaging is rather basic. It installs just like any other Debianised PEAR package: the files go in /usr/share/php, the package.xml and any documentation go into /usr/share/doc/<package>, and the tests are stored as examples in /usr/share/doc/<package>/examples. The generated package will be called php-doctrine_1.2.1-1_all.deb (or similar), to comply with the Debian convention of naming all PEAR packages php-<pear-package-name>_<version>_<architecture>.deb. I have only packaged 1.2.1, but the files can easily be adapted for other versions; some of the packaging is designed to be version agnostic anyway.
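The naming convention is mechanical enough to capture in a few lines of shell. This is purely an illustration - pear_deb_name is a made up helper, and the Console_Table version shown is just an example:

```shell
# Compose a Debian package file name for a PEAR package, following
# the php-<pear-package-name>_<version>_<architecture>.deb convention.
# Debian package names are lowercase, with underscores mapped to hyphens.
pear_deb_name() {
    local pear_name=$1 version=$2 arch=${3:-all}
    local deb_name
    deb_name="php-$(echo "$pear_name" | tr 'A-Z_' 'a-z-')"
    echo "${deb_name}_${version}_${arch}.deb"
}

pear_deb_name Doctrine 1.2.1-1        # php-doctrine_1.2.1-1_all.deb
pear_deb_name Console_Table 1.1.3-1   # php-console-table_1.1.3-1_all.deb
```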

To create your own Doctrine deb, follow these steps:

  • Create a directory, such as ~/packaging/php-doctrine-1.2.1
  • Change into the new directory
  • Download my debian/ tarball and extract it in your php-doctrine-1.2.1 directory
  • Download the PEAR package tarball from the project website and extract it in your php-doctrine-1.2.1 directory
  • If you don't already have a standard Debian build environment set up, create one by running sudo apt-get install build-essential
  • To build the package run dpkg-buildpackage -k<your-gpg-key-id> -rfakeroot. If you don't have a gpg key, drop "-k<your-gpg-key-id>" from the command

Now you should have a shiny new Doctrine deb. I think the best way to deploy it is using apt and a private package repository.

Update: @micahg on identi.ca pointed me to a Doctrine ITP for Debian. Hopefully Federico's work will mean I no longer need to maintain my own packaging of Doctrine.

Howto Setup a Private Package Repository with reprepro and nginx

As the number of servers I am responsible for grows, I have been trying to eliminate all non packaged software in production. Although Ubuntu and Debian have massive software repositories, there are some things which just aren't available yet, or which are internal meta packages. Once packages are built they need to be deployed to servers. The simplest way to do this is to run a private apt repository. There are a few options for building an apt repository, but the most popular and simplest seems to be reprepro. I used Sander Marechal's and Lionel Porcheron's reprepro howtos as a basis for getting my repository up and running.

nginx is a lightweight http server (and reverse proxy). It performs very well serving static files, which is perfect for a package repository. I also wanted to minimise the memory footprint of the server, which made nginx appealing.

To install the packages we need, run the following command:

$ sudo apt-get install reprepro nginx 

Then it is time to configure reprepro. First we create our directory structure:

$ sudo mkdir -p /srv/reprepro/ubuntu/{conf,dists,incoming,indices,logs,pool,project,tmp}
$ cd /srv/reprepro/ubuntu/
$ sudo chown -R `whoami` . # changes the repository owner to the current user

Now we need to create some configuration files.

/srv/reprepro/ubuntu/conf/distributions

Origin: Your Name
Label: Your repository name
Codename: karmic
Architectures: i386 amd64 source
Components: main
Description: Description of repository you are creating
SignWith: YOUR-KEY-ID

/srv/reprepro/ubuntu/conf/options

ask-passphrase
basedir .

If you have a package ready to load, add it using the following command:

$ cd /srv/reprepro/ubuntu
$ reprepro includedeb karmic /path/to/my-package_0.1-1.deb
# change /path/to/my-package_0.1-1.deb to the path to your package

Once reprepro is set up and you have some packages loaded, you need to serve the files over http. I run an internal dns zone called "internal", so the package server will be configured to respond to packages.internal. You may need to change the server_name value to match your own environment. Create a file called /etc/nginx/sites-available/vhost-packages.conf with the following content:

server {
  listen 80;
  server_name packages.internal;

  access_log /var/log/nginx/packages-access.log;
  error_log /var/log/nginx/packages-error.log;

  location / {
    root /srv/reprepro;
    index index.html;
  }

  location ~ /(.*)/conf {
    deny all;
  }

  location ~ /(.*)/db {
    deny all;
  }
}

Next we need to increase the server_names_hash_bucket_size. Create a file called /etc/nginx/conf.d/server_names_hash_bucket_size.conf which should just contain the following line:

server_names_hash_bucket_size 64;

Note: Many sites advocate sticking this value in the http section of the /etc/nginx/nginx.conf config file, but in Debian and Ubuntu /etc/nginx/conf.d/*.conf is included in the http section anyway. I think my method is cleaner for upgrading and clearly delineates the stock and custom configuration.

To enable and activate the new virtual host run the following commands:

$ cd /etc/nginx/sites-enabled
$ sudo ln -s ../sites-available/vhost-packages.conf .
$ sudo service nginx reload

You should get some output which looks like this:

Reloading nginx configuration: the configuration file /etc/nginx/nginx.conf syntax is ok
configuration file /etc/nginx/nginx.conf test is successful
nginx.

Now you can add the new repository to your machines. I recommend creating a file called /etc/apt/sources.list.d/packages.internal.list and putting the following line in it (adjust the hostname and codename to suit your environment):

deb http://packages.internal/ubuntu karmic main

To make the machine aware of the new repository and associated packages, simply run:

$ sudo apt-get update

That's it. Now you have a lightweight package repository with a lightweight webserver - perfect for running in a virtual machine. Depending on your setup, you could probably get away with 256MB of RAM and a few gig of disk.

Packaging Drush and Dependencies for Debian

Lately I have been trying to avoid non packaged software being installed on production servers. The main reason for this is to make it easier to apply updates. It also makes it easier to deploy new servers with meta packages when everything is pre packaged.

One tool which I am using a lot on production servers is Drupal's command line tool - drush. Drush is awesome; it makes managing Drupal sites so much easier, especially when it comes to applying updates. Drush is packaged for Debian testing, unstable and lenny backports by Antoine Beaupré (aka anarcat), and will be available in universe for Ubuntu lucid. Drush depends on PEAR's Console_Table module and includes some code which automagically installs the dependency from PEAR CVS. The Debianised package includes the PEAR class in the package, which is handy, but if you are building your own debs from CVS or the nightly tarballs, the dependency isn't included. The auto installer only works if it can write to /path/to/drush/includes, which in these cases means calling drush as root; otherwise it spews a few errors about not being able to write the file, then dies.

A more packaging friendly approach would be to build a Debian package for PEAR's Console_Table and have that as a dependency of the drush package in Debian. The problem with this approach is that drush currently only looks in /path/to/drush/includes for the PEAR class. I have submitted a patch which first checks if Console_Table has been installed via the PEAR installer (or another package management tool). Combine this with the Debian source package I have created for Console_Table (see the file attached at the bottom of the post), and you can have a modular, apt managed instance of drush without having to duplicate code.

I have discussed this approach with anarcat; he is supportive, and hopefully it will be the approach adopted for drush 3.0.

Update: The drush patch has been committed and should be included in 3.0alpha2.

Building Debian (and Ubuntu) Meta Packages

Over the last few weeks I have been building a bunch of Debian packages (aka debs) for a new Ubuntu server roll out. Most of the packages are either updates to existing packages or meta packages. Building meta packages is pretty easy, once you know how.

I will outline how to build a simple package which pulls in a couple of useful packages.

First off we need to create the directory structures and files. I do all of my packaging work in /home/$USER/packaging, with my meta packages living in a directory called meta.

For your first package, run the following command:

$ mkdir -p /home/$USER/packaging/meta/my-meta/DEBIAN

The key to creating meta packages is the "control" file. I have a very generic package I use for all of my servers, called dhc-base. This pulls in what I consider to be the minimum dependencies needed for a basic server. My ~/packaging/meta/dhc-base/DEBIAN/control file looks something like this:

Package: dhc-base
Version: 0.1
Section: main
Priority: standard
Architecture: all
Depends: dhc-archive-keyring, fail2ban, iptables, openssh-server, screen, shorewall, ubuntu-minimal
Maintainer: Dave Hall <EMAIL-ADDRESS>
Description: Base install meta package for boxes administered by Dave Hall Consulting

The fields should all be pretty self explanatory. The key one is "Depends", which lists all of the packages you want your package to pull in. I try to keep the list alphabetical so it is easier to manage.
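The Depends field also supports alternatives and version constraints, which can be handy in meta packages. For example (the package names here are just illustrations):

```
Depends: exim4 | postfix, iptables (>= 1.4), openssh-server
```

The pipe means "any one of these will do", and the parenthesised relation pins a minimum version.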

In my example I pull in some basic things which I use all the time, as well as the gpg signing key for my packages, which I have also packaged - I may blog about how to do that one day too.

Now we are ready to build the package. Simply run:

$ dpkg-deb -b /home/$USER/packaging/meta/my-meta

and wait for dpkg-deb to work its magic. Now you should have a shiny new deb called my-meta.deb sitting in /home/$USER/packaging/meta.

If you have a bunch of meta packages to build, it can become tedious to run the command over and over again, and each time the packages will overwrite the previous version. To save myself some effort I wrote a little shell script which builds a package and gives it a nice version number too.

#!/bin/bash
#
# build-meta - dpkg-deb wrapper script for building meta packages
#
# Developed by Dave Hall Consulting
#
# Copyright (c) 2009 Dave Hall Consulting - http://davehall.com.au
#
# You may freely use and distribute this script as long as the copyright
# notice is preserved

function usage {
    SCRIPT=`basename $0`
    echo Usage: $SCRIPT package-path output-path
}

if [ $# != 2 ]; then
    usage $0
    exit 1
fi

DIR=$1
OUT=$2

DPKG_DEB=dpkg-deb

PKGNAME=`basename $DIR`
BUILDREV=`date +%Y%m%d%H%M%S`
VERSION=`cat $DIR/DEBIAN/control | awk '$1~/^Version:/{print $2}'`

echo "Building $PKGNAME"
$DPKG_DEB -b $DIR $OUT/${PKGNAME}_$VERSION-${BUILDREV}_all.deb

The script is pretty simple. It takes two arguments, the path of the package and the directory to put the final package in, and it will even read the version number from the control file.
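The version extraction is the only vaguely tricky part: awk prints the second field of any line whose first field matches "Version:". For example:

```shell
# Create a throw away control file and pull the version out of it,
# the same way the build script does.
printf 'Package: my-meta\nVersion: 0.1\n' > /tmp/control-demo
VERSION=`awk '$1~/^Version:/{print $2}' /tmp/control-demo`
echo $VERSION   # prints 0.1
```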

To process all of the meta packages at once, simply run:

$ for pkg in `find /home/$USER/packaging/meta -mindepth 1 -maxdepth 1 -type d | egrep -v '(\.bzr|\.svn|\.git)'`; do /path/to/build-meta $pkg /home/$USER/packaging/built; done

Now you should have a nice collection of meta packages to deploy.

If you want to setup your own debian repository for your meta packages, I would recommend reading Ian Lawrence's reprepro howto.

I have found meta packages really simplify the tedious task of setting up common package sets - especially for server roll outs.

Update: If you are storing your meta packages under version control, as you should be, there is a gotcha. If you build the debs directly from a subversion checkout, the .svn directory is included - so make sure you svn export your meta packages first. The same principle applies to other version control systems.

Offer of the Day

This turned up in my inbox this morning and I thought I would share it with people.

Good day

I have on several occasions received email from some other debian consultants not just you they've all been asking me to introduce debian to every institution in my country; you must understand that though am very interested, we are talking about a number that would almost run into infinity.

It is true that my Government can bear the cost of importing up to 500Million CDs but the fact remains that I personally do not understand the Software or what it's used for, as such I can't propose it to the senate this is one aspect that we have to discuss in detail about, preferably via my private email which am presently using to write you.

Kindly get back to me so we can discuss about this software and it's benefit to the users if it's beneficial then I promise we can impose it on my country just like Microsoft and make money out of it like you proposed but most important is that you get back to me with details.

Best regards

Abubakar Maikafi
Email: [email protected]
Phone: +234-07025419252.

He is obviously after full CD sets of Debian if he wants half a billion CDs. I am not in a position to supply this quantity of discs, but if you are, please feel free to contact Abubakar Maikafi about his needs.

Usually I only get unrelated spam, or resumes from Indian coders looking for .NET on Windows work, via my Debian Consultants listing. This one even slipped past SpamAssassin and made my morning.

OS X and Macs - the Windows killer?

For the last week I have almost exclusively been using a PowerPC Mac - claimed by Apple to be a great platform just a few years ago. Personally, I think that Mac OS X is an interesting platform, but the Mac hasn't grabbed me.

On the up side, OS X (and Darwin) is based on BSD, so it has some good security foundations, and it uses many tools common to Linux, such as bash and CUPS. The 3D desktop effects are kinda cool for the first day, but then just become part of the day to day experience. I am yet to see a real advantage to the OS X 3D desktop.

The Mighty Mouse is pretty slick. The scroll wheel feels very nice and is well positioned. The side buttons for Expose are addictive from the first day. A let down is that you have to change your preferences to enable the right button.

I don't claim to understand the whole Mac software management system, but from what I do know, you drop a disk image (a dmg file) into the Applications folder in Finder and it is installed. Want to remove it? Delete the folder. This is pretty neat, once you understand how it works. It reminds me of the klik package management system.

The file open dialog is a crazy hierarchical beast that works. Jumping between levels in a tree really works. It's a pity that more than 3 levels down it can involve some vertical scrolling, and that you need to select a file to get its full name if it is too long.

Now for the downsides of using a Mac running OS X.

The keyboard feels awful. This is one of the rare times I would recommend a Microsoft product, as an MS keyboard feels far better than an Apple keyboard. The standard Mac keyboard feels plasticy and the key travel doesn't feel right. I have used a range of keyboards over the years and the Mac keyboard feels awful. Maybe Apple should rebrand Logitech's kit, like Microsoft does.

My next complaint is key bindings. For as long as I can remember, [home] has taken you to the start of the current line and [end] to the end of it, yet many Mac apps ignore the [home]/[end] keys entirely. Windows, GNOME and KDE all bind [alt] [F4] to close window - but not the Mac. There are many other standard combinations ignored by Apple. Another annoyance is the apple key - for most things it functions like [ctrl] on a PC, but not in a shell, where it functions like an apple key and [ctrl] functions like a [ctrl] key under *nix. I have lost track of how many windows I have closed when trying to delete a word in the console (bash fiends know what I mean).

Then there is the inconsistent use of key combinations. In the console and some other apps, [apple] [arrow] loops through the windows of the application, but not in Apple Mail, which has decided that the combo expands/collapses message threads - very annoying when trying to compose a message while copying and pasting from another.

The maximise button doesn't actually maximise. I am not sure if it is up to the application or the window manager, but clicking maximise (the green circle) may increase or decrease the width or height of the window. When I click maximise, I expect the window to be maximised - or at the very least increased in dimensions.

The real deal breakers for me are the [home]/[end] keys and the inconsistent shortcuts; this and the other crazy behaviour of OS X means I won't be switching to a Mac anytime soon.

Over the next week I plan to load more FLOSS onto the Mac, such as Mozilla Thunderbird for email, which will join the Mozilla Firefox web browser and gvim - my preferred text editor. I doubt this will be enough to make me stick with OS X.

The indigo iMac G3 I landed last week is likely to be running Copland (a PPC port of Xubuntu) real soon now. I am still trying to work out what to do with Julie's Apple PowerBook G3, which currently runs Xubuntu 6.06.1 LTS, as Ubuntu has dropped support for PowerPC in Feisty. Maybe I can find other PowerPC machines to install Copland or Debian onto :)

I am yet to see how OS X is more user friendly, or an easier crossgrade path for Windows users, than a Linux desktop.