
Fixing Zimbra's Broken debs

As much as I love Zimbra, I find their Debian packaging frustrating. Why do they insist on shipping half-broken debs? I can excuse VMware for being too lazy to provide proper descriptions for their packages, although the generic "Best email money can buy" text seems a little lame. Failing to populate the "Provides" field is brain dead. It makes it impossible to install mailx on a server running Zimbra without pulling in another MTA.

I've created a simple workaround deb which provides mail-transport-agent and depends on zimbra-mta. The deb also symlinks the Zimbra sendmail binary to /usr/sbin/sendmail - where it belongs. Now mailx and other tools which depend on mail-transport-agent can be installed. The package should work with both Debian and Ubuntu.
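
For the curious, there isn't much to a package like this. A minimal sketch of the DEBIAN/control file might look like the following (the package name and maintainer details here are made up, and the real package on github may differ):

Package: zimbra-mta-provides
Version: 0.1
Section: mail
Priority: optional
Architecture: all
Depends: zimbra-mta
Provides: mail-transport-agent
Maintainer: Your Name <you@example.com>
Description: Meta package so Zimbra's MTA satisfies mail-transport-agent

The postinst script then just needs to symlink Zimbra's sendmail binary (somewhere under /opt/zimbra) to /usr/sbin/sendmail.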

The source is available on GitHub, or you can download a prebuilt, platform independent deb from GitHub's download manager. The package is released under the terms of the WTFPLv2.

I hope that Zimbra builds better debs and makes this package obsolete.

$100 Drupal Site Series: Part 4 - Platforms

So far in this series we have covered a potential target market and business plan, resources and infrastructure, and the tools required to deliver Drupal sites with a sale price of $100 per site. In this post I'll be covering some of the considerations when building Drupal platforms or distributions.

The sites which customers deploy will need to be based on a custom Drupal distribution or "distro". The distro should be modular and primarily driven by Features.

Customers shouldn't have to know anything about administering Drupal when they first buy their site. A customer should be able to turn functionality on and off as they want, through a simple user interface.

Features

The platform should contain a good collection of Features. The following list is an example of what you might offer customers:

  • Contact Form
  • Image Gallery
  • Products
  • Services
  • "Static" Pages
  • Blog
  • News
  • Mailing Lists
  • Social Network Integration
  • Office / Store Locations
  • Staff Profiles

When developing your list of things to include in the site, think in terms of functionality a small business would want, not what modules you should be using. The list of modules should be derived from the functionality, not the other way around.

As the features included in the platform will be modular and generically useful, you should consider releasing them publicly, either via your own Features server or as full projects on drupal.org.
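
To make that concrete, each item in the list above would typically end up as a Feature exported with the Features module. A hypothetical .info file for a "News" feature might declare its bundled components something like this (the component names are invented for illustration):

; news_feature.info
name = "News"
description = "News content type and listing page"
core = "6.x"
package = "$100 Site Features"
dependencies[] = "views"
features[node][] = "news"
features[views][] = "news_listing"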

On top of the features listed above you will probably need to include some custom glue code to enhance the user experience. In the first post in this series I discussed the target audience not having high-level computer skills, so the user interface should take this into account. Some of the language might need to be changed, form options modified to use sane defaults, and some options even hidden from the user.

Security

As each server may have hundreds or even thousands of sites running on it, security will be an important consideration. As with all servers, you should ensure it is properly locked down and only running the services you need. Apache should be configured to block access to most things except index.php, relevant client side files (images, CSS, JS) and the files directory. At the Drupal level you should make sure that things like the PHP filter module aren't enabled and that secure coding practices are adhered to. The user account given to your customer shouldn't be user 1; it should be user 2, with restricted permissions that only give them access to what they need.
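
As a rough illustration only (not a complete lockdown recipe), the Apache side of this could be handled with a snippet along the following lines in the platform's vhost. The docroot path is an assumption, and you may need to allow cron.php or run cron via drush:

<Directory /var/www/platform>
    # Deny direct requests to all PHP scripts...
    <FilesMatch "\.php$">
        Order allow,deny
        Deny from all
    </FilesMatch>
    # ...except Drupal's front controller. Static assets are unaffected.
    <Files "index.php">
        Order deny,allow
        Allow from all
    </Files>
</Directory>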

I strongly recommend that you read Cracking Drupal by Greg Knaddison.

Sales and Support

In order to attract customers you will need a site to promote the service and allow customers to sign up and hand over their credit card details. Drupal now offers three e-commerce projects: Drupal e-Commerce, Drupal Commerce and Ubercart. You should investigate which of these best suits your needs. The sales system will need some custom code to hook into Aegir, which will be managing the actual site deployments. The sales and support platform/s should be managed in a similar manner to the customer sites.

Once you have paying customers, you will also need to provide them with resources such as detailed documentation, video walkthroughs, forums and possibly a ticketing system. This can either be part of the sales site or a separate site. In the next instalment I'll cover support in more detail.

Deploying Platforms

The whole process needs to be highly automated; CPU cycles are a lot cheaper than workers. Building and deploying platforms should involve a few clicks or tweaking a configuration file. For example, platforms could be built as Debian (or Ubuntu) packages (aka debs) using an automated build process that runs a test suite against the code base before building a deb. The debs could then be deployed using Puppet, and a post installation script could notify Aegir that the platform has been installed successfully. The whole process could involve very little human interaction. Migrating client sites to upgraded platforms could also be automated using a simple script which adds a migrate task for each site.
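
As a very rough sketch of the build half of that pipeline, something like the following could sit behind a continuous integration job or a cron entry. Every path and the test runner are hypothetical, and the Puppet rollout and Aegir notification are only hinted at in the comments:

#!/bin/bash
# build-platform - hypothetical automated platform build
set -e

PLATFORM=~/build/my-platform   # checkout of the distro, with a DEBIAN/ dir
OUT=~/build/debs

# Run the platform's test suite first; a failure aborts the build.
$PLATFORM/tests/run-tests.sh

# Build a timestamped deb ready for the package repository.
dpkg-deb -b $PLATFORM $OUT/my-platform_$(date +%Y%m%d%H%M%S)_all.deb

# From here Puppet rolls the deb out to the web servers, and the package's
# postinst tells Aegir the new platform is available.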

What's Next?

Now that we have the service almost ready to go, we should look into how we are going to get customers to part with their cash and how we will support them once they have paid.

Hello Planet Ubuntu Users and Ubuntu Universe

One of my new (financial) year resolutions was to increase the readership of my blog. As part of this plan I have been trying to get my blog syndicated on relevant planets. The latest on my list has been Planet Ubuntu Users and the associated Ubuntu Universe. About 24 hours ago my blog was added to both aggregators. Thanks Tiago!

I run Ubuntu on my Dell D830 laptop (my primary machine). I run various flavours and versions of Ubuntu on desktops, servers and VMs both for my business and clients.

In 2009 I converted the local community-run internet cafe to Ubuntu. When they were running XP there were problems almost weekly; since the switch, the only real issue they have had was a failed automatic update for Firefox last week.

As for off the clock Linux time, my kids associate Tux with computers, while my partner is more than happy running LTS releases on her laptop. The only desktop OS my mother-in-law has used is Ubuntu Linux.

I have been active in the Australian Ubuntu LoCo for quite a few years.

My Ubuntu related posts are generally technical howtos. Most of the stuff is system administration related with a few Ubuntu home brew packaging and webapp development tips.

The non tagged feed of this blog contains a good mix of geeky stuff about whatever we are working on at DHC, rants about annoying things and the occasional thing from real life. I hope there is something there that you find useful. Happy reading.

Multi Core Apache Solr on Ubuntu 10.04 for Drupal with Auto Provisioning

Apache Solr is an excellent full text search engine based on Lucene. Solr is increasingly being used in the Drupal community for search. I use it for search on a lot of my projects. Recently Steve Edwards at Drupal Connect blogged about setting up a multi core Solr server on Ubuntu 9.10 (aka Karmic). Ubuntu 10.04LTS was released a couple of months ago and it makes the process a bit easier, as Apache Solr 1.4 has been packaged. An additional advantage of using 10.04LTS is that it is supported until April 2015, whereas support for 9.10 ends in 10 months, in April 2011.

As an added bonus, in this howto you will be able to auto provision Solr cores just by calling the right URL.

In this tutorial I will be using Jetty rather than Tomcat, which some tutorials recommend, as Jetty performs well and generally uses fewer resources.

Install Solr and Jetty

Installing Jetty and Solr just requires a single command:

$ sudo apt-get install solr-jetty openjdk-6-jdk

This will pull down Solr and all of its dependencies, which can be a lot if you have a very stripped down base server.

Configuring Jetty

Configuring Jetty is very straightforward. First we back up the existing /etc/default/jetty file like so:

sudo cp -a /etc/default/jetty /etc/default/jetty.bak

Then simply change your /etc/default/jetty to look like this (the changed lines are NO_START, VERBOSE and JETTY_HOST):

# Defaults for jetty see /etc/init.d/jetty for more

# change to 0 to allow Jetty to start
NO_START=0
#NO_START=1

# change to 'no' or uncomment to use the default setting in /etc/default/rcS 
VERBOSE=yes

# Run Jetty as this user ID (default: jetty)
# Set this to an empty string to prevent Jetty from starting automatically
#JETTY_USER=jetty

# Listen to connections from this network host (leave empty to accept all connections)
#Uncomment to restrict access to localhost
#JETTY_HOST=$(uname -n)
JETTY_HOST=solr.example.com

# The network port used by Jetty
#JETTY_PORT=8080

# Timeout in seconds for the shutdown of all webapps
#JETTY_SHUTDOWN=30

# Additional arguments to pass to Jetty    
#JETTY_ARGS=

# Extra options to pass to the JVM         
#JAVA_OPTIONS="-Xmx256m -Djava.awt.headless=true"

# Home of Java installation.
#JAVA_HOME=

# The first existing directory is used for JAVA_HOME (if JAVA_HOME is not
# defined in /etc/default/jetty). Should contain a list of space separated directories.
#JDK_DIRS="/usr/lib/jvm/default-java /usr/lib/jvm/java-6-sun"

# Java compiler to use for translating JavaServer Pages (JSPs). You can use all
# compilers that are accepted by Ant's build.compiler property.
#JSP_COMPILER=jikes

# Jetty uses a directory to store temporary files like unpacked webapps
#JETTY_TMP=/var/cache/jetty

# Jetty uses a config file to setup its boot classpath
#JETTY_START_CONFIG=/etc/jetty/start.config

# Default for number of days to keep old log files in /var/log/jetty/
#LOGFILE_DAYS=14

If you don't include the JETTY_HOST entry, Jetty will only bind to the local loopback interface, which is all you need if your Drupal webserver is running on the same machine. If you do set JETTY_HOST, make sure you configure your firewall to restrict access to the Solr server.

Configuring Solr

I am assuming you have already installed the Apache Solr module for Drupal somewhere. If you haven't, do that now, as you will need some config files which ship with it.

First we enable the multicore support in Solr by creating a file called /usr/share/solr/solr.xml with the following contents:

<solr persistent="true" sharedLib="lib">
 <cores adminPath="/admin/cores" shareSchema="true" adminHandler="au.com.davehall.solr.plugins.SolrCoreAdminHandler">
 </cores>
</solr>

You need to make sure the file is owned by the jetty user if you want it to be dynamically updated; otherwise change persistent="true" to persistent="false", don't include the adminHandler attribute and don't run the commands below. Also, if you want to auto provision cores, you will need to download the jar file attached to this post and drop it into the /usr/share/solr/lib directory (which you'll need to create).

sudo chown jetty:jetty /usr/share/solr
sudo chown jetty:jetty /usr/share/solr/solr.xml
sudo chmod 640 /usr/share/solr/solr.xml
sudo mkdir /usr/share/solr/cores
sudo chown jetty:jetty /usr/share/solr/cores

To keep your configuration centralised, symlink the file from /usr/share/solr to /etc/solr. Don't do it the other way around; Solr will ignore the symlink.

sudo ln -s /usr/share/solr/solr.xml /etc/solr/

Solr needs to be configured for Drupal. First we back up the existing config files, just in case, like so:

sudo mv /etc/solr/conf/schema.xml /etc/solr/conf/schema.orig.xml
sudo mv /etc/solr/conf/solrconfig.xml /etc/solr/conf/solrconfig.orig.xml

Now we copy the Drupal Solr config files from wherever you installed the module:

sudo cp /path/to/drupal-install/sites/all/modules/contrib/apachesolr/{schema,solrconfig}.xml /etc/solr/conf/

Solr needs the path to exist for each core's data files, so we create them with the following commands:

sudo mkdir -p /var/lib/solr/cores/{,subdomain_}example_com/{data,conf}
sudo chown -R jetty:jetty /var/lib/solr/cores/{,subdomain_}example_com

Each of the cores needs its own configuration files. We could implement some hacks to use a common set of configuration files, but that would make life more difficult if we ever have to migrate some of the cores. Just copy the common configuration into each of the cores:

sudo bash -c 'for core in /var/lib/solr/cores/*; do cp -a /etc/solr/conf/ $core/; done'

If everything is configured correctly, we should just be able to start Jetty like so:

sudo /etc/init.d/jetty start

If you visit http://solr.example.com:8080/solr/admin/cores?action=STATUS you should get some xml that looks something like this:

<?xml version="1.0" encoding="UTF-8"?>
<response>
	<lst name="responseHeader">
		<int name="status">0</int>
		<int name="QTime">0</int>
	</lst>
	<lst name="status"/>
</response>

If you get the above output, everything is working properly.

If you enabled auto provisioning of Solr cores, you should now be able to create your first core. Point your browser at http://solr.example.com:8080/solr/admin/cores?action=CREATE&name=test1&i... If it works you should get output similar to the following:

<?xml version="1.0" encoding="UTF-8"?>
<response>
	<lst name="responseHeader">
		<int name="status">0</int>
		<int name="QTime">1561</int>
	</lst>
	<str name="core">test1</str>
	<str name="saved">/usr/share/solr/solr.xml</str>
</response>

I would recommend using identifiable names for your cores, so for davehall.com.au I would call the core, "davehall_com_au" so I can easily find it later on.

Security Note: As anyone who can access your server can now provision Solr cores, make sure you restrict access to port 8080 to trusted IP addresses only.
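
A minimal iptables sketch, assuming 192.0.2.10 is the only web server that should be talking to Solr and that you aren't already managing the firewall with something like Shorewall or ufw:

sudo iptables -A INPUT -p tcp --dport 8080 -s 192.0.2.10 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 8080 -j DROP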

For more information on the commands available, refer to the Solr Core Admin API documentation on the Solr wiki.

Next in this series will be how to use this auto provisioning setup to allow Aegir to provision Solr cores as sites are created.

Making it Easier to Spawn php-cgi on Debian and Ubuntu

Apache is a great web server, but sometimes I need something a bit more lightweight. I already have a bunch of sites using lighttpd, but I'm planning on switching them to nginx. Both nginx and lighttpd use FastCGI for running php. Getting FastCGI up and running on Ubuntu (or Debian) involves a bit of manual work which can slow down deployment.

The normal process to get nginx and php-cgi up and running is to install the spawn-fcgi package, create a shell script such as /usr/local/bin/php-fastcgi to launch it, and then write a custom init script. After making both of these executable you need to run update-rc.d, and then finally you should be right to go. Each of these manual steps increases the likelihood of mistakes being made.
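
To give you an idea of what those manual steps boil down to, the wrapper script usually ends up running something like the following spawn-fcgi command. The address, port, user and pid file here are just common conventions, not anything mandated:

/usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 \
  -u www-data -g www-data \
  -P /var/run/php-fastcgi.pid \
  -f /usr/bin/php5-cgi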

Instead, I created a deb which contains a configurable init script. It is pretty simple: the init script calls spawn-fcgi with the appropriate arguments. All of the configuration is handled in /etc/default/php-fastcgi. The main config options are:

  • ENABLED - Enable (or disable) the init script. default: 0 (disabled)
  • ADDRESS - The IP address to bind to. default: 127.0.0.1
  • PORT - The TCP Port to bind to. default: 9000
  • USER - The user the php scripts will be executed as. default: www-data
  • GROUP - The group the php scripts will be executed as. default: www-data
  • PHP_CGI - The full path to the php-cgi binary. default: /usr/bin/php5-cgi

The last 3 variables are not in the defaults file as I didn't think many users would want or need to change them; feel free to add them if you need to.
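
A hypothetical /etc/default/php-fastcgi based on the options above might end up looking like this, with only ENABLED changed from its default:

# Start php-cgi via the init script
ENABLED=1

# Address and port php-cgi listens on
ADDRESS=127.0.0.1
PORT=9000

# Uncomment to override the defaults built into the init script
#USER=www-data
#GROUP=www-data
#PHP_CGI=/usr/bin/php5-cgi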

Once you have set ENABLED to 1, launch the init script by executing sudo /etc/init.d/php-fastcgi start. To check that it is running, run sudo netstat -nplt | grep 9000 and you should see /usr/bin/php5-cgi listed. Now you can continue configuring your webserver.

The package depends on php5-cgi and spawn-fcgi, which is available in Debian testing/squeeze and unstable/sid, along with Ubuntu karmic and lucid. For earlier versions of Ubuntu you can change the dependency in debian/control from spawn-fcgi to lighttpd, then disable lighttpd once it is installed so you can get spawn-fcgi. I haven't tested this approach and wouldn't recommend it.

You can grab the binary package from http://davehall.com.au/sites/davehall.com.au/files/php-fastcgi_0.1-1_all.deb and install it using dpkg, or build it yourself from the source tarball.

For more information on setting up nginx using php-cgi I recommend the linode howto - just skip the "Configure spawn-fcgi" step :)

Packaging Doctrine for Debian and Ubuntu

I have been indoctrinated into the "everything on production machines should be packaged" school of thought. Rather than bang on about that, I intend to keep this post relatively short and announce that I have created Debian (and Ubuntu) packages for Doctrine, the ORM for PHP.

The packaging is rather basic. It gets installed just like any other Debianised PEAR package: the files go in /usr/share/php, the package.xml and any documentation go into /usr/share/doc/<package>, and the tests are stored as examples in /usr/share/doc/<package>/examples. The generated package will be called php-doctrine_1.2.1-1_all.deb (or similar), to comply with the Debian convention of naming all PEAR packages php-<pear-package-name>_<version>_<architecture>.deb. I have only packaged 1.2.1, but the files can easily be adapted for other versions; some of the packaging is designed to be version agnostic anyway.

To create your own Doctrine deb, follow these steps (a combined command sketch follows the list):

  • Create a directory, such as ~/packaging/php-doctrine-1.2.1
  • Change into the new directory
  • Download my debian/ tarball and extract it in your php-doctrine-1.2.1 directory
  • Download the PEAR package tarball from the project website and extract it in your php-doctrine-1.2.1 directory
  • If you don't already have a standard Debian build environment setup, set one up by running sudo apt-get install build-essential
  • To build the package run dpkg-buildpackage -k<your-gpg-key-id> -rfakeroot. If you don't have a GPG key, drop the "-k<your-gpg-key-id>" from the command.
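
Putting it all together, a condensed run of the steps above might look like this (the paths are examples only):

$ sudo apt-get install build-essential
$ mkdir -p ~/packaging/php-doctrine-1.2.1
$ cd ~/packaging/php-doctrine-1.2.1
$ # extract the debian/ tarball and the Doctrine PEAR tarball here, then:
$ dpkg-buildpackage -k<your-gpg-key-id> -rfakeroot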

Now you should have a shiny new Doctrine deb. I think the best way to deploy it is using apt and a private package repository.

Update: @micahg on identi.ca pointed me to a Doctrine ITP for Debian. Hopefully Federico's work will mean I no longer need to maintain my own packaging of Doctrine.

Howto Setup a Private Package Repository with reprepro and nginx

As the number of servers I am responsible for grows, I have been trying to eliminate all non-packaged software in production. Although Ubuntu and Debian have massive software repositories, there are some things which just aren't available yet or are internal meta packages. Once the packages are built they need to be deployed to servers. The simplest way to do this is to run a private apt repository. There are a few options for building an apt repository, but the most popular and simplest seems to be reprepro. I used Sander Marechal's and Lionel Porcheron's reprepro howtos as a basis for getting my repository up and running.

nginx is a lightweight http server (and reverse proxy). It performs very well serving static files, which is perfect for a package repository. I also wanted to minimise the memory footprint of the server, which made nginx appealing.

To install the packages we need, run the following command:

$ sudo apt-get install reprepro nginx 

Then it is time to configure reprepro. First we create our directory structure:

$ sudo mkdir -p /srv/reprepro/ubuntu/{conf,dists,incoming,indices,logs,pool,project,tmp}
$ cd /srv/reprepro/ubuntu/
$ sudo chown -R `whoami` . # changes the repository owner to the current user

Now we need to create some configuration files.

/srv/reprepro/ubuntu/conf/distributions

Origin: Your Name
Label: Your repository name
Codename: karmic
Architectures: i386 amd64 source
Components: main
Description: Description of repository you are creating
SignWith: YOUR-KEY-ID

/srv/reprepro/ubuntu/conf/options

ask-passphrase
basedir .

If you have a package ready to load, add it using the following command:

$ reprepro includedeb karmic /path/to/my-package_0.1-1.deb
# change /path/to/my-package_0.1-1.deb to the path to your package
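
Once you have a few packages loaded, a couple of other reprepro commands come in handy. Run them from /srv/reprepro/ubuntu (or pass -b /srv/reprepro/ubuntu):

$ reprepro list karmic
# lists everything published for the karmic codename
$ reprepro remove karmic my-package
# removes my-package from the karmic codename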

Once reprepro is set up and you have some packages loaded, you need to be able to serve the files over http. I run an internal DNS zone called "internal", and so the package server will be configured to respond to packages.internal. You may need to change the server_name value to match your own environment. Create a file called /etc/nginx/sites-available/vhost-packages.conf with the following content:

server {
  listen 80;
  server_name packages.internal;

  access_log /var/log/nginx/packages-access.log;
  error_log /var/log/nginx/packages-error.log;

  location / {
    root /srv/reprepro;
    index index.html;
  }

  location ~ /(.*)/conf {
    deny all;
  }

  location ~ /(.*)/db {
    deny all;
  }
}

Next we need to increase the server_names_hash_bucket_size. Create a file called /etc/nginx/conf.d/server_names_hash_bucket_size.conf which should just contain the following line:

server_names_hash_bucket_size 64;

Note: Many sites advocate sticking this value in the http section of the /etc/nginx/nginx.conf config file, but in Debian and Ubuntu /etc/nginx/conf.d/*.conf is included in the http section. I think my method is cleaner for upgrading and clearly delineates the stock and custom configuration.

To enable and activate the new virtual host run the following commands:

$ cd /etc/nginx/sites-enabled
$ sudo ln -s ../sites-available/vhost-packages.conf .
$ sudo service nginx reload

You should get some output that looks like this:

Reloading nginx configuration: the configuration file /etc/nginx/nginx.conf syntax is ok
configuration file /etc/nginx/nginx.conf test is successful
nginx.

Now you can add the new repository to your machines. I recommend creating a file called /etc/apt/sources.list.d/packages.internal.list and putting the following line in it:

deb http://packages.internal/ubuntu/ karmic main

To make the machine aware of the new repository and associated packages, simply run:

$ sudo apt-get update

That's it. Now you have a lightweight package repository with a lightweight webserver - perfect for running in a virtual machine. Depending on your setup you could probably get away with 256MB of RAM and a few gigabytes of disk.

Packaging Drush and Dependencies for Debian

Lately I have been trying to avoid having non-packaged software installed on production servers. The main reason for this is to make it easier to apply updates. It also makes it easier to deploy new servers with meta packages when everything is pre-packaged.

One tool which I am using a lot on production servers is Drupal's command line tool, drush. Drush is awesome; it makes managing Drupal sites so much easier, especially when it comes to applying updates. Drush is packaged for Debian testing, unstable and lenny backports by Antoine Beaupré (aka anarcat) and will be available in universe for Ubuntu lucid. Drush depends on PEAR's Console_Table module and includes some code which automagically installs the dependency from PEAR CVS. The Debianised package includes the PEAR class in the package, which is handy, but if you are building your own debs from CVS or the nightly tarballs, the dependency isn't included. The auto installer only works if it can write to /path/to/drush/includes, which in these cases means calling drush as root; otherwise it spews a few errors about not being able to write the file and then dies.

A more packaging friendly approach would be to build a Debian package for PEAR's Console_Table and have that as a dependency of the drush package in Debian. The problem with this approach is that drush currently only looks in /path/to/drush/includes for the PEAR class. I have submitted a patch which first checks whether Console_Table has been installed via the PEAR installer (or another package management tool). Combine this with the Debian source package I have created for Console_Table (see the file attached at the bottom of the post) and you can have a modular, apt managed instance of drush, without having to duplicate code.

I have discussed this approach with anarcat; he is supportive, and hopefully it will be the approach adopted for drush 3.0.

Update: The drush patch has been committed and should be included in 3.0alpha2.

DRBD on Ubuntu Karmic

Ubuntu 9.10 (aka karmic koala) has a frustrating packaging bug. Even though the stock server kernel includes the DRBD module, the drbd8-utils package depends on drbd8-source. drbd8-source uses DKMS to build the drbd module to match the installed kernel/s. As I stated in the bug report (lp:474660), I "really don't like having build-essential installed on production net facing servers, and where possible any production servers".

Aside from personal opinion on whether the module should be bundled or not, the fact is that it is bundled, and so there is no need for the dependency on drbd8-source. As a workaround I have added a meta package to provide drbd8-source, so I don't need to install build-essential and build the module every time a new kernel is installed.

After a quick test it is working well. Here is the DEBIAN/control file I used to make it all happen:

Package: dhc-drbd8-source-hack
Version: 0.1
Section: meta
Priority: optional
Architecture: all
Provides: drbd8-source
Installed-Size:
Maintainer: Dave Hall <[email protected]>
Description: Package to hack around drbd8-source dependency for drbd8-utils

If you are unsure how to use the control file above, see my recent blog post on building meta packages for Ubuntu and Debian.

Setting up a private package repository is outside the scope of this post. If you want to set one up, I would recommend Sander Marechal's slightly dated howto - Setting up and managing an APT repository with reprepro. With a few changes I found it worked well.

If you have your own repository running, you can simply run:

sudo apt-get install dhc-drbd8-source-hack drbd8-utils

If you don't, run the following instead:

sudo dpkg -i /path/to/dhc-drbd8-source-hack*.deb && sudo apt-get install drbd8-utils

Either way you should now have drbd8-utils installed on Ubuntu karmic without having to install the redundant drbd8-source package.

To take it a step further, you could build a meta package which installs both drbd8 packages and allows for a potentially smoother upgrade to lucid. That meta package's control file would contain the following line:

Depends: dhc-drbd8-source-hack, drbd8-utils

This is similar to what I now have in my HA server meta package.

Building Debian (and Ubuntu) Meta Packages

Over the last few weeks I have been building a bunch of Debian packages (aka debs) for a new Ubuntu server roll out. Most of the packages are either updates to existing packages or meta packages. Building meta packages is pretty easy, once you know how.

I will outline how to build a simple package which pulls in a couple of useful packages.

First off we need to create the directory structures and files. I do all of my packaging work in /home/$USER/packaging, with my meta packages living in a directory called meta.

For your first package run the following command:

$ mkdir -p /home/$USER/packaging/meta/my-meta/DEBIAN

The key to creating meta packages is the "control" file. I have a very generic package I use for all of my servers, called dhc-base. This pulls in what I consider to be the minimum dependencies needed for a basic server. My ~/packaging/meta/dhc-base/DEBIAN/control file looks something like this:

Package: dhc-base
Version: 0.1
Section: main
Priority: standard
Architecture: all
Depends: dhc-archive-keyring, fail2ban, iptables, openssh-server, screen, shorewall, ubuntu-minimal
Maintainer: Dave Hall <EMAIL-ADDRESS>
Description: Base install meta package for boxes administered by Dave Hall Consulting

The fields should all be pretty self explanatory. The key one is "Depends", which lists all of the packages you want your package to pull in. I try to keep the list alphabetical so it is easier to manage.

In my example I pull in some basic things which I use all the time, as well as the GPG signing key for my packages, which I have also packaged - I may blog how to do that one day too.

Now we are ready to build the package. Simply run:

$ dpkg-deb -b /home/$USER/packaging/meta/my-meta

and wait for dpkg-deb to work its magic. Now you should have a shiny new deb called my-meta.deb sitting in /home/$USER/packaging/meta.

If you have a bunch of meta packages to build, it can become tedious to have to run the command over and over again, and each time the new package will overwrite the previous version. To save myself some effort I wrote a little shell script which builds a package and gives it a nice version number too.

#!/bin/bash
#
# build-meta - dpkg-deb wrapper script for building meta packages
#
# Developed by Dave Hall Consulting
#
# Copyright (c) 2009 Dave Hall Consulting - http://davehall.com.au
#
# You may freely use and distribute this script as long as the copyright
# notice is preserved

function usage {
  SCRIPT=`basename $0`
  echo "Usage: $SCRIPT package-path output-path"
}

if [ $# != 2 ]; then
  usage $0
  exit 1
fi

DIR=$1
OUT=$2

DPKG_DEB=dpkg-deb

PKGNAME=`basename $DIR`
# Use a timestamp as the build revision so each build gets a unique filename
BUILDREV=`date +%Y%m%d%H%M%S`
# Read the version number from the package's control file
VERSION=`awk '$1~/^Version:/{print $2}' $DIR/DEBIAN/control`

echo "Building $PKGNAME"
$DPKG_DEB -b $DIR $OUT/${PKGNAME}_$VERSION-${BUILDREV}_all.deb

The script is pretty simple. It takes two arguments: the path of the package and the directory to put the final package in. It will even read the version number from the control file.

To process all of the meta packages at once, simply run:

$ for pkg in `find /home/$USER/packaging/meta -mindepth 1 -maxdepth 1 -type d | egrep -v '(.bzr|.svn|.git)'`; do /path/to/build-meta $pkg /home/$USER/packaging/built; done

Now you should have a nice collection of meta packages to deploy.

If you want to setup your own debian repository for your meta packages, I would recommend reading Ian Lawrence's reprepro howto.

I have found meta packages really simplify the tedious task of setting up common package sets - especially for server roll outs.

Update: If you are storing your meta packages under version control, as you should be, there is a problem: if you build the debs directly from a Subversion checkout, the .svn directory is included - so make sure you svn export your meta packages. The same principle applies to other version control systems.
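
For example, with Subversion the build might look something like this (using the paths from earlier in this post):

$ svn export /home/$USER/packaging/meta/my-meta /tmp/my-meta
$ dpkg-deb -b /tmp/my-meta /home/$USER/packaging/built/my-meta.deb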