
My first Perl 6 code

Rakudo Star, a useful, usable, “early adopter” distribution of Perl 6, was released today. And later this evening an MSI installer for Windows was released.

As soon as I had fetched the installer I installed it on my laptop and started up the Rakudo REPL, an interactive Perl 6 shell. After a quick peek in the Using Perl 6 PDF I tried the following code.

> say "test";
test
> my $line = 'la di da';
la di da
> $line.split(' ');
la di da
> $line.split(' ').join('+');
la+di+da

I know it’s a completely useless example but it’s cool how it just worked on my first try. Think I’ll experiment some more with Rakudo this weekend.

PHP 5.2.14 released but don’t get excited yet

Yesterday PHP 5.2.14 was released, together with PHP 5.3.3. I used to get excited about PHP updates, but no more, because of a number of disappointments I’ve talked about earlier. But that’s not really the reason not to get excited this time.

So, why shouldn’t we be excited about PHP 5.2.14? The changelog contains a lot of security enhancements and bugfixes. There is nothing wrong with this, quite the opposite. It’s good! But there’s one thing that is completely unacceptable to me and that’s the sudden announcement of the end-of-life for PHP 5.2.

This release marks the end of the active support for PHP 5.2. Following this release the PHP 5.2 series will receive no further active bug maintenance. Security fixes for PHP 5.2 might be published on a case by case basis. All users of PHP 5.2 are encouraged to upgrade to PHP 5.3.

Yes yes I know, we can all happily migrate to PHP 5.3, what’s the big deal huh? Well, how about announcing the end-of-life of a product in advance? I don’t mind it being killed off, but at least let your users know beforehand. I remember the end-of-life of PHP4 being announced well in advance, giving users a good year to prepare for the upgrade whilst still receiving (possible) security updates. Yes yes I know, they said security fixes might be published on a case by case basis. The keyword here is might.

Tell me, do bigger companies and/or enterprises accept this? In my experience it isn’t appreciated very much. Organizations need time to plan these kinds of migrations.

For me this is again a sign of PHP failing.

Is there a way to share loaded Perl modules amongst users?

I was wondering if there’s a way to have a Perl process share some commonly used modules (such as DBI, DBIx::Class and Template) amongst different users. I’ve looked at FCGI::Spawn but it didn’t seem capable of it. What I’d like to do is run a single Perl daemon which has some modules preloaded and is able to spawn other processes (e.g. a Catalyst application) that share the same memory for loaded modules. I know that’s possible (Starman does it, for example), but that’s meant for a single user.

Instead of a single user I want this process to do what suexec does for Apache: suexec will only launch a CGI/FastCGI script if the owner and group are defined correctly. Once launched, the process runs as the owner of the script, a normal user.

My current problem (well, not current, but it will be in the future) is that through suexec you can run many web applications, like Catalyst, Dancer and Mojolicious. The thing is, the commonly used modules will be loaded by each process. So if there are 3 users with a Catalyst website using Moose, DBIx::Class and Template, these modules need to be loaded into memory for every user process. They can’t share the same module space (for some modules this would be bad, of course). When running a few dozen smallish websites this will eat up RAM quickly.
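The memory saving I’m after can be sketched with a plain fork: anything compiled before the fork sits in pages the children share copy-on-write. This is only a sketch of the idea, not a working suexec replacement; the uids are made up and the privilege drop is just indicated in a comment.

```perl
use strict;
use warnings;
use Data::Dumper ();   # stands in for a heavy shared module (DBI, Moose, ...)

my @pids;
for my $uid (1001, 1002) {   # hypothetical uids of the script owners
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: when started as root, this is where you'd drop
        # privileges (e.g. POSIX::setuid($uid)), like suexec does.
        # Data::Dumper was already compiled in the parent, so its
        # pages are shared copy-on-write instead of loaded again.
        exit 0;
    }
    push @pids, $pid;
}
waitpid($_, 0) for @pids;
print "spawned ", scalar(@pids), " workers from one preloaded parent\n";
```

The hard part, and the part this sketch doesn’t solve, is doing this per user on demand rather than for a fixed list at startup.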

I came across Plack::App::Apache::ActionWrapper, which partly solves this problem for a single user with multiple PSGI applications. So far I haven’t been able to make it work for a single user, let alone multiple. But I had hoped it would be possible to use a single wrapper like this, preload some modules, and use it for all users.

I suppose I have to ponder about it a bit more, although I wonder if it’s even possible. Yes, mod_perl can preload modules, but that means executing Perl as the user running the webserver process. I prefer PSGI or FastCGI.

How a programming language influences your mood

For over 3.5 years now I’ve been programming professionally in PHP. First in PHP4, and about half a year later we finally converted to PHP5. At first I was excited because I could use a lot of new features PHP5 offered, such as proper OOP, but also Zend Framework.

I started using Zend Framework at 1.5 (currently it’s at 1.10) and have very mixed feelings about it. Yes, it does have a lot of useful libraries, such as MVC, basic ACL, authorization, input validation & filters, views (with PHP as the templating language, which it essentially was designed for) and more. But ZF had a steep learning curve for me, even though so far it has been successfully used in about 5 big web applications.

However, PHP’s inconsistent function naming, unpredictable order of expected arguments (needle and haystack, anyone?), lack of closures/anonymous functions, namespacing, lexical scope and what not make using PHP a true nightmare. Yes, PHP 5.3 finally supports closures and namespacing, but have you looked at the syntax for both? What a joke. On top of that, PEAR is an even bigger joke. Is anyone using it at all?

Running into these issues day after day, knowing they can easily be solved in Perl, is very, very depressing. It has totally taken away my desire to program and made my motivation disappear. I’ve truly been wondering whether a programming job is really something I want.

The turnaround came when I had a talk with my boss about why we’d still use PHP for future projects. Yes, code reuse is one of them, but all the additional code I’ve written in PHP for reuse already exists in Perl on CPAN. To my understanding the use of backslashes to define a namespace in PHP wasn’t really a design choice, but was forced upon the developers because it was too hard to use the double colon or a dot, like many other languages do. On top of that, last time I checked, PHP6 development had been halted because they just can’t get Unicode to work.

Perl already supports Unicode, Perl doesn’t have a weird separator character for namespaces, Perl supports closures/anonymous subroutines, Perl supports lexical scoping, the core list of functions is small and easy to remember, and the order of expected parameters is consistent. And it has CPAN.
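To illustrate two of those points, here’s what closures over lexical variables look like in Perl (a trivial counter, purely for illustration):

```perl
use strict;
use warnings;

# make_counter returns an anonymous sub that closes over the
# lexical $count; each returned counter gets its own private copy.
sub make_counter {
    my $count = 0;              # lexically scoped, invisible outside
    return sub { return ++$count };
}

my $first  = make_counter();
my $second = make_counter();
$first->(); $first->();
print $first->(), ' ', $second->(), "\n";   # prints "3 1": independent state
```

No special keywords, no awkward `use` clauses to import outer variables, it just works.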

I was able to convince my boss to start using Perl for future projects based on these points for both Perl and PHP. What helped, though, was that he knew of Perl and we’ve already developed some applications in it: a Wx application, a newsletter mailer and some other small scripts. The difference now is that I can use it for web applications as well.

Now that I’ve got the green light to go with Perl for future projects I’m much happier at work again. I’m more motivated to finish the current PHP projects so I can finally start doing some proper Perl work. Programming Perl makes me happy. Moving from PHP to Perl doesn’t mean we’ll be dropping PHP completely. That would be bad and ignorant. Existing applications will still be supported, improved and extended with new features.

No matter how much you dislike one of the products you support, it’s part of the job. Having stuff you dislike is good actually, because it makes the fun stuff even more fun. For me, PHP makes Perl more fun :-).

Distributed jobs with Gearman

The first time I heard of Gearman was at Stack Overflow, where a question was asked on how to stop workers nicely, to which Cletus made an excellent joke: “See now I was going to reply ‘Bitte halten Sie!’ :-)”. Since then Gearman has been stuck in my mind. So far I haven’t had the chance to use it for anything, but for my current project Maximus I need to be able to distribute jobs for several tasks, such as uploading files, fetching from SCM repositories and so on. And Gearman is perfect for that kind of job.

The Gearman server was originally implemented in Perl but has since moved to C. I’m not sure if the Perl implementation is still being worked on, but its most recent release was in January this year.

In this post I’ll demonstrate how to set up a simple client and worker. The client sends tasks to the Gearman server and the worker registers itself with the server. When the server receives a task it checks if there’s a worker available and delegates the task to it. Once the worker has finished the job it notifies the server, and in turn the server notifies the client that the requested job is done. It’s also possible not to wait for the task to finish; whether you want this depends on your specific situation and the job that needs to be executed.


First, install the required modules. Note that Danga::Socket isn’t stable on Windows, so there Gearman might occasionally fail a job. So far I haven’t had any issues on Linux. We’ll also be installing some additional modules that we’re going to use for the client and worker.

$ cpanm Gearman::Server
$ cpanm http://search.cpan.org/CPAN/authors/id/D/DO/DORMANDO/Gearman-1.11.tar.gz
$ cpanm WWW::Mechanize
$ cpanm Archive::Zip
$ cpanm JSON::Any


Now let’s make a module that contains the functionality that’ll be used by the worker. We’re making this modular so it’s easier to test these components; both the client and the worker scripts should be just that, scripts. The functions in this module can fetch the thumbnail links from an Altavista image search page (the number of pages to scan is limited to 20, but this can be adjusted) and there’s also a function to archive the download directory.

package MyApp::Functions;
use strict;
use warnings;
use Archive::Zip qw(:ERROR_CODES :CONSTANTS);
use Exporter 'import';
use File::Basename;
use File::Spec;
use LWP::Simple;
use WWW::Mechanize;

our @EXPORT_OK = qw(fetch_image_links download_images archive_downloads);

=head2 fetch_image_links

Fetch all image (thumbnail) links from the search results page

=cut

sub fetch_image_links {
	my ($keyword, $limit) = @_;
	$keyword ||= 'perl';
	$limit   ||= 20;

	my $mech = WWW::Mechanize->new;
	$mech->get('http://www.altavista.com/image/results?q=' . $keyword);

	my $count = 0;
	my @images;
	do {
		foreach my $image ( $mech->images ) {
			next if $image->tag() ne 'img';
			next unless index($image->url(), 'nimage') > 0;
			push @images, $image->url();
		}
		$count++;
	} while ( $count < $limit && $mech->follow_link( text_regex => qr/>>/ ) );

	return @images;
}

=head2 download_images

Download every supplied image from the list

The thumbnails from Altavista are in JPEG format. If you choose to use and/or
modify this code for your own needs, be sure to save the file in the correct
format.

=cut

sub download_images {
	my @images = @_;

	mkdir('download') unless -d 'download';

	foreach (@images) {
		my $filepath = File::Spec->catfile('download', basename($_) . '.jpg');
		next if -f $filepath;

		my $content = get($_);
		open my $fh, '>', $filepath or die($!);
		binmode $fh;
		print $fh $content;
		close $fh;
	}
}

=head2 archive_downloads

Archive the download directory

=cut

sub archive_downloads {
	my $zip = Archive::Zip->new();
	my $name = 'backup-' . time();
	$zip->addTree('download', $name);

	my $status = $zip->writeToFileNamed( $name . '.zip' );
	die "Archiving failed!" if $status != AZ_OK;
}

1;



Now that we’ve got our functionality in place it’s time to set up the worker. This worker provides 2 functions: fetch_thumbnails will do a search, collect the thumbnail links and download them to the download directory; archive_downloads will create a Zip archive with the contents of the download directory.

use strict;
use warnings;
use lib './lib';
use Gearman::Worker;
use MyApp::Functions qw(fetch_image_links download_images archive_downloads);
use JSON::Any;

my $worker = Gearman::Worker->new;

# Using JSON to unserialize arguments
my $json = JSON::Any->new;

# fetch_thumbnails: Search and fetch thumbnails
$worker->register_function('fetch_thumbnails', sub {
	my @images = fetch_image_links( @{ $json->decode($_[0]->arg) } );
	download_images( @images );
});

# archive_downloads: Archive download directory
$worker->register_function('archive_downloads', \&archive_downloads);

$worker->work while 1;


A worker needs something to do, so let’s create a client that can dispatch tasks to it. Our client will create a taskset containing the fetch_thumbnails and archive_downloads tasks. This taskset is sent to the server, which distributes it to the workers. When the workers are done the client gets a signal to continue. After this we’ll execute another archive_downloads task; with do_task the client will wait for the task to finish. Finally, a third archive_downloads task is dispatched to the background. The client will finish instantly after this, even if the task isn’t done yet; the task will still be executed by a worker when one is available.

use strict;
use warnings;
use lib './lib';
use Gearman::Client;
use JSON::Any;

my $client = Gearman::Client->new;

# Using JSON to serialize arguments
my $json = JSON::Any->new;

# Create a taskset that will search and download thumbnails and finally archives
# it to a ZIP-file.
my $taskset = $client->new_task_set;

$taskset->add_task('fetch_thumbnails', $json->encode(['perl', 5]), {
	on_complete => sub {
		print "Downloaded all thumbnails\n";
	},
});
$taskset->add_task('archive_downloads');

# Run the taskset and wait for it to complete
$taskset->wait;

# Create an archive and wait for it
$client->do_task('archive_downloads');

# And create a third archive, but don't wait for this one
$client->dispatch_background('archive_downloads');

Try it!

With all the code in place, open 3 terminals: one for gearmand, one for the worker and one for the client. Start them in this order.

$ gearmand --debug=1
$ perl worker.pl
$ perl client.pl

The terminal running client.pl should display the following:

Downloaded all thumbnails

Check the download directory to see all the thumbnails. The root directory of the scripts should contain 2 zip archives. If there are fewer, then the jobs were executed and finished within the same second. Congratulations, you have a very fast machine! The name of the archive contains a timestamp; change the code to use something more unique.
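For that last point, one option (my own suggestion, not something from the Gearman docs) is to mix the process id and a random suffix into the name that archive_downloads builds:

```perl
use strict;
use warnings;

# time() alone collides when two archives are written within the same
# second; adding the pid and a random suffix makes that practically
# impossible, even with several workers on one machine.
sub unique_archive_name {
    return sprintf 'backup-%d-%d-%04d', time(), $$, int(rand(10000));
}

print unique_archive_name(), "\n";
```

Swap this in for the `'backup-' . time()` line in archive_downloads and every run gets its own file.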

Please do note…

Do note that the example code won’t really work with multiple clients and workers, because the thumbnails are stored in the same directory and the archive_downloads task simply creates a snapshot of the download directory. I’ll leave figuring that out to you as an exercise; this post would simply become too long if it covered that as well. This is merely an introduction to Gearman.

As a final note, I do realize we have Storable for serialization, but somehow it got borked on my Strawberry Perl installation.


There’s also a Plack/PSGI script available on GitHub to retrieve Gearman statistics. I haven’t tried it, so I don’t know if it’s even compatible with the Perl server implementation, but I thought it was worth mentioning.


All code supplied here is licensed under the MIT license.