
My first proper introduction to Mojolicious

The first time I took a look at Mojolicious was almost a year ago, if I’m not mistaken. My Perl was really rusty back then and the Mojolicious documentation, or rather the lack of it, didn’t help much either. Still, I followed the project with interest through GitHub. Development on the project, especially in the last few months, has been very active. The author, Sebastian Riedel, seems very devoted to the project and his productivity is very inspiring.

By now Mojolicious has matured a lot in my eyes. The documentation is a lot better than it was and is still being improved. To my understanding he’s nearing a feature freeze, which means a proper stable 1.0 version will be out soon. For me this also means I can start using Mojolicious without the agony of backwards incompatibility with every small update.

What I like about Mojolicious is that it’s pretty lightweight, has no dependencies besides core modules and already supports HTML5 features like WebSockets. Now, I don’t know yet precisely what WebSockets are good for (the Wikipedia entry isn’t very helpful either), but it shows that Mojolicious is geared toward modern techniques. To my understanding a WebSocket lets the browser keep a connection open with the web application, so you don’t need a polling system to retrieve updates, which is something Ajax is being abused for at the moment.
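
Just to give an idea of what that looks like, here’s a minimal WebSocket echo route in Mojolicious::Lite syntax. This is a sketch written against the current documentation, so the exact method names may differ from the version you’re running:

#!/usr/bin/perl
use Mojolicious::Lite;

# Minimal WebSocket echo: the browser keeps the connection open and the
# server pushes messages back over it, no polling required.
websocket '/echo' => sub {
	my $c = shift;
	$c->on(message => sub {
		my ($c, $msg) = @_;
		$c->send("echo: $msg");
	});
};

app->start;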

Because Mojolicious doesn’t use any other modules it has its own templating system as well. It’s pretty feature complete, but I can’t cope with the syntax. Others have blogged about this as well. Templates look horrible and are unreadable, which is a shame. A more Template-Toolkit-like syntax would’ve been nicer, but that would also mean reinventing the wheel, again, which is what Mojolicious is doing quite a bit (e.g. Mojo::JSON). Nothing wrong with that though, as one of the key points of Mojolicious is to be independent of anything but core modules.

Luckily, it’s easy enough to replace the default renderer with another one. MojoX::Renderer::TT nicely wraps up Template-Toolkit, and you can set it up as the default renderer inside the startup method of your application.
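
Something like the following, sketched from the MojoX::Renderer::TT documentation as I understand it; the option names and template path may differ for the version you install:

package MyApp;
use Mojo::Base 'Mojolicious';
use MojoX::Renderer::TT;

sub startup {
	my $self = shift;

	# Build a Template-Toolkit handler and register it as the default.
	my $tt = MojoX::Renderer::TT->build(
		mojo             => $self,
		template_options => { INCLUDE_PATH => 'templates' },
	);
	$self->renderer->add_handler(tt => $tt);
	$self->renderer->default_handler('tt');

	# routes etc. go here
}

1;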

One tiny thing I’m missing, and maybe I’ve overlooked it, is a standard way of using configuration files. I’m used to the Catalyst way of doing this, which is a breeze, but so far I haven’t found a standard way to do this in Mojolicious. I suppose I can easily fix this myself with Config::Any.
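
Something along these lines inside startup should do the trick; 'myapp.conf' and the my_config attribute are names I made up for this sketch:

use Config::Any;

# Inside startup(): load whatever config file is present and keep it
# around on the application object.
my $loaded = Config::Any->load_files({
	files   => ['myapp.conf'],
	use_ext => 1,    # pick the parser based on the file extension
});

# load_files returns an arrayref of { filename => config } hashrefs.
my ($config) = values %{ $loaded->[0] };
$self->attr(my_config => sub { $config });    # later: $app->my_config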

So far my hands-on experience with Mojolicious has been limited, but it did the job for me. All I needed was a simple web application that can log usage through a REST API. Together with Test::Mojo, writing tests was a breeze. I did have to go through the documentation quite a bit to figure out where and what everything was, but that’s no different with any other framework you’re learning. Whilst testing I ran into a tiny problem which turned out to be my fault, not Mojolicious’. Still, Sebastian was kind enough to promptly respond to my question on IRC and was very helpful.

I didn’t use Mojolicious::Lite, by the way. Whilst it would’ve worked perfectly for my use case, I don’t yet see how easy it is to convert a Mojolicious::Lite web application into a normally structured Mojolicious web application. It would’ve made implementing the REST API easier, since Mojolicious::Lite has route helpers for PUT, POST, GET, DELETE and HEAD requests (see the sketch below). But since the web application will grow into something bigger in the future, I don’t want to rewrite the Lite edition into a structured version later. I don’t have time for that.
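
For the record, this is roughly what those route helpers look like in Mojolicious::Lite. The /widget resource and the JSON responses are made up for the example, and it’s written against the current documentation:

#!/usr/bin/perl
use Mojolicious::Lite;

get '/widget/:id' => sub {
	my $c = shift;
	$c->render(json => { id => $c->param('id') });
};

post '/widget' => sub {
	my $c = shift;
	$c->render(json => { created => 1 }, status => 201);
};

put '/widget/:id' => sub {
	my $c = shift;
	$c->render(json => { updated => $c->param('id') });
};

# 'del' instead of 'delete', which is already a Perl built-in
del '/widget/:id' => sub {
	my $c = shift;
	$c->render(json => { deleted => $c->param('id') });
};

app->start;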

Compared to my experience from a year ago it was fun to use Mojolicious this time. I’m certainly going to use it for more small and low-traffic web applications. For now I’ll stick with Catalyst for the big web applications, but that’s because there are already a lot of extra modules for it, it has excellent documentation, it’s highly configurable and I bought an expensive Catalyst book (don’t worry, I like it and ordered it through the Catalyst website so the Enlightened Perl Organisation gets a donation).

Is there a way to share loaded Perl modules amongst users?

I was wondering if there’s a way to have a Perl process share some commonly used modules (such as DBI, DBIx::Class and Template) amongst different users. I’ve looked at FCGI::Spawn but it didn’t seem capable of it. What I’d like to do is run a single Perl daemon which has some modules preloaded and is able to spawn other processes (e.g. a Catalyst application) that share the same memory for loaded modules. I know that’s possible, Starman does it for example, but that’s meant for a single user.

Instead of a single user, I want this process to do what suexec does for Apache. suexec will only launch a CGI/FastCGI script if the owner and group are set correctly. Once launched, the process runs as the owner of the script, a normal user.

My problem (well, not a current one, but it will be in the future) is that through suexec you can run many web applications, like Catalyst, Dancer and Mojolicious, but the commonly used modules will be loaded by each process separately. So if there are 3 users each with a Catalyst website using Moose, DBIx::Class and Template, every user’s process has to load these modules into memory again; they can’t share the same module space (for some modules that would be bad, of course). When running a few dozen smallish websites this will eat up RAM quickly.
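
To make the idea a bit more concrete, here’s a rough and completely untested sketch of what I’m after: preload the heavy modules once in a parent process, fork a child per user, drop privileges the way suexec does, and then load that user’s PSGI application in-process so the preloaded module pages stay shared through copy-on-write. All the usernames, paths, UIDs and ports below are made up.

#!/usr/bin/perl
use strict;
use warnings;
use POSIX ();

# Preload the expensive, commonly used modules once in the parent.
require Moose;
require DBIx::Class;
require Template;

# Hypothetical per-user configuration.
my %apps = (
	alice => { uid => 1001, gid => 1001, psgi => '/home/alice/app.psgi', port => 5001 },
	bob   => { uid => 1002, gid => 1002, psgi => '/home/bob/app.psgi',   port => 5002 },
);

for my $user (sort keys %apps) {
	my $cfg = $apps{$user};
	defined(my $pid = fork) or die "fork failed: $!";
	next if $pid;    # parent keeps spawning children

	# Child: drop privileges first, like suexec would.
	POSIX::setgid($cfg->{gid}) or die "setgid failed: $!";
	POSIX::setuid($cfg->{uid}) or die "setuid failed: $!";

	# Load and run this user's PSGI app in the current process, so the
	# modules preloaded above stay shared with the parent (copy-on-write).
	require Plack::Util;
	require Plack::Runner;
	my $app    = Plack::Util::load_psgi($cfg->{psgi});
	my $runner = Plack::Runner->new;
	$runner->parse_options('--port', $cfg->{port});
	$runner->run($app);
	exit 0;
}

# The parent simply waits for its children in this sketch.
1 while wait() != -1;

Whether this actually shares as much memory as I hope, and how to properly reap and restart the children, is exactly the part I still have to figure out.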

I came across Plack::App::Apache::ActionWrapper, which partly solves this problem for a single user with multiple PSGI applications. So far I haven’t been able to make it work for a single user, let alone multiple. But I had hoped it would be possible to use a single wrapper like this, preload some modules, and use that one wrapper for all users.

I suppose I have to ponder a bit more about it, although I wonder if it’s even possible. Yes, mod_perl can preload modules, but that means executing Perl as the user running the webserver process. I prefer PSGI or FastCGI.

How a programming language influences your mood

For over 3.5 years now I’ve been programming professionally in PHP. First in PHP4, and about half a year later we finally converted to PHP5. At first I was excited, because I could use a lot of new features PHP5 offered, such as proper OOP, but also Zend Framework.

I started using Zend Framework at version 1.5 (currently it’s at 1.10) and have very mixed feelings about it. Yes, it does have a lot of useful libraries such as MVC, basic ACL, Authorization, Input Validation & Filters, Views (with PHP as the templating language, which it was essentially designed for) and more. But ZF had a steep learning curve for me, even though it has so far been used successfully in about 5 big web applications.

However, PHP’s inconsistent function naming, unpredictable order of expected arguments (needle and haystack, anyone?), lack of closures/anonymous functions, namespacing, lexical scope and what not makes using PHP a true nightmare. Yes, PHP 5.3 finally supports closures and namespacing, but have you looked at the syntax for both? What a joke. On top of that, PEAR is an even bigger joke. Is anyone using it at all?

Running into these issues day by day, knowing they can easily be solved in Perl, is very, very depressing. It has totally taken away my desire to program and made my motivation disappear. I’ve truly been wondering whether a programming job is really something I want.

The turnaround came when I had a talk with my boss about why we’d still use PHP for future projects. Yes, code reuse is one of the reasons, but compared to Perl, all the additional reusable code I’ve written in PHP already exists on CPAN. To my understanding the use of backslashes to define a namespace in PHP wasn’t really a design choice, but was forced upon them because it was too hard to use the double colon or a dot, like many other languages do. On top of that, last time I checked, PHP6 development has been halted because they just can’t get Unicode to work.

Perl already supports Unicode, Perl doesn’t have a weird separation character for namespaces, Perl supports closures/anonymous subroutines, Perl supports lexical scoping, the core list of functions is small, easy to remember and the order of expected parameters is consistent. And it has CPAN.
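
To pick out one of those points, a closure over a lexical variable in Perl is as simple as this:

use strict;
use warnings;

# Each call to make_counter() captures its own lexical $count.
sub make_counter {
	my $count = 0;
	return sub { return ++$count };
}

my $counter = make_counter();
print $counter->(), "\n";    # 1
print $counter->(), "\n";    # 2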

I was able to convince my boss to start using Perl for future projects based on these points for Perl and against PHP. What helped is that he already knew of Perl and we’ve developed some applications in Perl before: a Wx application, a newsletter mailer and some other small scripts. The difference now is that I can use it for web applications as well.

Now that I’ve gotten the green light to go with Perl for future projects I’m much happier at work again. I’m more motivated to finish the current PHP projects so I can finally start doing some proper Perl work. Programming Perl makes me happy. Moving from PHP to Perl doesn’t mean we’ll be dropping PHP entirely; that would be bad and ignorant. Existing projects will still be supported, improved and extended with new features.

No matter how much you dislike one of the products you support, it’s part of the job. Having stuff you dislike is good actually, because it makes the fun stuff even more fun. For me, PHP makes Perl more fun :-).

Distributed jobs with Gearman

The first time I heard of Gearman was on Stack Overflow, where someone asked how to stop workers nicely and Cletus made an excellent joke: “See now I was going to reply “Bitte halten Sie!” :-)”. Gearman has been stuck in my mind ever since. So far I haven’t had the chance to use it for anything, but for my current project Maximus I need to be able to distribute jobs for several tasks, such as uploading files, fetching from SCM repositories and so on. Gearman is perfect for that kind of job.

The Gearman server was originally implemented in Perl but has now moved to C. I’m not sure if they’re still working on the Perl implementation, but the most recent release was in January this year.

In this post I’ll demonstrate how to set up a simple client and worker. The client sends tasks to the Gearman server and the worker registers itself with the server. When the server receives a task it checks if there’s a worker available and delegates the task to it. Once the worker has finished the job it notifies the server, and in turn the server notifies the client that the requested job is done. It’s also possible not to wait for a task to finish; whether you want that depends on your specific situation and the job that needs to be executed.

Setup

First, install the required modules. Note that on Windows Danga::Socket isn’t stable, so Gearman might occasionally fail a job; so far I haven’t had any issues on Linux. We’ll also install some additional modules that we’re going to use for the client and worker.

$ cpanm Gearman::Server
$ cpanm http://search.cpan.org/CPAN/authors/id/D/DO/DORMANDO/Gearman-1.11.tar.gz
$ cpanm WWW::Mechanize
$ cpanm Archive::Zip
$ cpanm JSON::Any

MyApp::Functions

Now let’s make a module that contains the functionality that’ll be used by the worker. We’re making this modular so it’s easier to test these components; both the client and the worker scripts should be just that, scripts. The functions in this module can fetch the thumbnail links from an Altavista image search page. The number of pages to scan is limited to 20, but this can be adjusted. It also has a function to archive the download directory.

package MyApp::Functions;
use strict;
use warnings;
use Archive::Zip qw(:ERROR_CODES :CONSTANTS);
use Exporter 'import';
use File::Basename;
use File::Spec;
use LWP::Simple;
use WWW::Mechanize;

our @EXPORT_OK = qw(fetch_image_links download_images archive_downloads);

=head2 fetch_image_links

Fetch all image (thumbnail) links from the search results page

=cut

sub fetch_image_links {
	my($keyword, $limit) = @_;
	$keyword ||= 'perl';
	$limit ||= 20;

	my $mech = WWW::Mechanize->new;
	$mech->get('http://www.altavista.com/image/results?q=' . $keyword);

	my $count = 0;
	my @images;
	do {
		$count++;
		foreach my $image( $mech->images ) {
			next if $image->tag() ne 'img';
			next unless index($image->url(), 'nimage') > 0;
			push @images, $image->url();
		}
	}
	while( $count < $limit && $mech->follow_link( text_regex => qr/>>/ ) );

	return @images;
}

=head2 download_images

Download every supplied image from the list

The thumbnails from Altavista are in JPEG format. If you choose to use and/or
modify this code for your own needs, be sure to save the file in the correct
format.

=cut

sub download_images {
	my @images = @_;

	mkdir('download') unless -d 'download';

	foreach(@images) {
		my $filepath = File::Spec->catfile('download', basename($_) . '.jpg');
		next if -f $filepath;

		my $content = get($_);
		next unless defined $content;    # skip images that failed to download
		open my $fh, '>', $filepath or die($!);
		binmode $fh;
		print $fh $content;
		close $fh;
	}
}

=head2 archive_downloads

Archive the download directory

=cut

sub archive_downloads {
	my $zip = Archive::Zip->new();
	my $name = 'backup-' . time();
	$zip->addTree('download' , $name);

	my $status = $zip->writeToFileNamed( $name . '.zip' );
	die "Archiving failed!" if $status != AZ_OK;
}

1;

worker.pl

Now that we’ve got our functionality in place, it’s time to set up the worker. This worker provides 2 functions: fetch_thumbnails will do a search, collect the thumbnail links and download them to the download directory; archive_downloads will create a Zip archive with the contents of the download directory.

#!/usr/bin/perl
use strict;
use warnings;
use lib './lib';
use Gearman::Worker;
use MyApp::Functions qw(fetch_image_links download_images archive_downloads);
use JSON::Any;

my $worker = Gearman::Worker->new;
$worker->job_servers('127.0.0.1');

# Using JSON to deserialize arguments
my $json = JSON::Any->new;

# fetch_thumbnails: Search and fetch thumbnails
$worker->register_function('fetch_thumbnails', sub {
	my @images = fetch_image_links( @{$json->decode($_[0]->arg)} );
	download_images( @images );
});

# archive_downloads: Archive download directory
$worker->register_function('archive_downloads', \&archive_downloads);

$worker->work while 1;

client.pl

A worker needs something to do, so let’s create a client that can dispatch tasks to it. Our client creates a task set containing a fetch_thumbnails task and sends it to the server, which distributes it to the workers. When the workers are done the client gets a signal to continue. After this we execute an archive_downloads task with do_task, which makes the client wait for the task to finish. Finally, another archive_downloads task is dispatched to the background. The client finishes immediately after this, even if that task isn’t done yet; it will still be executed by a worker when one is available.

#!/usr/bin/perl
use strict;
use warnings;
use lib './lib';
use Gearman::Client;
use JSON::Any;

my $client = Gearman::Client->new;
$client->job_servers('127.0.0.1');

# Using JSON to serialize arguments
my $json = JSON::Any->new;

# Create a taskset that searches for thumbnails and downloads them.
# The archiving of the download directory happens separately below.
my $taskset = $client->new_task_set;

$taskset->add_task('fetch_thumbnails', $json->encode(['perl', 5]), {
	on_complete => sub {
		print "Downloaded all thumbnails\n";
	},
});

# Run the taskset and wait for it to complete
$taskset->wait;

# Create an archive and wait for it
$client->do_task('archive_downloads');

# And create a third archive, but don't wait for this one
$client->dispatch_background('archive_downloads');

Try it!

With all code in place, open 3 terminals: 1 for gearmand, 1 for the worker and 1 for the client. Start them in the given order.

$ gearmand --debug=1
$ perl worker.pl
$ perl client.pl

The terminal running client.pl should display the following:

Downloaded all thumbnails

Check the download directory to see all the thumbnails. The root directory of the scripts should contain 2 zip archives. If there’s only one, both archive jobs executed and finished within the same second; congratulations, you have a very fast machine! The name of the archive contains a timestamp, so change the code to use something more unique.
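
One possible tweak, for example, is to add sub-second precision and the process ID to the archive name:

use Time::HiRes ();

# A name that stays unique even when two jobs run within the same second.
my $name = sprintf 'backup-%.6f-%d', Time::HiRes::time(), $$;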

Please do note…

Do note that the example code won’t really work with multiple clients and workers, because the thumbnails are all stored in the same directory and the archive_downloads task simply creates a snapshot of the download directory. I’ll leave solving that up to you as an exercise; covering it here would make this post too long. This is merely an introduction to Gearman.

As a final note, I do realize we have Storable for serialization, but somehow it got borked on my Strawberry Perl installation.

Other

There’s also a Plack/PSGI script to retrieve Gearman statistics; it’s available on GitHub. I haven’t tried it, so I don’t know if it’s even compatible with the Perl server implementation, but I thought it was worth mentioning.

License

All code supplied here is licensed under the MIT license.

Entering the Iron Man Blogging Challenge

I’ve decided I should enter the Iron Man Blogging Challenge. Basically, what you have to do is write 4 Perl-related posts within 32 days, with a maximum interval of 10 days between posts. What’s in it for me? If I keep up the frequency I get a cool badge to put on my blog. But mostly it’s a way for me to focus on Perl and expand my Perl knowledge. The ultimate goal of the Iron Man Blogging Challenge is to promote Perl, which is always a good thing.

The plan currently is to blog about all that’s hot in the Perl web development world: Plack/PSGI, Dancer, Mojolicious and what not. Aside from that, I really need to learn about all the other cool Perl features I’ve had to miss whilst programming PHP full time for over 3.5 years. Programming in PHP for that long is no fun at all, and I’m glad I was able to convince my boss to use Perl for future projects, making me a happy programmer again.

Besides my professional work I also do some programming in my free time. I’m a fan of BlitzMax, a BASIC flavour with OOP support, for which I’m working on a Catalyst-based website (Maximus) that is supposed to be a lightweight CPAN, but for BlitzMax. I’ve written various BlitzMax modules as well. I’m also a CPAN author (CKRAS), maintaining a couple of modules, of which my first, Mollie::Micropayment, shows a horrible PHP influence. It works, but it’s not pretty.

So, that should do it for my first post. I’ll try to write a blog post for the challenge every week from now on.