A little AppConfig tip

June 20, 2014

I have a little tip for AppConfig, one apparently not everyone knows, as I discovered in a household discussion.

The wonderfully useful module AppConfig reads ini files

[foo]
bar = 1
baz = 2

[herp]
derp = 4

and makes them accessible in a configuration object like so:

$cfg->foo_bar; # returns 1
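For reference, here is a minimal sketch of the setup that produces such an object (the CREATE and GLOBAL options let AppConfig define variables on the fly as it reads the file; config.ini is a stand-in filename):

```perl
use strict;
use warnings;
use AppConfig;

# CREATE lets AppConfig define variables as it encounters them;
# ARGCOUNT => 1 makes each variable take a single value.
my $cfg = AppConfig->new( { CREATE => 1, GLOBAL => { ARGCOUNT => 1 } } );

$cfg->file('config.ini');    # stand-in filename

print $cfg->foo_bar;         # "bar" from the [foo] section
```

Variables in a [foo] section become foo_<name> on the object, which is why the accessor is $cfg->foo_bar.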

Now, if you want parameters not in any section, put those at the top of your config file. For $cfg->hork:

hork = 1

[foo]
bar = 1

If you put it lower down you’ll end up with $cfg->foo_hork or $cfg->herp_hork.

That is all. Carry on.


June 18, 2014

My first CPAN module, Acme::Buffalo::Buffalo. There’s always room on CPAN for one more silly module.

Years ago I wanted to write Acme::Log4perl::Terror which would peg your application’s log level to the color-coded terror alert issued by the US Department of Homeland Security. Then they stopped issuing them. Oh well. Life is short, write those Acme modules now.

Pale Moon rising

June 15, 2014

At zork*, I am still automating Firefox to perform front-end performance analyses, though only barely, thanks to instability in the latest versions of Firefox. With 28 and 29 I experience a lot of frustrating Firefox crashes. Our front-end developer also reports crashes with Firefox 28 and its recommended version of Firebug, 1.12. He reverted to 24 in order to get some work done.

On my Facebook feed, I read that a friend frustrated to madness by 29 had switched to a browser called Pale Moon.  It bills itself as “an Open Source, Firefox-based web browser” and is currently based on 24, the same version that relieved my colleague’s frustration. So I gave it a try.

Pale Moon supports all my extensions: MozRepl, Firebug, PageSpeed, DOM Inspector, YSlow and NetExport. I can generate profiles programmatically with a few small modifications to the script (the profiles.ini is located in a different directory, for example, and palemoon -v doesn’t print ‘Mozilla Firefox’, obviously). MozRepl and its Perl interfaces work like a charm.

I haven’t measured it, but startup time is noticeably faster, and it’s much more stable. Since it’s being used in an automated fashion, I am less concerned about UI and more about stability and speed, so these observations make me happy.

If you work in a shop that frowns on alternative browsers, or you wish to abstain from Firefox’s rapid upgrade cycle, you might also consider Firefox ESR.


*“zork” is “work” typed on an AZERTY keyboard. I switch keyboards a lot, and the ensuing confusion erodes my grip on sanity.

Newest French Perl Mongers board member introduces herself

June 15, 2014

At the General Assembly of the French Perl Mongers (les Mongueurs de Perl), the board put out a call for new candidates. Sébastien Aperghis-Tramoni urged me to throw my hat in the ring. I asked the board whether I would be doing something stupid if I did so. No, answered Laurent Boivin, but you can perhaps prevent someone else from doing something stupid. Oh what the hell, I figured, and put my hand up. I was duly voted in before I could reconsider.

As it is unseemly (peu convenable) for a newly-minted board member to have a dead blog, and because Wendy G.A. van Dijk urged us all heartily to Publish! Publish! Publish!,  I am reviving PerlGerl.  I’ve been doing a few things lately too, and should really post about them.



Controlling Firefox from Perl with MozRepl

June 24, 2012

On Friday I wrote my first program using AnyEvent and Coro, and it was so nifty that I decided to revive my moribund blog to describe it. This little program solved a whole raft of problems.

The problem involved determining certain properties of each of a list of URLs and updating a database. The rub is that some properties, such as redirects, are most reliably observed in the browser, while others, such as whether the URL is hosted at $WORK, are best determined using $WORK’s Perl modules. Now how can we put these two together to get all the info we need?

The approach I hit on involves using Perl to drive an instance of Firefox configured with some helpful extensions. MozRepl is a cool extension that lets you telnet into Firefox and program it from the inside, with access to the entire browser and the Mozilla API. NetExport is an extension to the extension Firebug which generates HAR (HTTP archive) files capturing all the data necessary for the analysis of front-end performance. HAR files serve as input to the command-line versions of PageSpeed and YSlow.

NetExport exports its results either to file or via HTTP POST. So in my program I start an embedded HTTP server, fire up Firefox, telnet into it, load a URL, and let the httpd capture and process the JSON posted by NetExport. Think of the setup as making Firefox puke into a bucket set out for that purpose. A vivid image, but you get the idea.
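The WORK::* modules are internal, but the bucket can be sketched with the public AnyEvent::HTTPD module (the /netexport path and port 9090 match the values used in the script; the handler body is my assumption about what such a handler would do):

```perl
use strict;
use warnings;
use AnyEvent;
use AnyEvent::HTTPD;
use JSON;

my $httpd = AnyEvent::HTTPD->new( port => 9090 );

# NetExport POSTs one HAR (JSON) document here after each page load.
$httpd->reg_cb(
    '/netexport' => sub {
        my ( $httpd, $req ) = @_;
        my $har = JSON->new->utf8->decode( $req->content );
        # ... analyze the HAR entries, update the database ...
        $req->respond( [ 200, 'OK', { 'Content-Type' => 'text/plain' }, "thanks\n" ] );
    },
);
```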

How do AnyEvent and Coro enter the picture? I use AnyEvent condition variables to coordinate the work between page loading with MozRepl and output handling with the web server, and a Coro thread to tell the web server to kindly stand over there while I continue with my main line of work. The hardest part is reorienting one’s brain towards the asynchronous way of thinking, not so easy after a lifetime of vanilla scripting.

So let’s see some code. The main script is delightfully short and sweet; the annotated version follows. The name of my employer has been changed to protect the innocent. And of course I use strict and warnings.

use AnyEvent;
use Coro;
use WORK::Config;
use WORK::AnyEvent::HTTPD; 
use WORK::AnyEvent::HTTPD::Handler::NetExport;  
use WORK::AnyEvent::MozRepl;
use Log::Log4perl qw(:easy);

my $cfg = WORK::Config::get_config();

# AnyEvent condition variable 

my $cv  = \$WORK::AnyEvent::HTTPD::Handler::NetExport::cv;

my @urls = qw(http://www.google.com http://www.yahoo.com);

# Tell NetExport where to post its results.
my $beacon = "http://localhost:9090/netexport";

# Handlers to process POSTed results.

my $h = WORK::AnyEvent::HTTPD::Handler::NetExport->new;

# Fire up a server and ask it to step out of the way.
async {
    ...   # start the embedded httpd with handler $h (blank to fill in)
};

# Fire up FF with good ol' system().
...

# Connect to MozRepl with AnyEvent::Socket.
...

# Set the POST URL and turn on auto export.
...

while (@urls) {

    $$cv = AnyEvent->condvar;

    my $url = shift @urls;

    load_page( $url );   # Using MozRepl.

    $$cv->recv;          # The POST handler will send().
}

kill_firefox();   # From within MozRepl.

Neat, huh? There are lots of blanks to fill in, but as this post is already getting a bit long, I will do that in posts to follow.

Getting Moose and Rose::DB::Object to play together

April 9, 2011

I’m fairly new to both Moose and Rose::DB::Object, and I have been poking around trying to find a simple way to marry the two. Delegation and roles do the trick here.

I created a parameterized role, so that I could tell the role what Rose::DB::Object-derived class to delegate to.

package My::DB::Role;
use MooseX::Role::Parameterized;

parameter 'table' => (
    is  => 'ro',
    isa => 'Str',
);

role {
    my $p = shift;

    my $class =
        $p->table =~ /^My::DB/
            ? $p->table
            : join '::', 'My::DB', $p->table;

    eval "require $class";

    has 'db_obj' => (
        is      => 'rw',
        isa     => $class,
        handles => [ $class->meta->column_names, qw(save load delete) ],
        default => sub { $class->new },   # assumed: delegate to a fresh row object
    );
};

no Moose;
1;

In my consuming class, I specify the role and the shortened name of the Rose::DB::Object-derived class:

package My::Product;
use Moose;

with 'My::DB::Role' => { table => 'Product' };

With this little setup, I have working code:

use Test::More;

use_ok('My::Product');

my $product = My::Product->new;

note $product->created_at;

done_testing;


This outputs:

ok 1 - use My::Product;
# 2011-04-09T09:03:20

This is not thoroughly tested; I just did this tonight. But this looks like the way to go.

Further reading:

  • Kate Yoak posts a comment about her solution at Rohan Almeida’s blog (now listed as an attack site by Google, which I find odd). Link to very fierce attack site; you have been warned.
  • Inheritance from Rose::DB::Object has been possible since Moose 1.15, which introduced the -meta_name option to use Moose. Renaming Moose’s meta method neatly sidesteps the Rose::DB::Object method of the same name.
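That alternative could look like the following sketch (My::DB::Product standing in for any Rose::DB::Object-derived class):

```perl
package My::Product;

# Rename Moose's meta() accessor so it no longer shadows
# the meta() method inherited from Rose::DB::Object.
use Moose -meta_name => 'moose_meta';

extends 'My::DB::Product';    # a Rose::DB::Object subclass

no Moose;
1;
```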

UTF8 follies

April 8, 2011

Character encoding issues are shortening my life expectancy. It’s been a few years since I dealt with them regularly, so they bite me from time to time.

Recently I puzzled over a set of strings that were showing up double-encoded in my Postgres database. These strings were encoded from incoming Latin 1 to UTF8, travelled around the program with their utf8 flags set, went through JSON->decode and encode multiple times without apparent harm, but once in the database, they had the telltale double-encoding gibberish characters. E.g.:

use Encode;
# note: no "use utf8" here, so the literal below is a string of raw UTF-8 bytes
my $utf8_string = "Télévision extrême à domicile";
print encode('utf8', $utf8_string);
# becomes TÃ©lÃ©vision extrÃªme Ã  domicile

I spent an absurdly long time capturing strings and dumping during execution:

say qx{echo $str | od -c }

od being a handy Unix utility I became very familiar with in my days of converting bibliographic data.

Turns out I was focusing on the wrong leads. These strings became values in a hash which was run through JSON->encode before saving to db. What prompted the re-encoding was not the values of the hash, but the keys. These were string literals in my program which were not marked as UTF8. Perl looked at my keys, looked at my values, saw a mixed bag, and decided to run the whole thing through the shredder again, just to be on the safe side.

The solution was simple:

use utf8;

Because my keys were source-code string literals, I want my source code to be treated as UTF-8, and use utf8 ensures exactly that.
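A small illustration of the effect (the key and value strings are my own examples): with use utf8 in force, non-ASCII string literals, including the ones used as hash keys, become decoded character strings, so keys and values agree and nothing gets shredded a second time.

```perl
use strict;
use warnings;
use utf8;    # literals in this file are now decoded character strings
binmode STDOUT, ':encoding(UTF-8)';

my %h = ( télé => 'Télévision extrême à domicile' );

for my $key ( keys %h ) {
    # Both the key and the value carry Perl's internal UTF-8 flag,
    # so encoding happens exactly once, at the output boundary.
    printf "key flagged: %d, value flagged: %d\n",
        utf8::is_utf8($key)       ? 1 : 0,
        utf8::is_utf8( $h{$key} ) ? 1 : 0;
}
```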

