On Diversity in Tech Communities

One of the advantages of working from home is that I can do more interesting things with my lunch break than simply eat lunch. So, every week for three years I took my daughters to a local sing-a-long group. Now, as you can imagine, this group was primarily made up of mothers and their children. The people who run the sessions are women. On the odd occasion another father would turn up, but it was mostly just me and thirty women and their children. No-one ever implied that I shouldn't be there. No-one ever made jokes about men being useless. People didn't try to have "Cosmo" type conversations with me that would make me blush. No-one even made any comment implying it was un-manly for me to sing along with nursery rhymes like all the other parents did. All in all, you could say it was great. I'd never accuse anyone at any of these events of being sexist. But then again, every so often the group would sing a song about Bobby Shaftoe. For those of you not familiar, the lyrics go:

Bobby Shaftoe went to sea, Silver buckles on his knee. He'll come back and marry me, Pretty Bobby Shaftoe.

I never - in three years - spoke up about how uncomfortable these lyrics are for a straight man to sing. In the end I just stopped singing them. Did this really bother me that much? Not really, otherwise I would have said something. But it let me experience, in the tiniest possible way, what it's like to be suddenly reminded that you're different to everyone else in the group and to find out that you can't join in with what everyone else is doing because it's not designed for you.

So, with this in mind, I wish that people would understand that when I'm suggesting a code of conduct for a tech community my primary objective is not to suggest a list of things you can and can't do. Nor am I suggesting that people are deliberately being nasty. I'm just trying to encourage everyone to think a little wider about the other people in their community that aren't just like them - because even the best of us can sometimes have a blind spot. You know, it's not always about the big things. Sometimes I just don't want community members to have to sing songs about their desire to marry a sailorman. And if they do find themselves in a situation where someone is asking them to declare their desire for silver-buckled-knee wearers, I want them to feel like they can politely point out that they shouldn't have to. That is all.

Share " On Diversity in Tech Communities"

Share on: FacebookTwitter

Oh Function Call, Oh Function Call, Wherefore Art Thou Function Call

Ever wondered about all the places a function is called in Perl? It turns out that during my jetlag-ridden first day back at work, I wrote a module that can tell you just that, called Devel::CompiledCalls.

Use of the module is easy. You just load it from the command line:

shell$ perl -c -MDevel::CompiledCalls=Data::Dumper::Dumper myscript.pl

And BAM, it prints out just what you wanted to know:

Data::Dumper::Dumper call at line 100 of MyModule.pm
Data::Dumper::Dumper call at line 4 of myscript.pl
myscript.pl syntax OK

While traditionally it’s been easy to write code using modules like Hook::LexWrap that prints out whenever a function is executed - and, at that point, where the function was called from - what we really want is to print out at the time the call to the function is compiled by the Perl compiler. This is important because you might have a call to the function in some code that is only executed very infrequently (e.g. leap year handling), which would not be easily identified by hooking function execution at run time.
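The runtime approach can be sketched with nothing but core Perl: wrap a named function in a replacement that reports its call site via caller(). (report_calls and greet here are illustrative names of my own; this is not Devel::CompiledCalls' or Hook::LexWrap's interface.)

```perl
use strict;
use warnings;

my @calls;

# Wrap a named function so each call records its call site.
# This is the runtime approach the post contrasts with compile-time hooking.
sub report_calls {
    my ($name) = @_;
    no strict 'refs';
    no warnings 'redefine';
    my $original = \&{$name};
    *{$name} = sub {
        my (undef, $file, $line) = caller;
        push @calls, "$name call at line $line of $file";
        return $original->(@_);
    };
}

sub greet { return "hello, $_[0]" }

report_calls('main::greet');
my $greeting = greet('world');   # the call site is recorded here
```

The limitation is exactly the one described above: nothing is recorded for call sites that never execute.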

In the past we developers have relied too much on tools like search and replace in our editors to locate function calls. Given that Perl is hard to parse, and given that many of the calls might be squirreled away in installed modules that your editor doesn’t typically have open, this isn’t the best approach.

What Devel::CompiledCalls does is hook the compilation of the code (technically, we hook the CHECK phase of the code, but the effect is the same) with Zefram’s B::CallChecker. This allows the callback to fire as soon as the code is read in by Perl.

All in all, I’m pretty happy with the module and it’s a new tool in my bag of tricks for getting to grips with big, unwieldy codebases.

Share "Oh Function Call, Oh Function Call, Wherefore Art Thou Function Call"

Share on: FacebookTwitter

Coming To America

Big changes are afoot. Not many people know this yet, but Erena, myself and the girls are in the process of emigrating. Yesterday I completed the purchase of our new house in New Lebanon, New York, in the United States of America. Yes, this means that I'm going to be attending London.pm meetings even less than I did when I moved to Chippenham.

One of the consequences of my move is a change of jobs. At the end of April I'll be leaving the wonderful Photobox and starting to do some work for the equally wonderful OmniTI. Photobox is a great place to work, with a world class Perl team. In the four years I've worked for them I haven't complimented them enough. What other place can I work next to regular conference speakers? Core committers? People with more CPAN modules than I can shake a stick at? Perl Pumpkins themselves! To be blunt, if I were to stay in the UK I can't imagine I'd want to work anywhere else. It's been an interesting four years, seeing the company grow and grow and dealing with the scaling problems - both in terms of the number of customers and the challenges of growing the team - and I've really enjoyed it. But a new adventure beckons, and I needn't sing the praises of OmniTI to tell everyone here what a great company they are, and I'm going to be honoured to work with them.

So, once I've completed the gauntlet of my Green Card application (which is scheduled to take many months yet) it'll be off with a hop, skip and a jump over the pond for a new life. Can't wait.

Share "Coming To America"

Share on: FacebookTwitter

Frac'ing your HTML

In my previous blog entry I talked about encoding weird characters into HTML entities. In this entry I’m going to talk about converting some patterns of ASCII - dumb ways of writing fractions - and turning them into HTML entities or Unicode characters.

Hey Good Looking, What’s Cooking?

Imagine a simple recipe:

<ul>
   <li>1/2 cup of sugar</li>
   <li>1/2 cup of spice</li>
   <li>1/4 cup of all things nice</li>
</ul>

While this is nice, we can do better. There are nice Unicode characters for ¼ and ½, and corresponding HTML entities that we can use to have the browser render them for us. What we need is some way to change all our mucky ASCII into these entities. Faced with this problem on his recipes site, European Perl Hacker Léon Brocard wrote a module called HTML::Fraction that can tweak strings of HTML.

use HTML::Fraction;
my $frac = HTML::Fraction->new();
my $output = $frac->tweak($string_of_html);

This module creates output like:

<ul>
   <li>&frac12; cup of sugar</li>
   <li>&frac12; cup of spice</li>
   <li>&frac14; cup of all things nice</li>
</ul>

Which renders nicely as:

  • ½ cup of sugar
  • ½ cup of spice
  • ¼ cup of all things nice

HTML::Fraction can even cope with decimal representation in your string. For example:

  • 0.5 slugs
  • 0.67 snails
  • 0.14 puppy dogs tails

Processed with HTML::Fraction renders like so:

  • ½ slugs
  • ⅔ snails
  • ⅐ puppy dogs tails
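The decimal handling can be illustrated with a small core-Perl sketch - my own illustration, not HTML::Fraction's actual implementation - mapping the decimal strings above onto their Unicode fraction characters:

```perl
use v5.10;
use strict;
use warnings;
use utf8;

# Map the decimal strings from the example onto Unicode fraction characters.
my %decimal2char = (
    '0.5'  => "\x{00BD}",  # ½
    '0.67' => "\x{2154}",  # ⅔
    '0.14' => "\x{2150}",  # ⅐
);

my $text = "0.5 slugs and 0.67 snails";

# Replace each known decimal with its fraction character,
# leaving unrecognised decimals untouched.
$text =~ s{(\d+\.\d+)}{ $decimal2char{$1} // $1 }ge;
```

The real module handles many more fractions than this, but the substitution idea is the same.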

Unicode Characters Instead

Of course, we don’t always want to render out HTML. Sometimes we just want a plain old string back. Faced with this issue myself, I wrote a quick subclass called String::Fraction:

use String::Fraction;
my $frac = String::Fraction->new();
my $output = $frac->tweak($string);

The entire source code of this module is short enough that I can show you it here.

package String::Fraction;
use base qw(HTML::Fraction);

use strict;
use warnings;

our $VERSION = "0.30";

# Our superclass sometimes uses named entities
my %name2char = (
  '1/4'  => "\x{00BC}",
  '1/2'  => "\x{00BD}",
  '3/4'  => "\x{00BE}",
);

sub _name2char {
  my $self = shift;
  my $str = shift;

  # see if we can work out the Unicode character
  # from the entity returned by our superclass
  my $entity = $self->SUPER::_name2char($str);
  if (defined $entity && $entity =~ /\A &\#(\d+); \z/x) {
    return chr($1);
  }

  # superclass doesn't return a decimal entity?
  # use our own lookup table
  return $name2char{ $str };
}

We simply override one method, _name2char, so that instead of returning an HTML entity we return the corresponding Unicode character.

Share "Frac'ing your HTML"

Share on: FacebookTwitter

Once is Enough

In this blog post I discuss how HTML entities work, how to encode them with Perl, and how to detect when you’ve accidentally double encoded your entities with my module Test::DoubleEncodedEntities.

How HTML Entities work

In HTML you can represent any character in simple ASCII by using entities. These come in two forms: either using the decimal codepoint of the character or, for some frequently used characters, more readable named entities:

Character | Unicode codepoint | Decimal entity | Named entity
é         | 233               | &#233;         | &eacute;
©         | 169               | &#169;         | &copy;
☃         | 9731              | &#9731;        | (none)
<         | 60                | &#60;          | &lt;
&         | 38                | &#38;          | &amp;

So instead of writing

<!DOCTYPE html>
<html><body>© 2012 Mark Fowler</body></html>

You can write

<!DOCTYPE html>
<html><body>&copy; 2012 Mark Fowler</body></html>

By delivering a document in ASCII and using entities for any codepoints above 127 you can ensure that even the most broken of browsers will render the right characters.
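That downgrade-to-ASCII trick can be sketched in a line of core Perl: replace every codepoint above 127 with its decimal entity (a hand-rolled illustration - HTML::Entities, introduced below, does this properly):

```perl
use strict;
use warnings;

# Downgrade a Unicode string to pure ASCII by turning every
# codepoint above 127 into its decimal entity.
my $text = "\x{a9} 2012 caf\x{e9}";
(my $ascii = $text) =~ s/([^\x00-\x7f])/'&#' . ord($1) . ';'/ge;
```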

Importantly, when an entity is converted back into a character by the browser the character no longer has any of its special meaning, so you can use encoding to escape sequences that would otherwise be considered markup. For example:

<!DOCTYPE html>
<html><body>say "yep"
  if $ready &amp;&amp; $bad &lt; $good;
</body></html>

Correctly renders as

say "yep" if $ready && $bad < $good;

Encoding Entities with Perl

The go-to module for encoding and decoding entities is HTML::Entities. Its use is simple: You pass the string you want to encode into the encode_entities function and it returns the same string with the entities encoded:

use v5.10;
use HTML::Entities qw(encode_entities);

my $string = "\x{a9} Mark Fowler 2012";
my $encoded = encode_entities($string);
say "<!DOCTYPE html>";
say "<html><body>$encoded</body></html>";

If you no longer need the non-encoded string you can have HTML::Entities modify the string you pass to it by not assigning the output to anything (HTML::Entities is smart enough to notice it’s being called in void context where its return value is not being used.)

use v5.10;
use HTML::Entities qw(encode_entities);

my $string = "\x{a9} Mark Fowler 2012";
encode_entities($string);
say "<!DOCTYPE html>";
say "<html><body>$string</body></html>";

The Double Encoding Problem

The trouble with encoding HTML entities is that if you do it a second time then you end up with nonsensical looking text. For example

use v5.10;
use HTML::Entities qw(encode_entities);

my $string = "\x{a9} Mark Fowler 2012";
encode_entities($string);
encode_entities($string);
say "<!DOCTYPE html>";
say "<html><body>$string</body></html>";

Outputs

<!DOCTYPE html>
<html><body>&amp;copy; Mark Fowler 2012</body></html>

Which when rendered by the browser displays

&copy; Mark Fowler 2012

The &amp; has been turned into & but isn’t then combined with the copy; to turn it into the copyright symbol ©.

Each subsequent encoding turns the & at the start of the entity into &amp;, including those at the start of any previously created &amp;. Do this ten or so times and you end up with:

&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;copy; Mark Fowler 2012
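The runaway growth is easy to reproduce with a core-Perl sketch (no HTML::Entities required): each naive pass re-escapes the ampersand that starts the previously created entity.

```perl
use strict;
use warnings;

# Start with a correctly encoded string, then "encode" it again.
my $text = '&copy; Mark Fowler 2012';
for my $pass (1 .. 2) {
    # each pass escapes the leading ampersand of the entity again
    $text =~ s/&/&amp;/g;
}
```

After two passes the entity is already unrecognisable, and every further pass adds another amp;.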

The obvious solution is to make sure you encode the entities only once! But that’s not as easy as it might seem. If you’re building your output up from multiple processes it’s quite easy to mistakenly encode twice; worse, if you’re using data that you don’t control (for example, extracted from a web page, downloaded from a feed, or imported from a user) you might find that some or all of it has unexpectedly already been encoded.

Testing for the Problem

I recently re-released my module Test::DoubleEncodedEntities that can be used to write automated tests for double encoding.

use Test::More tests => 1;
use Test::DoubleEncodedEntities;
ok_dee($string, "check for double encoded entities");

It works heuristically by looking for strings that could possibly be double encoded entities. Obviously there are lots of HTML documents out there where it’s perfectly legitimate to have double encoded entities: any of them talking about entity encoding, such as this blog post itself, will naturally do so. However, the vast majority - where you control the input - will not contain this format of string, and we can test for them.

For example:

use Test::More tests => 6;
use Test::DoubleEncodedEntities;

ok_dee("&copy; Mark Fowler 2012",     "should pass");
ok_dee("&amp;copy; Mark Fowler 2012", "should fail");
ok_dee("&copy; Mark Fowler 2012", "should fail");
ok_dee("© Mark Fowler 2012",     "should pass");
ok_dee("&amp;#169; Mark Fowler 2012", "should fail");
ok_dee("&#169; Mark Fowler 2012", "should fail");

Produces the output:

1..6
ok 1 - should pass
not ok 2 - should fail
#   Failed test 'should fail'
#   at test.pl line 5.
# Found 1 "&amp;copy;"
not ok 3 - should fail
#   Failed test 'should fail'
#   at test.pl line 6.
# Found 1 "&copy;"
ok 4 - should pass
not ok 5 - should fail
#   Failed test 'should fail'
#   at test.pl line 8.
# Found 1 "&amp;#169;"
not ok 6 - should fail
#   Failed test 'should fail'
#   at test.pl line 9.
# Found 1 "&#169;"
# Looks like you failed 4 tests of 6.

Correctly detecting the double encoded entities in the “should fail” tests.
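The heuristic itself can be sketched with a single regular expression - my illustration, not necessarily Test::DoubleEncodedEntities' actual implementation - looking for &amp; immediately followed by something entity-shaped:

```perl
use strict;
use warnings;

# A candidate double-encoded entity is "&amp;" followed by either a
# named entity body or a numeric entity body, ending in a semicolon.
my $html  = '&amp;copy; Mark Fowler 2012';
my @found = $html =~ /(&amp;(?:[a-zA-Z]+|#\d+);)/g;
```

A correctly single-encoded string like '&copy; Mark Fowler 2012' matches nothing, which is exactly the property the test module relies on.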

Share "Once is Enough"

Share on: FacebookTwitter

Simple Todo List Processing in Perl

While I normally use OmniFocus as a GTD tool to manage my todo lists, sometimes I want to collaborate on a todo list with someone else and I don’t want them to have to use a complicated and expensive tool. I’ve often found in this situation that a simple shared text file is the way to go: a file on the office fileserver, or in a shared Dropbox folder, in an obvious format that anyone can understand with a glance.

Here’s what one looks like:

[X] (extract) Complete coding.
[X] (extract) Get Zefram to code review my code.  
[X] (yapc) Write talk submission for YAPC::NA
[X] (yapc) submit talk proposal with ACT
[ ] (extract) Write Array::Extract::Example document.  
[ ] (extract) Check in and push to github
[ ] (extract) Upload to CPAN. 
[ ] (extract) Publish a blog post about it

In the above example two tasks each from the extract project and the yapc project have been marked as completed. Periodically I want to move these “done” items to a separate archive list - the done file - so they don’t clutter up my list. That’s something I’m going to want to automate with Perl.

The way I’ve chosen to write that script is to use Tie::File, where each element of the array corresponds to a line of the file.

Alternatives to Grepping

At first glance removing all the lines from our array that are ticked might seem like a simple use of grep:

tie my @todo, "Tie::File", $filename or die $!;
@todo = grep { !/\A\[[^ ]]/ } @todo;

But that’s throwing away everything that we want to move to the done file. An alternative might be to write a grep with side effects:

tie my @todo, "Tie::File", $filename or die $!;
open my $done_fh, ">>", $done_filename or die $!;
@todo = grep {
  !/\A\[[^ ]]/ || do {
    say { $done_fh } $_;
    0;
  }
} @todo;

But that’s ugly. The code gets much uglier still if you want a banner preceding the first new entry into the done file saying when the actions were moved there.

What I ended up doing was writing a new module called Array::Extract which exports a function extract that does exactly what you might expect:

my @done = extract { /\A\[[^ ]]/ } @todo;

@todo is modified to remove anything that the block returns true for and those elements are placed in @done.

open my $done_fh, ">>", $done_filename or die $!;
my @done = extract { /\A\[[^ ]]/ } @todo;
print { $done_fh } map { "$_\n" }
  "", "Items archived @{[ DateTime->now ]}:", "", @done;
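Array::Extract's behaviour is simple enough to sketch in core Perl; extract_matching below is a hypothetical stand-in for the real extract, shown only to make the semantics concrete:

```perl
use strict;
use warnings;

# Core-Perl sketch of extract's semantics: remove matching elements
# from the caller's array in place and return them.
sub extract_matching (&\@) {
    my ($predicate, $array) = @_;
    my (@extracted, @kept);
    for my $item (@$array) {
        local $_ = $item;
        if ($predicate->()) { push @extracted, $item }
        else                { push @kept, $item }
    }
    @$array = @kept;          # modify the caller's array in place
    return @extracted;
}

my @todo = ('[X] (yapc) submit talk', '[ ] (extract) upload to CPAN');
my @done = extract_matching { /\A\[[^ ]]/ } @todo;
```

The (&\@) prototype is what lets the caller write a bare block and a plain array, just as with the real module.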

Needs more actions

Of course, if all I wanted to do was remove the actions that had been completed I probably wouldn’t have reached for Tie::File, but for my next trick I’m going to need to insert some extra content at the top of the file once I’m done processing it.

I want to keep track of projects that have had all their remaining actions marked as done and moved to the done file. For example, I’ve ticked off all the actions in the yapc project, so I need more actions (write slides, book flight, etc, etc.) I need a list of these “actionless” projects at the top of my todo list so that when I glance at it I know there are some tasks missing.

Essentially after I run my script I want my todo file to look something like this:

yapc needs more actions

[ ] (extract) Write Array::Extract::Example document
[ ] (extract) Check in and push to github
[ ] (extract) Upload to CPAN
[ ] (extract) Publish a blog post about it

Here’s the final script that handles that case too:

#!/usr/bin/env perl

use 5.012;
use warnings;

use Path::Class;
use Tie::File;
use Array::Extract qw(extract);
use List::MoreUtils qw(uniq last_index);
use DateTime;

########################################################################

my $TODO = file($ENV{HOME}, "Dropbox", "SharedFolder", "TODO.txt");
my $DONE = $TODO->dir->file("DONE.txt");

########################################################################

# work out what projects are in this array, maintaining order
sub projects(@) {
  return uniq grep { defined $_ } map { /^\[.] \(([^)]+)/; $1 } @_;
}

########################################################################

# tie to the todo list file
tie my @todo, "Tie::File", $TODO->stringify or die $!;

# work out what projects are in the file before we remove anything
my @projects = projects @todo;

# remove those items that are done.  
my @done = extract { /\A\[[^ ]]/x } @todo;
exit unless @done;

# append what has been done to another file
print { $DONE->open(">>") or die $! } map { "$_\n" }
  "",
  "Items archived @{[ DateTime->now ]}:",
  "",
  @done;

# work out which projects no longer exist
my %remaining_project = map { $_ => 1 } projects @todo;
@projects = grep { !$remaining_project{ $_ } } @projects;

# insert this section at the top of the file
splice @todo,0,0,map { "$_ needs more actions" } @projects;

# separate the "needs more actions" section out with a blank line
my $break = last_index { /needs more actions\z/ } @todo;
splice @todo, $break+1, 0, "" if defined $break && $todo[$break+1] ne "";

Share "Simple Todo List Processing in Perl"

Share on: FacebookTwitter

Once a week, every week

My new year’s resolution for 2012 is to release a Perl distribution to the CPAN each and every week. And I think you, as a Perl developer, should do this too.

Why am I doing this? Because I’m trying to force myself into more iterative development. Note that I didn’t say a new distribution. Just a new release - of either an existing or new distribution - each week. The simple fact of the matter is that false hubris is causing me not to release as often as I should, and consequently I’m seeing lots of problems.
  • Sometimes I’m tempted to do too much in one release. I’ve got lots of modules that could do with updating, but because I think they need massive amounts of work I don’t ever have the time to make all the changes. I’d be better off just improving them slightly each release and releasing them more often.
  • Sometimes I’m being a perfectionist with my first release. I’ve got a bunch of modules that are 90% done, but because there are theoretically a few more nice-to-have features I haven’t written yet, I’ve not shipped them. I should release early, release often. What extra features these modules need will soon become apparent once they have more real world users than just me, and hey, in the open source world someone else might write them for me.
  • Sometimes I don’t value my code enough: I don’t think the “simple” thing I spent a day or so coding would be useful for anyone else, or I feel it’s beneath me to release something so simple to the CPAN. This of course is nonsense - a day I can save someone else coding is a day they’ve saved, no matter how obvious or simple the code.
This all can pretty much be solved by forcing myself to release more often. So, here’s the challenge:

The Rules

Short version: Upload each week, every week. Longer version:
  • Every week, defined as the midnight between Saturday night / Sunday morning UTC to the midnight between the following Saturday night / Sunday morning UTC, I must release a new distribution to the CPAN. (Note that this gives me no extra or reduced allowance during daylight savings or time zone changes.)
  • For the purpose of disambiguation, timings will be counted by PAUSE upload dates.
  • Should an official PAUSE outage occur and I can, hand on my heart, claim that that stopped me uploading, I will give myself a grace period of forty eight hours after either the end of the outage or the end of the previous week (whichever is longer) to complete the upload. In this situation this upload will count for the previous week and an upload will still have to be made for the week that upload took place in.
  • Scoring for the year will be done by Seinfeld chain length, that is to say by counting the largest run of uninterrupted weeks, with ties decided by total number of weeks with uploads.
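The scoring rule is easy to express in code. A sketch, using a made-up year of weekly upload flags:

```perl
use strict;
use warnings;

# Seinfeld chain length: the longest unbroken run of weeks with an
# upload (1 = uploaded that week, 0 = missed week).
my @weeks = (1, 1, 1, 0, 1, 1, 0, 1);

my ($best, $run) = (0, 0);
for my $uploaded (@weeks) {
    $run  = $uploaded ? $run + 1 : 0;   # a miss resets the chain
    $best = $run if $run > $best;
}

# tie-breaker: total number of weeks with uploads
my $total = grep { $_ } @weeks;
```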

You Can Play Too

Of course, it would be great for the Perl world if every CPAN developer took up this challenge. More importantly, it’d be great for me because it’d give me someone to compete against and to make sure that I keep this self-set challenge up. So how about it? Fancy playing?

Share "Once a week, every week"

Share on: FacebookTwitter

As part of my Pimp Your Mac With Perl talk at the London Perl Workshop I talked very briefly about one of my newer Perl modules that I haven’t blogged about before: Mac::Safari::JavaScript. This module allows you to execute JavaScript inside your browser from within a Perl program and have it just work.

The Power of Just

Piers Cawley gave a lightning talk at YAPC::Europe 2001 about just, in which he explained the power of interfaces that just do what you need whenever you need them to, hiding a bunch of complexity behind themselves.

To run JavaScript in Safari you just call safari_js with the JavaScript you want to execute:

use Mac::Safari::JavaScript qw(safari_js);
safari_js('alert("Hello World");');

This JavaScript is then executed in the current tab of the frontmost Safari window. If you want to return a data structure to Perl from your JavaScript then you just do so:

my $page_title = safari_js("return document.title;");

No matter how complex[1] it is:

my $details = safari_js(<<'ENDOFJAVASCRIPT');
  return {
    title: document.title,
    who: jQuery('#assignedto').next().text(),
  };
ENDOFJAVASCRIPT

If you want variables to be available to you in your JavaScript then you just pass them in:

my $sum = safari_js("return a+b;",{ a => 1, b => 2 });

No matter how complex[2] they are:

use Config;
safari_js("alert(config['version']);", {config => \%Config});

And if you throw an exception in JavaScript, then you just catch it the normal way:

use Try::Tiny;
try {
  safari_js("throw 'bang';");
} catch {
  print "Exception '$_' thrown from within JavaScript";
};

Peeking Under the Hood

So, how does this all hang together? Well, it turns out that running JavaScript in your browser from AppleScript isn’t that hard:

tell application "Safari"
  do JavaScript "alert('hello world');" in document 1
end tell

And running AppleScript from within Perl isn’t that hard either:

use Mac::AppleScript qw(RunAppleScript);
RunAppleScript($applescript);

So it’s a simple problem, right? Just nest the two inside each other. Er, no, it’s not just that simple.

It turns out that handling the edge cases is actually quite hard. Typical problems that come up that Mac::Safari::JavaScript just deals with for you are:

  • How do we encode data structures that we pass to and from JavaScript?
  • How do we escape the strings we pass to and from AppleScript?
  • For that matter how do we encode the program itself so it doesn’t break?
  • What happens if the user supplies invalid JavaScript?
  • How do we get exceptions to propagate properly?
  • With all this thunking between layers, how do we get the line numbers to match up in our exceptions?
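To give a flavour of one of those edge cases, here's a hedged sketch of escaping a string for embedding in an AppleScript double-quoted literal (applescript_quote is my illustrative name, not part of Mac::Safari::JavaScript's API):

```perl
use strict;
use warnings;

# Escape a Perl string so it can sit safely inside an AppleScript
# double-quoted string literal: backslash-escape quotes and backslashes.
sub applescript_quote {
    my $s = shift;
    $s =~ s/([\\"])/\\$1/g;
    return qq{"$s"};
}

my $quoted = applescript_quote('alert("hi");');
```

Get this escaping wrong anywhere in the chain - Perl, AppleScript, or JavaScript - and the whole nested-language sandwich falls apart, which is exactly why a module that handles it once is so valuable.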

And that’s the power of a module that handles the just for you. Rather than writing a few lines of code to get the job done, you can now just write one and have that one line handle all the weird edge cases for you.


  1. Okay, so there are some limits. Anything that can be represented by JSON can be returned, but anything that can’t, can’t. This means you can’t return circular data structures and you can’t return DOM elements. But that would be crazy; just don’t do that.  ↩

  2. Likewise we can only pass into JavaScript something that can be represented as JSON. No circular data structures. No weird Perl not-really-data things such as objects, filehandles, code references, etc.  ↩


Test::DatabaseRow 2

Last week I released an update for one of my older modules, Test::DatabaseRow. In the course of this update I completely rewrote the guts of the module, turning the bloated procedural module into a set of clearly defined Moose-like Perl classes.

Why update now?

Test::DatabaseRow had been failing its tests since Perl 5.13.3 (it was another victim of the changing stringification of regular expressions breaking tests.) We’re currently planning to upgrade our systems at work from 5.12 to 5.14 in the new year, and (embarrassingly) one of the key modules that breaks our 5.14 smoke is Test::DatabaseRow. Oooops.

Since I had my editor open, I decided it might be a good idea to switch to a more modern build system. And, since I was doing that, I thought it might be a good idea to fix one of my long standing todos (testing all rows returned from the database not just the first.)

In other words, once I’d started, I found it hard to stop, and before I knew it I had a reasonably big task on my hands.

The decision to refactor

When I first wrote Test::DatabaseRow back in 2003, like most testing modules of the time, it sported a simple Exporter based interface. The (mostly correct) wisdom was that simple procedural interfaces make it quicker to write tests. I still think that’s true, but:

  • Procedural programming ends up either with very long functions or excessive argument passing. The single function interface made the internals of Test::DatabaseRow quite difficult – to avoid having one giant function I ended up passing all the arguments to a multitude of helper functions and then passing the multiple return values of one function on to the next.

  • Many of the calls you write want to share the same defaults. For example: the database handle to use, whether we should be verbose in our output, whether we should do utf-8 conversion… These are handled reasonably well with package level variables as defaults for arguments not passed to the function (which isn’t such a big deal in a short test script) but the code to support those within the testing class itself isn’t particularly clean, having to cope with defaults evaluation in multiple places.

  • Only being able to return once from the function is a problem. Sometimes you might want to get extra data back after the test has completed. For example, when I wanted to allow you to optionally return the data extracted from the database I had to do it by allowing you to pass, in the args to row_ok, references to variables to be populated as it executes. Again, while this isn’t the end of the world from an external interface point of view, the effect it has on the internals (passing data up and down the stack) is horrible.

For the sake of the internals I wanted things to change. However, I didn’t want to break the API. I decided to split the module into two halves: a simple external facing module that would provide the procedural interface, and an internal object oriented module that would allow me to produce a cleaner implementation.

No Moose, but something similar

As I came to create Test::DatabaseRow::Object I found myself really wanting to write this new class with Moose. Now, Moose is a very heavyweight dependency; you don’t want to have to introduce a dependency on hundreds of modules just because you want to use a simple module to test your code. In fact, Test::DatabaseRow has no non-core dependencies apart from DBI itself, and I wanted to keep it that way with the refactoring. So, no Moose. No Mouse. No Moo. Just me and an editor.

In the end I compromised by deciding to code the module in a Moose “pull accessor” style even if I didn’t have Moose itself to provide the syntax to do this succinctly.

The approach I took was to put most of the logic for Test::DatabaseRow::Object – anything that potentially changes state of the object – into lazy loading read only accessors. Doing this allowed me to write my methods in a declarative style, relying entirely on the accessors performing the calculation to populate themselves the first time they’re accessed. For example. Test::DatabaseRow::Object has a read only accessor called db_results which goes to the database the first time it’s accessed and executes the SQL that’s needed to populate it (and the SQL itself comes from sql_and_bind which, unless explicitly stated in the constructor, is populated on first use from the where and table accessors and so on.)

Since I wasn’t using Moose this produced a lot more code than we’d normally expect to see, but because I was following standard Moose conventions it’s still fairly easy to see what’s going on (I even went as far as to leave a Moose-style has accessor declaration in a comment above the blocks of code I had to write, to sufficiently convey what I was doing.)
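For the curious, the pattern looks something like this in plain Perl (Example::PullAccessor and _build_db_results are illustrative names of my own, not Test::DatabaseRow::Object's actual code):

```perl
use v5.10;
use strict;
use warnings;

package Example::PullAccessor;

sub new { my ($class, %args) = @_; return bless {%args}, $class }

# Equivalent of the Moose declaration:
#   has db_results => (is => 'ro', lazy => 1, builder => '_build_db_results');
# A read-only accessor that pulls its own value on first access.
sub db_results {
    my $self = shift;
    $self->{db_results} //= $self->_build_db_results;
    return $self->{db_results};
}

sub _build_db_results {
    my $self = shift;
    return ["computed on first access"];   # placeholder for the real DB query
}

package main;

my $obj  = Example::PullAccessor->new;
my $rows = $obj->db_results;   # builder fires here, then the value is cached
```

A value passed to the constructor short-circuits the builder entirely, which is what makes the declarative style work.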

A results object

The second big architectural change I made was to stop talking directly to Test::Builder. Instead I changed to returning a results object which was capable of rendering itself out to Test::Builder on demand.

This change made the internals a lot easier to deal with. I was able to break the test up into several functions, each returning a success or failure object. As soon as I was able to detect failure in any of these functions I could return it to Test::DatabaseRow, but if I got a success - which now hadn’t been rendered out to Test::Builder yet - I could throw it away and move on to the next potentially failing test while I still had other things to check.

This made my missing feature - the ability to report on all rows returned from the database, not just the first one - much easier to implement.

Problems, problems, problems

After all this work, and spending hours improving the test coverage of the module, I still botched the release of 2.00. The old module tested the interface with DBI by checking against a database that was on my old laptop in 2003. Since I no longer had that laptop these tests weren’t being run (I actually deleted them since they were useless) and hence I didn’t notice when I’d broken the interface to DBI in my refactoring.

Ilmari pointed out I was being stupid a few minutes after I’d uploaded. Within ten minutes I’d written some DBI tests that test with SQLite (if you have DBD::SQLite installed) and released 2.01.

The biggest surprise was the next day where our overnight Hudson powered smokes failed at work, and the only thing that had changed was Test::DatabaseRow (we re-build our dependencies from the latest from CPAN every night, and it’d automatically picked up the changes.) Swapping in the old version passed. Putting the new version in failed. Had I missed something else in my testing?

No, I hadn’t.

After several hours of head scratching I eventually worked out that there was an extra bug in Test::DatabaseRow 1.04 that I hadn’t even realised was there, and that I’d fixed it in 2.01. The tests were failing in our smokes not because I’d broken the testing infrastructure, but because I’d fixed it: I was now detecting an actual problem in my Hudson test suite that had previously gone unexposed.

What’s next?

Oh, I’m already planning Test::DatabaseRow 2.02. On GitHub I’ve already closed a feature request that chromatic asked for in 2005. Want another feature? File a bug in RT. I’ll get to it sooner or later…


Dear Member of Parliament

As a Perl programmer, both my livelihood and a large chunk of my social life rely entirely on the internet. How would you react if the head of your government made public statements about restricting internet access for people that they (and their agencies) "know" are doing wrong things...
...we are working with the Police, the intelligence services and industry to look at whether it would be right to stop people communicating via these websites and services when we know they are plotting violence, disorder and criminality.
David Cameron, UK Prime Minister
In response, I wrote to my MP. I encourage those of you from the UK to do the same.
Dear Duncan Hames,

I write to you today to express my concerns regarding statements made by the prime minister with respect to restricting access to "social media". It should be fairly obvious, when the Chinese regime is praising our censorship plans, that they are ill thought through and should be scrapped. However obvious, I feel that I must still enumerate the ways in which this plan is wrong on many levels.

Firstly, your prime minister seems unable to distinguish between the medium and the message. As we move further into the digital age more and more communication will take new forms, and these new forms will replace more traditional forms of communication in society. To seek control over some forms of communication is the modern equivalent of the government seeking to control the ability of its citizens to write to newspapers or talk in the street.

Secondly, the idea of the government silencing its citizens, preventing them from communicating with one another, is chilling. While I can understand that some speech may be criminal by its content, woe befall any government who tries to pre-emptively stop such speech, as these very same controls can be used, and abused, to control its citizens.

Thirdly, the prime minister is seeking to put restrictions on people who have not been convicted of a crime (he said, I quote, "when we know they are plotting violence, disorder and criminality", but that is a matter for the courts, not "[the government,] the Police, the intelligence services and industry", to decide.) What safeguards are being proposed to ensure that I, a law-abiding citizen, may not also be restricted from communicating?

Fourthly, and ironically, your prime minister is suggesting restricting the primary means of communication with wider society for the very individuals who he claims live outside of our society.

Finally, I do not understand your prime minister's desire to push for further attention-grabbing legislation when our police forces can already wield the RIP Act to gather evidence from these new forms of communication. While I may not agree with the RIP Act, let our police forces use these powers to full effect before granting them new ones.

As a member of your constituency I ask you to ensure that your prime minister is questioned in Parliament about such blatant flaws in his proposals.

Thanking you in advance for your help in this matter,

Yours sincerely,

Mark Fowler
Those wanting to do more could do a lot worse than set up a regular donation to the Open Rights Group.


blog built using the cayman-theme by Jason Long. LICENSE