Once is Enough

In this blog post I discuss how HTML entities work, how to encode them with Perl, and how to detect when you’ve accidentally double encoded your entities with my module Test::DoubleEncodedEntities.

How HTML Entities Work

In HTML you can represent any character in simple ASCII by using entities. These come in two forms: numeric entities, which use the decimal codepoint of the character, and, for some frequently used characters, more readable named entities.

Character   Unicode codepoint   Decimal entity   Named entity
é           233                 &#233;           &eacute;
©           169                 &#169;           &copy;
☃           9731                &#9731;          none
<           60                  &#60;            &lt;
&           38                  &#38;            &amp;

So instead of writing

<!DOCTYPE html>
<html><body>© 2012 Mark Fowler</body></html>

You can write

<!DOCTYPE html>
<html><body>&copy; 2012 Mark Fowler</body></html>

By delivering a document in ASCII and using entities for any codepoints above 127 you can ensure that even the most broken of browsers will render the right characters.

Importantly, when an entity is converted back into a character by the browser the character no longer has any of its special meaning, so you can use encoding to escape sequences that would otherwise be considered markup. For example:

<!DOCTYPE html>
<html><body>say "yep"
  if $ready &amp;&amp; $bad &lt; $good;</body></html>

This correctly renders as:

say "yep" if $ready && $bad < $good;

Encoding Entities with Perl

The go-to module for encoding and decoding entities is HTML::Entities. Its use is simple: you pass the string you want to encode to the encode_entities function and it returns the same string with the entities encoded:

use HTML::Entities qw(encode_entities);

my $string = "\x{a9} Mark Fowler 2012";
my $encoded = encode_entities($string);
say "<!DOCTYPE html>";
say "<html><body>$encoded</body></html>";

If you no longer need the non-encoded string you can have HTML::Entities modify the string you pass to it in place by not assigning the output to anything (HTML::Entities is smart enough to notice it's being called in void context, where its return value is not being used):

use HTML::Entities qw(encode_entities);

my $string = "\x{a9} Mark Fowler 2012";
encode_entities($string);
say "<!DOCTYPE html>";
say "<html><body>$string</body></html>";

The Double Encoding Problem

The trouble with encoding HTML entities is that if you do it a second time you end up with nonsensical looking text. For example:

use HTML::Entities qw(encode_entities);

my $string = "\x{a9} Mark Fowler 2012";
encode_entities($string);
encode_entities($string);   # oops, encoded a second time
say "<!DOCTYPE html>";
say "<html><body>$string</body></html>";

This outputs:

<!DOCTYPE html>
<html><body>&amp;copy; Mark Fowler 2012</body></html>

Which when rendered by the browser displays

&copy; Mark Fowler 2012

As the &amp; has been turned into & but isn't then combined with the copy; to turn it into the copyright symbol ©.

Each subsequent encoding turns the & at the start of the entity into &amp;, including those at the start of any previously created &amp;. Do this ten or so times and you end up with:

&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;copy; Mark Fowler 2012

The obvious solution is to make sure you encode the entities only once! But that's not as easy as it might seem. If you're building your output up from multiple processes it's quite easy to mistakenly encode twice. Worse, if you're using data that you don't control (for example, extracted from a web page, downloaded from a feed, or imported from a user) you might find that some or all of it has unexpectedly already been encoded.
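
One defensive approach with data of unknown provenance is to decode entities before encoding, so that anything already encoded can't be encoded a second time. A sketch (with the caveat that it will also decode entity-like text the user meant literally):

use HTML::Entities qw(encode_entities decode_entities);

my $data = "&copy; Mark Fowler 2012";   # may or may not be encoded

# decoding first makes encoding idempotent: already-encoded input
# and raw input both end up singly encoded
my $encoded = encode_entities(decode_entities($data));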

Testing for the Problem

I recently re-released my module Test::DoubleEncodedEntities that can be used to write automated tests for double encoding.

use Test::More tests => 1;
use Test::DoubleEncodedEntities;
ok_dee($string, "check for double encoded entities");

It works heuristically by looking for strings that could possibly be double encoded entities. Obviously there are lots of HTML documents out there where it's perfectly legitimate to have double encoded entities: any of them talking about entity encoding, such as this blog post itself, will naturally do so. However, the vast majority – where you control the input – won't contain strings in this format, and we can test for them.
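
The kind of pattern such a heuristic might use looks something like this (a rough sketch, not Test::DoubleEncodedEntities' actual implementation):

use 5.012;

# an encoded ampersand ("&amp;" or its numeric form "&#38;")
# immediately followed by something that itself looks like an
# entity is a good sign of double encoding
my $double_encoded = qr/&(?:amp|\#0*38);\#?\w+;/;

my $html = '&amp;copy; 2012, &#38;#169; Mark Fowler';
my %count;
$count{$_}++ for $html =~ /($double_encoded)/g;
say qq{Found $count{$_} "$_"} for sort keys %count;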

For example:

use Test::More tests => 6;
use Test::DoubleEncodedEntities;

ok_dee("&copy; Mark Fowler 2012",     "should pass");
ok_dee("&amp;copy; Mark Fowler 2012", "should fail");
ok_dee("&#38;copy; Mark Fowler 2012", "should fail");
ok_dee("&#169; Mark Fowler 2012",     "should pass");
ok_dee("&amp;#169; Mark Fowler 2012", "should fail");
ok_dee("&#38;#169; Mark Fowler 2012", "should fail");

Produces the output:

ok 1 - should pass
not ok 2 - should fail
#   Failed test 'should fail'
#   at test.pl line 5.
# Found 1 "&amp;copy;"
not ok 3 - should fail
#   Failed test 'should fail'
#   at test.pl line 6.
# Found 1 "&copy;"
ok 4 - should pass
not ok 5 - should fail
#   Failed test 'should fail'
#   at test.pl line 8.
# Found 1 "&amp;#169;"
not ok 6 - should fail
#   Failed test 'should fail'
#   at test.pl line 9.
# Found 1 "&#169;"
# Looks like you failed 4 tests of 6.

Correctly detecting the double encoded entities in the "should fail" tests.


Simple Todo List Processing in Perl

While I normally use OmniFocus as a GTD tool to manage my todo lists, sometimes I want to collaborate on a todo list with someone else and I don't want them to have to use a complicated and expensive tool. I've often found in this situation that a simple shared text file is the way to go: a file on the office fileserver, or in a shared Dropbox folder, in an obvious format that anyone can understand at a glance.

Here’s what one looks like:

[X] (extract) Complete coding.
[X] (extract) Get Zefram to code review my code.  
[X] (yapc) Write talk submission for YAPC::NA
[X] (yapc) submit talk proposal with ACT
[ ] (extract) Write Array::Extract::Example document.  
[ ] (extract) Check in and push to github
[ ] (extract) Upload to CPAN. 
[ ] (extract) Publish a blog post about it

In the above example two tasks each from the extract project and the yapc project have been marked as completed. Periodically I want to move these "done" items to a separate archive list – the done file – so they don't clutter up my list. That's something I'm going to want to automate with Perl.

The way I’ve chosen to write that script is to use Tie::File, where each element of the array corresponds to a line of the file.
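
Tie::File maps each line of the file to an array element, so reading and writing the array reads and writes the file directly. A quick sketch (the filename is made up):

use 5.012;
use Tie::File;

tie my @line, "Tie::File", "TODO.txt" or die $!;

say $line[0];                     # read the first line of the file
$line[-1] = "[ ] one more task";  # rewrite the last line in place
push @line, "[ ] another task";   # append a new line to the file

untie @line;                      # flush and let go of the file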

Alternatives to Grepping

At first glance removing all the lines from our array that are ticked might seem like a simple use of grep:

tie my @todo, "Tie::File", $filename or die $!;
@todo = grep { !/\A\[[^ ]]/ } @todo;

But that’s throwing away everything that we want to move to the done file. An alternative might be to write a grep with side effects:

tie my @todo, "Tie::File", $filename or die $!;
open my $done_fh, ">>", $done_filename or die $!;
@todo = grep {
  !/\A\[[^ ]]/ || do {
    say { $done_fh } $_;
    0;
  }
} @todo;

But that’s ugly. The code gets much uglier still if you want a banner preceding the first new entry into the done file saying when the actions were moved there.

What I ended up doing was writing a new module called Array::Extract which exports a function extract that does exactly what you might expect:

my @done = extract { /\A\[[^ ]]/ } @todo;

@todo is modified to remove anything that the block returns true for and those elements are placed in @done.

open my $done_fh, ">>", $done_filename or die $!;
my @done = extract { /\A\[[^ ]]/ } @todo;
print { $done_fh } map { "$_\n" }
  "", "Items archived @{[ DateTime->now ]}:", "", @done;

Needs more actions

Of course, if all I wanted to do was remove the actions that had been completed I probably wouldn’t have reached for Tie::File, but for my next trick I’m going to need to insert some extra content at the top of the file once I’m done processing it.

I want to keep track of projects that have had all their remaining actions marked as done and moved to the done file. For example, I've ticked off all the actions in the yapc project, so I need more actions (write slides, book flight, etc.) I need a list of these "actionless" projects at the top of my todo list so when I glance at it I know there are some tasks missing.

Essentially after I run my script I want my todo file to look something like this:

yapc needs more actions

[ ] (extract) Write Array::Extract::Example document
[ ] (extract) Check in and push to github
[ ] (extract) Upload to CPAN
[ ] (extract) Publish a blog post about it

Here’s the final script that handles that case too:

#!/usr/bin/env perl

use 5.012;
use warnings;

use Path::Class;
use Tie::File;
use Array::Extract qw(extract);
use List::MoreUtils qw(uniq last_index);
use DateTime;


my $TODO = file($ENV{HOME}, "Dropbox", "SharedFolder", "TODO.txt");
my $DONE = $TODO->dir->file("DONE.txt");


# work out what projects are in this array, maintaining order
sub projects(@) {
  return uniq grep { defined $_ } map { /^\[.] \(([^)]+)/; $1 } @_;
}

# tie to the todo list file
tie my @todo, "Tie::File", $TODO->stringify or die $!;

# work out what projects are in the file before we remove anything
my @projects = projects @todo;

# remove those items that are done
my @done = extract { /\A\[[^ ]]/x } @todo;
exit unless @done;

# append what has been done to another file
print { $DONE->open(">>") or die $! } map { "$_\n" }
  "", "Items archived @{[ DateTime->now ]}:", "", @done;

# work out which projects no longer exist
my %remaining_project = map { $_ => 1 } projects @todo;
@projects = grep { !$remaining_project{ $_ } } @projects;

# insert the "needs more actions" section at the top of the file
splice @todo, 0, 0, map { "$_ needs more actions" } @projects;

# separate the "needs more actions" section out with a blank line
my $break = last_index { /needs more actions\z/ } @todo;
splice @todo, $break+1, 0, "" if $break >= 0 && $todo[$break+1] ne "";


Once a week, every week

My new year's resolution for 2012 is to release a Perl distribution to the CPAN each and every week. And I think you, as a Perl developer, should do this too.

Why am I doing this? Because I’m trying to force myself into more iterative development. Note that I didn’t say a new distribution. Just a new release – of either an existing or new distribution – a week.

The simple fact of the matter is that false hubris is causing me not to release as often as I should, and consequently I'm seeing lots of problems.

  • Sometimes I’m tempted to do too much in one release. I’ve got lots of modules that could do with updating, but because I think they need massive amounts of work I don’t ever have the time to make all the changes. I’d be better off just improving them slightly each release and releasing them more often.
  • Sometimes I'm being a perfectionist with my first release. I've got a bunch of modules that are 90% done but because there are theoretically a few more nice-to-have features I haven't written yet, I've not shipped them. I should release early, release often. What extra features these modules need will soon become apparent once they have more real world users than just me, and hey, in the open source world someone else might write them for me.
  • Sometimes I don’t value my code enough and I don’t think the “simple” thing I spent a day or so coding would be useful for anyone else or it’s beneath me to release something so simple to the CPAN. This of course is nonsense – a day I can save someone else coding is a day they’ve saved, no matter how obvious or simple the code.

This all can pretty much be solved by forcing myself to release more often. So, here’s the challenge:

The Rules

Short version: Upload each week, every week.

Longer version:

  • Every week, defined as running from the midnight between Saturday night and Sunday morning UTC to the midnight between the following Saturday night and Sunday morning UTC, I must release a new distribution to the CPAN. (Note that this gives me no extra or reduced allowance during daylight savings or time zone changes.)
  • For the purpose of disambiguation, timings will be counted by PAUSE upload dates.
  • Should an official PAUSE outage occur and I can, hand on my heart, claim that it stopped me uploading, I will give myself a grace period of forty-eight hours after either the end of the outage or the end of the previous week (whichever is later) to complete the upload. In this situation the upload will count for the previous week, and an upload will still have to be made for the week in which it took place.
  • Scoring for the year will be done by Seinfeld chain length, that is to say by counting the largest run of uninterrupted weeks, with ties decided by total number of weeks with uploads (see the sketch below).
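
Scoring a Seinfeld chain is simple enough to sketch in a few lines of Perl (a hypothetical helper, not part of any module):

# given one flag per week (1 = uploaded that week, 0 = missed),
# return the longest unbroken run and the total weeks with uploads
sub seinfeld_score {
    my @week = @_;
    my ($longest, $run, $total) = (0, 0, 0);
    for my $uploaded (@week) {
        if ($uploaded) { $run++; $total++ }
        else           { $run = 0 }
        $longest = $run if $run > $longest;
    }
    return ($longest, $total);
}

my ($chain, $weeks) = seinfeld_score(1, 1, 1, 0, 1, 1);  # 3 and 5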

You Can Play Too

Of course, it would be great for the Perl world if every CPAN developer took up this challenge. More importantly, it’d be great for me because it’d give me someone to compete against and to make sure that I keep this self-set challenge up. So how about it? Fancy playing?


As part of my Pimp Your Mac With Perl talk at the London Perl Workshop I talked very briefly about one of my newer Perl modules that I haven't blogged about before: Mac::Safari::JavaScript. This module allows you to execute JavaScript inside your browser from within a Perl program and have it just work.

The Power of Just

Piers Cawley gave a lightning talk at YAPC::Europe 2001 about just, in which he explained the power of interfaces that just do what you need whenever you need them to, hiding a bunch of complexity behind themselves.

To run JavaScript in Safari you just call safari_js with the JavaScript you want to execute:

use Mac::Safari::JavaScript qw(safari_js);
safari_js('alert("Hello World");');

This JavaScript is then executed in the current tab of the frontmost Safari window. If you want to return a data structure to Perl from your JavaScript then you just do so:

my $page_title = safari_js("return document.title;");

No matter how complex[1] it is:

my $details = safari_js(<<'ENDOFJAVASCRIPT');
  return {
    title: document.title,
    who: jQuery('#assignedto').next().text(),
  };
ENDOFJAVASCRIPT

If you want to have variables available to you in your JavaScript then you just pass them in:

my $sum = safari_js("return a+b;",{ a => 1, b => 2 });

No matter how complex[2] they are:

use Config;
safari_js("alert(config['version']);", { config => \%Config });

And if you throw an exception in JavaScript, then you just catch it the normal way:

use Try::Tiny;
try {
  safari_js("throw 'bang';");
} catch {
  print "Exception '$_' thrown from within JavaScript";
};

Peeking Under the Hood

So, how does this all hang together? Well, it turns out that running JavaScript in your browser from AppleScript isn't that hard:

tell application "Safari"
  do JavaScript "alert('hello world');" in document 1
end tell

And running AppleScript from within Perl isn’t that hard either:

use Mac::AppleScript qw(RunAppleScript);
RunAppleScript('tell application "Safari" to activate');

So it’s a simple problem, right? Just nest the two inside each other. Er, no, it’s not just that simple.
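
The naive nesting looks something like this (a sketch of the approach, not of what the module actually does):

use Mac::AppleScript qw(RunAppleScript);

# drop the JavaScript straight into an AppleScript string literal
my $js = "alert('hello world');";
RunAppleScript(qq{tell application "Safari"\n}
             . qq{  do JavaScript "$js" in document 1\n}
             . qq{end tell});

This works right up until $js contains a double quote or a backslash (which corrupts the AppleScript string literal), and it gives you no way to get structured data or exceptions back out – which is where the list of edge cases below comes from.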

It turns out that handling the edge cases is actually quite hard. Typical problems that come up that Mac::Safari::JavaScript just deals with for you are:

  • How do we encode data structures that we pass to and from JavaScript?
  • How do we escape the strings we pass to and from AppleScript?
  • For that matter how do we encode the program itself so it doesn’t break?
  • What happens if the user supplies invalid JavaScript?
  • How do we get exceptions to propagate properly?
  • With all this thunking between layers, how do we get the line numbers to match up in our exceptions?

And that’s the power of a module that handles the just for you. Rather than writing a few lines of code to get the job done, you can now just write one and have that one line handle all the weird edge cases for you.

  1. Okay, so there are some limits. Anything that can be represented by JSON can be returned, but anything that can't, can't. This means you can't return circular data structures and you can't return DOM elements. But that would be crazy; just don't do that.  ↩

  2. Likewise we can only pass into JavaScript something that can be represented as JSON. No circular data structures. No weird Perl not-really-data things such as objects, filehandles, code references, etc.  ↩


Test::DatabaseRow 2

Last week I released an update for one of my older modules, Test::DatabaseRow. In the course of this update I completely rewrote the guts of the module, turning the bloated procedural module into a set of clearly defined Moose-like Perl classes.

Why update now?

Test::DatabaseRow had been failing its tests since Perl 5.13.3 (it was another victim of the changing stringification of regular expressions breaking tests.) We're currently planning to upgrade our systems at work from 5.12 to 5.14 in the new year, and (embarrassingly) one of the key modules that breaks our 5.14 smoke is Test::DatabaseRow. Oooops.

Since I had my editor open, I decided it might be a good idea to switch to a more modern build system. And, since I was doing that, I thought it might be a good idea to fix one of my long standing todos (testing all rows returned from the database not just the first.)

In other words, once I’d started, I found it hard to stop, and before I knew it I had a reasonably big task on my hands.

The decision to refactor

When I first wrote Test::DatabaseRow back in 2003, like most testing modules of the time, it sported a simple Exporter based interface. The (mostly correct) wisdom was that simple procedural interfaces make it quicker to write tests. I still think that’s true, but:

  • Procedural programming ends up either with very long functions or excessive argument passing. The single function interface made the internals of Test::DatabaseRow quite difficult – to avoid having one giant function I ended up passing all the arguments to a multitude of helper functions and then passing the multiple return values of one function on to the next.

  • Many of the calls you write want to share the same defaults. For example: the database handle to use, whether we should be verbose in our output, whether we should do UTF-8 conversion… These are handled reasonably well with package level variables serving as defaults for arguments not passed to the function (which isn't such a big deal in a short test script) but the code to support them within the testing class itself isn't particularly clean, having to cope with default evaluation in multiple places.

  • Only being able to return once from the function is a problem. Sometimes you might want to get extra data back after the test has completed. For example, when I wanted to allow you to optionally return the data extracted from the database I had to do it by letting you pass, in the args to row_ok, references to variables to be populated as it executes. Again, while this isn't the end of the world from an external interface point of view, the effect it has on the internals (passing data up and down the stack) is horrible.

For the sake of the internals I wanted things to change. However, I didn't want to break the API. I decided to split the module into two halves: a simple external facing module that would provide the procedural interface, and an internal object orientated module that would allow me to produce a cleaner implementation.

No Moose, but something similar

As I came to create Test::DatabaseRow::Object I found myself really wanting to write this new class with Moose. Now, Moose is a very heavyweight dependency; you don't want to have to introduce a dependency on hundreds of modules just because you want to use a simple module to test your code. In fact, Test::DatabaseRow has no non-core dependencies apart from DBI itself, and I wanted to keep it that way with the refactoring. So, no Moose. No Mouse. No Moo. Just me and an editor.

In the end I compromised by deciding to code the module in a Moose “pull accessor” style even if I didn’t have Moose itself to provide the syntax to do this succinctly.

The approach I took was to put most of the logic of Test::DatabaseRow::Object – anything that potentially changes the state of the object – into lazily built read only accessors. Doing this allowed me to write my methods in a declarative style, relying entirely on the accessors performing the calculation needed to populate themselves the first time they're accessed. For example, Test::DatabaseRow::Object has a read only accessor called db_results which goes to the database the first time it's accessed and executes the SQL that's needed to populate it (and the SQL itself comes from sql_and_bind which, unless explicitly set in the constructor, is populated on first use from the where and table accessors, and so on.)

Since I wasn't using Moose this produced a lot more code than we'd normally expect to see, but because I was following standard Moose conventions it's still fairly easy to see what's going on (I even went as far as to leave a Moose style has accessor declaration in a comment above each block of code I had to write, to convey what I was doing.)
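
In that style each accessor ends up looking something like this (a sketch of the convention, with a hypothetical builder; not the module's actual source):

# has db_results => ( is => 'ro', lazy => 1,
#                     builder => '_build_db_results' );
sub db_results {
    my $self = shift;
    $self->{db_results} = $self->_build_db_results()
        unless exists $self->{db_results};
    return $self->{db_results};
}

sub _build_db_results {
    my $self = shift;
    # ...execute the SQL from sql_and_bind against the database
    # handle and return the resulting rows...
}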

A results object

The second big architectural change I made was to stop talking directly to Test::Builder. Instead I changed to returning a results object which was capable of rendering itself out to Test::Builder on demand.

This change made the internals a lot easier to deal with. I was able to break the test up into several functions, each returning a success or failure object. As soon as I detected failure in any of these functions I could return it to Test::DatabaseRow, but if I got a success – which now hadn't been rendered out to Test::Builder yet – I could throw it away and move on to the next potentially failing test while I still had other things to check.

This made my missing feature – the ability to report on all rows returned from the database, not just the first one – much easier to implement.
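
The shape of that arrangement is roughly this (hypothetical names throughout; not the actual classes):

package Example::Result;   # a hypothetical stand-in result class
use Test::Builder;

sub new     { my ($class, %args) = @_; return bless {%args}, $class }
sub is_pass { return $_[0]->{is_pass} }

# render the stored outcome out to Test::Builder only when asked
sub pass_to_test_builder {
    my ($self, $description) = @_;
    my $builder = Test::Builder->new;
    $builder->ok($self->is_pass, $description);
    $builder->diag($_) for @{ $self->{diagnostics} || [] };
    return $self->is_pass;
}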

Problems, problems, problems

After all this work, and spending hours improving the test coverage of the module, I still botched the release of 2.00. The old module tested the interface with DBI by checking against a database that was on my old laptop back in 2003. Since I no longer had that laptop these tests weren't being run (I actually deleted them since they were useless) and hence I didn't notice when I broke the interface to DBI in my refactoring.

Ilmari pointed out I was being stupid a few minutes after I’d uploaded. Within ten minutes I’d written some DBI tests that test with SQLite (if you have DBD::SQLite installed) and released 2.01.

The biggest surprise came the next day when our overnight Hudson powered smokes failed at work, and the only thing that had changed was Test::DatabaseRow (we re-build our dependencies from the latest on CPAN every night, and it had automatically picked up the changes.) Swapping in the old version passed. Putting the new version in failed. Had I missed something else in my testing?

No, I hadn’t.

After several hours of head scratching I eventually worked out that there was an extra bug in Test::DatabaseRow 1.04 that I'd not even realised was there, and that I'd fixed it in 2.01. The tests were failing in our smokes not because I'd broken the testing infrastructure, but because I'd fixed it and was now detecting an actual problem in my Hudson test suite that had previously gone unexposed.

What’s next?

Oh, I’m already planning Test::DatabaseRow 2.02. On github I’ve already closed a feature request that chromatic asked for in 2005. Want another feature? File a bug in RT. I’ll get to it sooner or later…


Dear Member of Parliament

As a Perl programmer, both my livelihood and a large chunk of my social life rely entirely on the internet. How would you react if the head of your government made public statements about restricting internet access for people that they (and their agencies) "know" are doing wrong things…

…we are working with the Police, the intelligence services and industry to look at whether it would be right to stop people communicating via these websites and services when we know they are plotting violence, disorder and criminality.

David Cameron, UK Prime Minister

In response, I wrote to my MP. I encourage those of you from the UK to do the same.

Dear Duncan Hames,

I write to you today to express my concerns regarding statements made by the prime minister with respect to restricting access to “social media”.

It should be fairly obvious, when the Chinese regime are praising our censorship plans, that those plans are ill thought through and should be scrapped.

However obvious that may be, I feel that I must still enumerate the ways in which this plan is wrong on many levels.

Firstly, your prime minister seems unable to distinguish between the medium and the message. As we move more and more into the digital age, more and more communication will take new forms, and these new forms will replace more traditional forms of communication in society. To seek control over some forms of communication is the modern equivalent of the government seeking to control its citizens' ability to write to newspapers or talk in the street.

Secondly, the idea of the government silencing its citizens from communicating with one another is chilling. While I can understand that some speech may be criminal by its content, woe befall any government who tries to pre-emptively stop such speech, as these very same controls can be used, and abused, to control its citizens.

Thirdly, the prime minister is seeking to put restrictions on people that have not been convicted of a crime (he said, I quote, "when we know they are plotting violence, disorder and criminality", but that is a matter for the courts, not "[the government,] the Police, the intelligence services and industry", to decide.) What safeguards are being proposed that I, a law-abiding citizen, may not also be restricted from communication?

Fourthly, and ironically, your prime minister is suggesting restricting the primary means of communication with wider society for those very individuals who he claims live outside of our society.

Finally, I do not understand your prime minister’s desire to push for further attention grabbing legislation when our police force can already wield the RIP Act to gather evidence from these new forms of communication. While I may not agree with the RIP Act, let our police forces use these powers to full effect before granting them new ones.

As a member of your constituency I ask you to ensure that the prime minister is questioned about such blatant flaws in his proposals in parliament.

Thanking you in advance for your help in this matter.

Yours sincerely,

Mark Fowler

Those wanting to do more could do a lot worse than set up a regular donation to the Open Rights Group.


London Calling

Now that I don't live in London anymore (I live in Chippenham, which is eighty-two miles away) I don't often get to go to the London Perl Monger socials, but last night, with the meeting happening right by Paddington Station, it was too good a chance to miss.

The hot topic of conversation was obviously the impending YAPC::Europe conference. I sadly won't be attending, since I just got back from my trip to YAPC::NA (which I owe a blog post on), but I was able to give good advice on what talks to go and see, having already seen the US versions. There seemed to be a significant number of clashes in the schedule in Latvia, which I can sympathise with. For example, I was recommending Jesse's talk on 5.16 (which I really enjoyed at YAPC::NA), but it was pointed out that he's up against Smylers, who I think is also an entertaining and informative speaker.

Jesse's talk at YAPC::NA on 5.16 generated quite a bit of conversation around the tables. Taking a straw poll of the people present, I think they liked the direction that's being proposed, and those that could would be attending the talk in Latvia to hear more in person. People in general liked the idea of making the language (optionally) more consistent and easier to parse without losing the ability to run older, sloppier code. Jesse might have been shocked that in Asheville people clapped rather than booed his suggestion that the indirect object syntax not be allowed under "use 5.16", but at work we enforce "no indirect;" on all our code anyway. The idea of laying the groundwork for possibly re-implementing perl 5 (not Perl, but "perl", the interpreter) by introducing cleaner syntax was one thing Jesse said in his talk that people at the social thought was interesting. Sam Vilain pointed out that Git seems to have been re-implemented multiple times and this has been a big advantage for it.

Nicholas Clark arrived hot and in need of beer after running for the train, having been delayed writing grant proposals. This kicked off a discussion about the TPF core maintenance grant, which morphed into a discussion about the availability of talent to work on Perl 5 core issues (we had both Nicholas and Zefram sitting round the table – that's not a bad chunk of the talent pool in itself.) In short, my opinion is that the more work that's done on the Perl core the more interest we'll attract, and that's a good thing.

Problems with hiring in general were discussed; I pointed out that at YAPC::NA lots of companies were hiring and offering telecommute positions so they could get the talent they needed. The outrageous costs charged by not very effective recruiters were mentioned, and a real need for high quality, technically savvy recruiters (or at least recruiters with technical experts) was identified as a gap in the market.

For some reason at some point we got into a big discussion about unicode. Ilmari showed us his awesome library card with "Mannsåker" written as "MannsAyker". "Mannsåker" had obviously gone through some terrible UTF-8 bytes into Latin-1 conversion resulting in "MannsÃ¥ker", and then someone seems to have re-typed that into its ASCII equivalent. It's not like his donor card was much better either! This morphed into a discussion about failed attempts to get domain name registrars to adopt proper unicode characters (and the various security issues related to that.) I wonder if the IT industry will still be dealing with this in twenty years' time? Probably.
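
That failure mode is easy to reproduce (a quick demonstration with Encode, nothing to do with Ilmari's actual library):

use 5.012;
use utf8;
use Encode qw(encode decode);

my $name  = "Mannsåker";
my $bytes = encode("UTF-8", $name);     # correct UTF-8 bytes
my $wrong = decode("Latin-1", $bytes);  # ...read back as Latin-1

binmode STDOUT, ":encoding(UTF-8)";
say $wrong;                             # prints "MannsÃ¥ker"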

As is fitting for any modern IT meetup these days we talked a bit about the problems of scale. This progressed into discussion of the problems of disaster recovery preparation; it's very hard to test without impacting customers (it's easier if you've got completely redundant systems and you're willing to invest in DR with a zero-downtime switchover, but that's rare) and it's actually quite hard to get a grip on what you have and haven't got covered (systems change rapidly, and delaying rollouts to make sure full DR cover is in place may result in a large lost opportunity cost.)

Of course, London.pm still (in addition to all the Perl and computing talk) ricochets between geek trivia and the usual trappings of good friends. "Why don't we talk about Buffy any more?", "Well, what about ponies?", "Hey, all the cool kids on the internet like My Little Pony these days." "Speaking of kids, is your daughter crawling yet?" "She's sitting up and waving." "Oh, while I remember, here's the bib your youngest left at our house last week."

As always, I had fun, and I look forward to attending again soon.


