Perl 6

I see the official release of the Perl 6 language specification happened on Christmas day.

The first piece of commercial web development I did was in Perl 5. A lot of people can probably say the same thing. This one was a content-management system led by James Elson in 1999 at PSWeb Ltd, a small agency in Farringdon that renamed itself to Citria and expanded rapidly during 1999-2001 before deflating even more rapidly when the dotcom bust arrived.

My recollection was that this particular CMS only ever had one customer, the BBC, who used it only for their very small Digital Radio site. But I still have a copy of the code and on inspection it turns out to have some comments that must have been added during a later project, so perhaps it did get deployed elsewhere. It was a neat, unambitious system (that’s a good thing, James is a tasteful guy) that presented a dynamic inline-editing blocks-based admin interface on a backend URL while generating static pages at the front end.

I remember there was an open question, for a time, about whether the company should pursue a product strategy and make this first CMS, or something like it, the basis of its business, or else take up a project strategy and use whatever technology from whichever provider seemed most appropriate for each client. The latter approach won out. It’s interesting to speculate about the other option.

(I like to imagine that the release of Perl 6 is sparking tiresome reminiscences like this from ageing programmers across the world.)

Perl 6 looks like an interesting language. (It’s a different language from Perl 5, not compatible with it.) The great strength of Perl was of course its text-processing capacity, and for all the fun/cute/radically-unmaintainable syntax showcased on the Perl 6 site, it’s clear that that’s still a big focus. Perl 6 appears to be the first language to attempt to do the right thing with Unicode from the ground up: that is, to represent text as Unicode graphemes, rather than byte strings (like C/C++), awkward UTF-16 sequences (Java), or ISO-10646 codepoint sequences (Python 3). This could, in principle, mean your ad-hoc botched-together text processing script in Perl 6 has a better chance of working reliably across multilingual inputs than it would in any other language. Since plenty of ad-hoc botched-together text processing still goes on in companies around the world, that has the feel of a good thing.
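To make the grapheme/codepoint distinction concrete, here's a sketch of the problem in Python 3, which works at the codepoint level (the string and behaviour shown are illustrative):

```python
# Python 3 strings are sequences of Unicode codepoints, not graphemes.
# "é" written as a plain 'e' followed by a combining acute accent is a
# single grapheme but two codepoints, so naive length and reversal
# both misbehave on it.
s = "cafe\u0301"          # "café" with a combining accent: 5 codepoints

print(len(s))             # 5, although a reader sees 4 characters
print(s[::-1])            # reversal detaches the accent from its 'e'
```

A grapheme-level string type, as Perl 6 proposes, would report 4 characters here and reverse the string without mangling it.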

Alte Schönhauser Straße

In 1992, while I was an undergraduate at the University of Bath, I went to Berlin for an industrial placement year. I had started out registered for a 3-year maths degree without a placement, but there was a scheme you could apply to if you changed your mind and fancied going abroad in the middle of it. Hardly anyone applied for it, and I don’t think any applicants from my course were rejected unless they failed their end-of-year exams.

I was to work at the Konrad-Zuse-Zentrum (ZIB), a computing research institution that at the time was based in Charlottenburg, a tidy part of west-central Berlin. My girlfriend, also at Bath but studying languages and responsible for encouraging me to apply to go abroad in the first place, had managed to arrange a study year at the Freie Universität. She moved in to a student residence and I into a modern one-person flat organised by the company, in Mariendorf, in south Berlin.

Mariendorf wasn’t very exciting and my girlfriend’s hall of residence wasn’t all that great, so after a few months we decided to find a new flat together. My spoken German was just functional, hers was good, and between us we spent a lot of time talking to Mitwohnzentrale agents and looking around flats until, in February 1993, we rented a flat in Alte Schönhauser Straße, near the centre of what had been East Berlin.

Central (eastern) Berlin, 1993 (Falkplan).

Central (eastern) Berlin, probably 1975 (RV Stadtplan). This undated map includes the U-Bahn stations that opened in 1973, but shows the 1976 and 1978 openings as “in planning”.

Alte Schönhauser Straße is close to the S-Bahn station at Hackescher Markt, an area officially known as the Spandauer Vorstadt lying just north of Alexanderplatz. This district, and Hackescher Markt especially, is now super-shiny and is known for bars, shops, and lots of tourists. But its present state is a result of a thorough and rapid redevelopment and restoration starting in about 1997. (See p27 onwards of this urban-planning slideshow for an interesting overview of these works.)

Neue Schönhauser Straße / Rosenthaler Straße corner, 1993

In 1993 the area was still a bit of a mess. Many buildings on major streets like Rosenthaler Straße and Neue Schönhauser Straße were pocked with bullet holes or had obvious bits missing. What is now the fancy paved square filled with outdoor cafés, outside the S-Bahn station, was just a road. One of the commercial streets now present (An der Spandauer Brücke) was then a scrubby park, I assume because the buildings had been pulled down after the war and were not rebuilt until the area was restored. The now-restored Hackescher Hof complex was an unremarkable grey residential building: I had no idea there was supposed to be anything interesting about it. It could all feel a bit bleak.

Street scene, 1993 – possibly Rosenthaler Straße, but I’m not sure exactly which street. If you know, I’d love to hear.

But this was a thrilling area to live in as a transient foreigner at the time. Many of the most decaying buildings were temporarily housing intimidating and/or tantalising makeshift bars, cafés, or art venues, like autonomous growths forcing themselves up through the gaps in what had once been reliable, gentlemanly turn-of-the-20th-century buildings. From Hackescher Markt up Oranienburger Straße to the looming art-project squat of Tacheles the area was full of a sort of place I had never seen before. It all seemed sudden, urgent, about to collapse. I knew very little about any of it and I can hardly even characterise myself as a participant, but I loved being able to be there.

Rear view of Tacheles, from Friedrichstraße, 1993.

Many of these venues turned out to be longer-lived than I would have expected — Silberstein, on Oranienburger Straße, and Tacheles itself both stuck around until 2013, gradually increasing their plate-glass quotient as they went. (The back of Tacheles, which ended up glazed over, was initially a void you could fall out of.)

* * *

The flat had four rooms, much bigger than the modern one I’d become used to, and it was I think smartly decorated and desirable for its time and place, but it did have certain limitations. Each room had a coal-fired heating stove, which worked pretty well, but of course you had to bring coal up in a bucket from the cellar and light a fire in each stove some time before you needed the heat. These stoves were commonplace at the time, so the air around smelled of coal smoke in a way that western cities hadn’t, I imagine, for some decades by then. There was no bathroom; the loo was outside the flat, shared with the flat next door. Hot water was through a small electric heater above the kitchen sink that could provide about a kettle’s worth. The owners had installed a standalone electric shower unit in the utility room, which was very natty but a bit ineffectual: switch it on, wait ten minutes for it to heat up, get in, enjoy two minutes of warm water.

Still, it was a lovely flat, and I loved the coal tang of the cold night air and the late-night sound of the Rosenthaler Straße tram, its swooping creak as it slowed for a tram stop, carried to the window across the wasteland at the back of the building.

* * *

As it happened, when the day came to move all our belongings to the flat, I was alone: my girlfriend was temporarily in the UK with her family. We didn’t have all that much stuff, so I thought I could just load up a backpack and take it on the U-Bahn. I took three or four loads from my flat and three or four from her rooms, and by the time everything had been moved, it was late and I was very tired. I dumped everything and fell into bed.

February in Berlin can be very cold. I woke up early in the morning and found it difficult to move my limbs. I could move my head, but when I did, a crushing pain rolled through it. I’ve never known a headache to compare.

I realised that I couldn’t move because I was too cold, and that I needed to do something about it. But I’d forgotten to bring up any coal before I fell asleep the previous night. To warm up, I would need to pull on some clothes, find a bucket, make my way out of the flat and down to the unfamiliar cellar, fill up with coal, drag the bucket back up, set up and light a fire, and hang on long enough for the stove to warm up. It was a painfully long journey for such a simple job.

* * *

I’ve been back a handful of times since 1993, though sadly never for long enough to get beyond the initial phase of just boggling at things that have changed. The really obvious changes are of course along the stretch of the Wall, like the whole new Potsdamer Platz, but it’s the details like shops and venues, or changes to transport layout, that are the most interesting for me.

Kaufhof (former Centrum department store), Alexanderplatz, 1993. At the time the food store here was still a decent place for a normal weekly shop. It’s much more upmarket now.

Some of this has to do with the time-telescoping effect of getting older, but it also has to do with not being there. If I’d been working there while all this was happening, I would probably be unable to remember what it had ever been like before, just as I can hardly remember what London’s Docklands area was like before its second big wave of building at the turn of the millennium.

Proprietary Unix

From 1992 to 1998, every paid job I did came with a Unix workstation on my desk. Admittedly that only covers three employers, but it covers a lot of different kinds of workstation.

In those days, selling Unix software (unless you could dictate the hardware as well) involved a lot of porting, and companies would build up a library of different workstations to port and test on. A bit like Android development nowadays, but much more expensive.

At some point I used, or had in the rooms around me, machines running

  • Silicon Graphics IRIX on MIPS processors (the SGI Indigo and Indy—the natty coloured boxes)
  • Sun Solaris on SPARC (with my favourite keyboards, the Sun Type 5)
  • SunOS 4 on Motorola 68K (immense single-bit-depth mono screens)
  • DEC Ultrix on MIPS, and OSF/1 on Alpha (everyone wanted the Alpha)
  • SCO Unix on Intel x86 (nobody wanted that)
  • Hewlett-Packard HP-UX on HP Precision Architecture (nice hardware, didn’t enjoy the OS)
  • Data General DG/UX on AViiON (not a very likeable machine)
  • IBM AIX on POWER architecture (fast, though I was never into the rattly keyboards)
  • and a System V implementation from Godrej & Boyce of India running on Intel i860

That was up to 1998.

From 1999 to 2014, every paid job I’ve done—other than excursions into Windows for specific bits of work—has come with an Intel PC running Linux on my desk.

I suppose proprietary Unix workstations made something of a comeback in the shape of Apple’s Mac Pro line with OS X. I think of the dustbin-shaped Mac Pro as a successor to SGI workstations like the Indy and O2: the sort of thing you would want to have on your desk, even if it wasn’t strictly what you needed to get the job done.


I found my old Russian SLR camera a few days ago.

It’s a Zenit EM Olympic edition, a tie-in from the 1980 Moscow Olympics. The Russian Zenit, and more so its East German cousin the Praktica, were popular manual SLR cameras for beginner photographers in the UK in the 80s and 90s. I got mine second-hand for perhaps 20 quid in the early 90s. It’s big, very heavy, and clumsy to operate, and I was never a very good photographer—I doubt if I ever got more than two or three acceptable photos from it. Of course I decided it must be an awful camera.

Nowadays I use an Olympus E-PL3 Micro Four Thirds system camera. I have enough residual interest in the mechanics of photography to enjoy using a “proper” camera rather than a good smartphone and this is a light, efficient model that has worked well for me.

When I found the old Zenit, though, I thought—hey, can I use this lens with my new camera? Was it really as awful as I thought, or was it just me?

It turns out to be quite easy to do. The lens is a Helios 44M, a very common Russian make with a slightly antique fitting, the M42 thread. A local camera shop had an adapter.

The lens weighs more than the camera body: almost as much as a full jar of marmalade. And it’s almost entirely manual.

Manual focus, uh oh

When I bought the adapter, the guy in the shop insisted I would get no help at all from the camera: manual focus, manual aperture, manual shutter, and no metering. That turned out not to be true—focus and aperture are manual, but the camera can still handle metering and shutter speed.

And it turns out that it was just me: the Helios is quite a good lens.

Swan, Round Pond

Manual focus is… tricky… and I’m not very good at it, but manual focus and aperture are a lot more fun when you have instant replay and an automatic shutter. A heavy lens like this isn’t too bad to hold, either: you just hold the camera by the lens.

What does feel a bit more specialised is the new “equivalent” focal length. The lens’s focal length of 58mm is of course unchanged, but the Micro Four Thirds sensor is half the linear size of a 35mm negative, giving an effective equivalent of a 116mm focal length on a 35mm camera: pretty zoomy. Not the sort of thing you can just wander around taking scenes with, though it’s a good focal length for portraits, architectural detail, and animals.
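The arithmetic behind that equivalence is just the crop factor, which for Micro Four Thirds is about 2; a quick sketch:

```python
# "35mm equivalent" focal length: the actual focal length times the
# crop factor. Micro Four Thirds sensors are about half the linear
# size of a 35mm frame, so the crop factor is roughly 2.
def equivalent_focal_length(focal_mm, crop_factor=2.0):
    return focal_mm * crop_factor

print(equivalent_focal_length(58))    # 116.0 -- the Helios on MFT
```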

(For comparison, it’s about the same frame as the well-regarded Olympus 60mm macro lens. Here: I took the same photo with the Helios and the Olympus lens.)

Squirrel in Hyde Park

The Helios is known for a distinctive circular light pattern in the out-of-focus backgrounds, which is appealing, if not what you’d always want.

Put things together

I’ve really enjoyed using this lens, but that doesn’t have a great deal to do with its optical qualities. It’s a decent lens, but I already own a better one of a similar focal length. (Though if I’d found my old camera and tried out the lens earlier, I might not have bought the comparatively expensive Olympus 60mm.)

But I do enjoy the history and (literal) weight of this lens, and I enjoy having a manual focus ring and being required to use it.

I don’t think I would ever—even now—set one of my autofocus lenses to manual focus, even though they all have focus rings, because I know I get better photos out the other end with autofocus. I’m just not good enough at it. But I’m delighted that I found the old camera and did something with it.

And it’s exciting to be able to make your camera out of all these different bits.

To be able to take a component built to a standard devised in 1949 and stick it on a very contemporary camera—I feel this is revealing, not so much of the future-proofing of the original standard or the backward compatibility of the new one, as of the fact that cameras are still mostly optical instruments and glass optics have been made to much the same, wonderfully high, standards for many decades now.

Functional programming and the joy of learning something again

Twenty years ago, as a maths-and-computing undergraduate at the University of Bath, I was introduced to functional programming using the ML language by the excellent Julian Padget. We undergrads were set the traditional assignment of writing a sed-like text processor in ML, and found it first baffling and then, if we were lucky, rather exciting. I got all enthusiastic about functional languages and, like any proper enthusiast, spent a delightful pointless while trying to design a toy language of my own.

Then I went off and got a job as a C++ programmer.

C++ is a practical, useful language, but it’s also complex, verbose, and baroque, with many different schools of practice. I’ve done a fair bit of work in other languages as well (equally boring ones, like Java) but effectively, I’ve spent much of the past two decades simply continuing to learn C++. That’s a bit crazy.

So when messing about with Android recently, I decided I wanted to try to get some of that old sense of joy back. I went looking for a language with the following properties:

  • It should be primarily functional rather than object-oriented
  • It should be strongly-typed, ideally with Hindley-Milner typing (the most exciting thing about ML, for the undergraduate me)
  • It should have a compiler to JVM bytecode, so I could use it directly in Android apps, and should be able to use Java library code
  • It should have a REPL, an interactive evaluation prompt for productive messing around
  • It should be nice to read—it should be obviously a language I wanted to learn, and I was going to be happily guided by personal whim
  • It must be simple enough for the old, stupid me to have a chance of getting to grips with it
  • And, while I wasn’t going to care very much how mainstream it was, it did need to be reasonably complete and reliable.

There are lots of languages out there for the JVM, including quite a few functional ones. Scala and Clojure are the best-known.

Scala (here in Wikipedia) is a multi-paradigm language that, for me, has shades of C++ in that it feels like it’s designed to improve on all sorts of things in Java rather than be something simple of its own. It also looks object-oriented first and functional second; doesn’t prioritise interactive evaluation; and although it has a sophisticated type system, it doesn’t do inference on function parameter types. All in all, it just seemed a bit chunky to me.

Clojure (here in Wikipedia) looks more fun. It has a very vibrant community and seems well-loved. It’s basically a Lisp for the JVM, and I’ve written Lisp before. That’s definitely interactive and functional. But I wasn’t really setting out to find Lisp again.


Having sifted through a few other possibilities, the one that really seemed to fit the bill was Yeti.

Yeti is a functional language with Hindley-Milner type inference, for the JVM, with a relatively simple syntax and interoperability with Java, that has an interactive REPL up front. (See the snappy tutorial.) It seems to be basically the work of one programmer, but a programmer with taste.

The syntax of Yeti looks a bit like the way I remember ML—although on reviewing ML, it turned out not to be as similar as I’d thought. Functions are defined and applied with just about the simplest possible syntax, and the language deduces the types of all values except Java objects. The lack of commas in function application syntax makes it obvious how to do partial application, a straightforward way to obtain closures (functions with context).

Here’s a trivial function, an application of it, and a partial application. The lines starting > are my typing, and the others are type deductions returned by the evaluation environment. Comments start //, as in Java.

> f a b = a + b   // f is a function taking two arguments
f is number -> number -> number = <code$f>
> f 2 3           // apply f to two numbers
5 is number
> f 2             // apply f to one number, returning a new function
<yeti.lang.Fun2_> is number -> number
> g = f 2         // assign that function to g
g is number -> number = <yeti.lang.Fun2_>
> g 3             // apply g to the second number
5 is number

So far, so academic toy language. But the more I apply Yeti to practical problems, the more I find it does work as a practical language.

What is challenging, of course, is that every language or language family has its set of idioms for handling everyday problems, and on the whole I simply don’t know those idioms yet in a functional language. This is the first time I’ve really tried to do anything serious with one. I know the language, roughly, but I don’t really speak the language. I’m still leafing through the phrasebook. My hovercraft is still full of eels.

With most of my incredibly stupid questions on the Yeti mailing list—which get very patient responses, but I really do need to cut back on the late-night stupidity and start reviewing my code in the morning instead—the answer turns out to be, “it’s simpler than I thought”. And I’m really enjoying it.

Why type inference?

A footnote. Why did I want a language with type inference?

Well, I’m lazy of course, and one of the most obvious sources of tedium in C++ and Java is having to type everything out over and over again.

And I’m a bit nostalgic about those undergrad days, no doubt.

But also, I’m increasingly mistrustful of my own work. In languages such as Python and Objective-C the concept of duck typing is widely used. This essentially means employing objects on the basis of their supported methods rather than their nominal class (“if it walks like a duck…”). This relaxed approach reaches a bit of a pinnacle in Ruby on Rails, which I’ve been working with a bit recently—and I find the magic and the assumptions go a bit far for me. I like to have some of the reassurance of type checking.
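As a tiny illustration of duck typing (the classes here are made up), Python happily accepts any object that supplies the method being called:

```python
# Duck typing: the function cares only that its argument has the
# method it calls, not which class the argument belongs to.
class Duck:
    def walk(self):
        return "waddle"

class Robot:
    def walk(self):
        return "clank"

def exercise(thing):
    return thing.walk()     # no declared type: anything that walks, works

print(exercise(Duck()))     # waddle
print(exercise(Robot()))    # clank
```

In an HM-inferred language the compiler would work out the same requirement from the body of the function and check it at compile time, rather than leaving a missing method to fail at run time.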

So, type inference gives you—in theory—the best of both worlds. You get to write your code using duck-typing principles, and the compiler proof-reads for you and checks that your types really do work out.

That’s the theory. Does it scale? Not sure. And if it were so great, wouldn’t it have become more mainstream during the past 30 years? Some moderately widely-used languages, like Haskell, use it, but they’re still only moderately widely used. So, we’ll see.

There are some obvious immediate disadvantages to type inference. Long error messages, for a start.

And as a C++ guy I somewhat miss function overloading and even (sometimes) operator overloading. A function argument can take more than one type, of course—that’s the whole point—but only if the types can be unified in terms of their properties; you can’t just reuse a function name for a second function that has a similar effect but works on unrelated types.

Most of the situations in which I want function overloading can be handled instead using variant types or module namespaces, both of which work well in Yeti, but sometimes it seems to come down to inventing slightly more awkward function names than I’d really like.
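The variant-type workaround can be sketched like this, with Python standing in for Yeti’s tagged variants (all the names here are invented for illustration):

```python
# Standing in for a variant type: one function over one tagged union,
# instead of two same-named overloads on unrelated types.
from dataclasses import dataclass
from typing import Union

@dataclass
class Celsius:
    value: float

@dataclass
class Fahrenheit:
    value: float

Temperature = Union[Celsius, Fahrenheit]

def to_kelvin(t: Temperature) -> float:
    # Dispatch on the variant tag rather than on an overloaded name.
    if isinstance(t, Celsius):
        return t.value + 273.15
    return (t.value - 32) * 5 / 9 + 273.15

print(to_kelvin(Celsius(0)))       # 273.15
print(to_kelvin(Fahrenheit(32)))   # 273.15
```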

Buy Our Superior Celluloid Cylinders

M., brandishing new telephone: I find it a bit difficult to actually make phone calls, but it’s great for the internet. No, I really like it. The battery’s hopeless though.

Me: How often do you have to charge it?

M.: About every two days. I thought it was defective at first.

A fun mental exercise is to think of an old product that has been superseded by a newer one, and imagine that their roles are reversed—would you be able to sell anyone the old product as a replacement for the new?

VHS tapes, for example: more intuitive seeking than your old DVD player; no unskippable gubbins at the start; the tape remembers where you’d got up to if you stop and restart; easy to record and re-record on. Very practical!

Awful picture and sound quality though, and much too big. Probably wouldn’t sell all that many, but you’ve at least got the beginnings of a promotional campaign there. You could have a crack at it.

Similarly, DVD looks pretty promising as an improvement over Blu-Ray, being superior in almost every practical detail.

I can imagine trying to flog LP records as an alternative format to digital audio, with quite distinct areas of strength, though I can’t see all that much hope for CDs in between.

Selling your “All-New Feature Phone” as a low-cost, lightweight, miniaturised upgrade for a smartphone would be tricky. Popular new technologies often involve new input methods, and users find it very hard to go back. But if you had to try, you could make a pretty good start by talking about batteries.

Imagine being able to go on holiday for a week or more, and still stay in touch without having to ever worry about finding a charger. That’s what the latest battery management technology exclusive to “Feature Phones” brings you!

The original iPhone reintroduced the sort of comically short battery life familiar to those of us who had mobile phones in 1997 or thereabouts, and since then phones seem to have been going about the same way as laptops did during the 2000s—a series of incremental improvements consumed by incrementally more powerful hardware, meaning we ended the decade with much the same order of magnitude of battery life as we started with.