Proprietary Unix

From 1992 to 1998, every paid job I did came with a Unix workstation on my desk. Admittedly that only covers three employers, but it covers a lot of different kinds of workstation.

In those days, selling Unix software (unless you could dictate the hardware as well) involved a lot of porting, and companies would build up a library of different workstations to port and test on. A bit like Android development nowadays, but much more expensive.

At some point I used, or had in the rooms around me, machines running

  • Silicon Graphics IRIX on MIPS processors (the SGI Indigo and Indy—the natty coloured boxes)
  • Sun Solaris on SPARC (with my favourite keyboards, the Sun Type 5)
  • SunOS 4 on Motorola 68K (immense single-bit-depth mono screens)
  • DEC Ultrix on MIPS, and OSF/1 on Alpha (everyone wanted the Alpha)
  • SCO Unix on Intel x86 (nobody wanted that)
  • Hewlett-Packard HP-UX on HP Precision Architecture (nice hardware, didn’t enjoy the OS)
  • Data General DG/UX on AViiON (not a very likeable machine)
  • IBM AIX on POWER architecture (fast, though I was never into the rattly keyboards)
  • and a System V implementation from Godrej & Boyce of India running on Intel i860

That was up to 1998.

From 1999 to 2014, every paid job I’ve done—other than excursions into Windows for specific bits of work—has come with an Intel PC running Linux on my desk.

I suppose proprietary Unix workstations made something of a comeback in the shape of Apple’s Mac Pro line with OS X. I think of the dustbin-shaped Mac Pro as a successor to SGI workstations like the Indy and O2: the sort of thing you would want to have on your desk, even if it wasn’t strictly what you needed to get the job done.

Helios

I found my old Russian SLR camera a few days ago.

It’s a Zenit EM Olympic edition, a tie-in from the 1980 Moscow Olympics. The Russian Zenit, and more so its East German cousin the Praktica, were popular manual SLR cameras for beginner photographers in the UK in the 80s and 90s. I got mine second-hand for perhaps 20 quid in the early 90s. It’s big, very heavy, and clumsy to operate, and I was never a very good photographer—I doubt if I ever got more than two or three acceptable photos from it. Of course I decided it must be an awful camera.

Nowadays I use an Olympus E-PL3 Micro Four Thirds system camera. I have enough residual interest in the mechanics of photography to enjoy using a “proper” camera rather than a good smartphone and this is a light, efficient model that has worked well for me.

When I found the old Zenit, though, I thought—hey, can I use this lens with my new camera? Was it really as awful as I thought, or was it just me?

It turns out to be quite easy to do. The lens is a Helios 44m, a very common Russian make with a slightly antique fitting, the M42 thread. A local camera shop had an adapter.

The lens weighs more than the camera body: almost as much as a full jar of marmalade. And it’s almost entirely manual.

Manual focus, uh oh

When I bought the adapter, the guy in the shop insisted I would get no help at all from the camera: manual focus, manual aperture, manual shutter, and no metering. That turned out not to be true—focus and aperture are manual, but the camera can still handle metering and shutter speed.

And it turns out that it was just me: the Helios is quite a good lens.

Swan, Round Pond

Manual focus is… tricky… and I’m not very good at it, but manual focus and aperture are a lot more fun when you have instant replay and an automatic shutter. A heavy lens like this isn’t too bad to hold, either: you just hold the camera by the lens.

What does feel a bit more specialised is the new “equivalent” focal length. The lens has a 58mm focal length, which is unchanged of course, but the Micro Four Thirds sensor is half the linear size of a 35mm negative, giving an effective equivalent of 116mm on a 35mm camera: pretty zoomy. Not the sort of thing you can just wander around taking scenes with, though it’s a good focal length for portraits, architectural detail, and animals.
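The arithmetic behind that “equivalent” figure is simple enough to sketch. Here it is in Python, purely as an illustration (the function name is mine, and the 2× crop factor is the figure usually quoted for Micro Four Thirds):

```python
# Equivalent focal length on a crop sensor: the lens's real focal
# length multiplied by the crop factor (the full-frame diagonal
# divided by the sensor diagonal). Micro Four Thirds is usually
# quoted as a 2x crop.
def equivalent_focal_length(focal_mm: float, crop_factor: float) -> float:
    return focal_mm * crop_factor

# The Helios 44m's 58mm on a Micro Four Thirds body:
print(equivalent_focal_length(58, 2.0))  # 116.0
```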

(For comparison, it’s about the same frame as the well-regarded Olympus 60mm macro lens. Here: I took the same photo with the Helios and the Olympus lens.)

Squirrel in Hyde Park

The Helios is known for a distinctive circular light pattern in the out-of-focus backgrounds, which is appealing, if not what you’d always want.

Put things together

I’ve really enjoyed using this lens, but that doesn’t have a great deal to do with its optical qualities. It’s a decent lens, but I already own a better one of a similar focal length. (Though if I’d found my old camera and tried out the lens earlier, I might not have bought the comparatively expensive Olympus 60mm.)

But I do enjoy the history and (literal) weight of this lens, and I enjoy having a manual focus ring and being required to use it.

I don’t think I would ever—even now—set one of my autofocus lenses to manual focus, even though they all have focus rings, because I know I get better photos out the other end with autofocus. I’m just not good enough at it. But I’m delighted that I found the old camera and did something with it.

And it’s exciting to be able to make your camera out of all these different bits.

To be able to take a component built to a standard devised in 1949 and stick it on a very contemporary camera—I feel this is revealing, not so much of the future-proofing of the original standard or the backward compatibility of the new one, as of the fact that cameras are still mostly optical instruments and glass optics have been made to much the same, wonderfully high, standards for many decades now.

Functional programming and the joy of learning something again

Twenty years ago, as a maths-and-computing undergraduate at the University of Bath, I was introduced to functional programming using the ML language by the excellent Julian Padget. We undergrads were set the traditional assignment of writing a sed-like text processor in ML, and found it first baffling and then, if we were lucky, rather exciting. I got all enthusiastic about functional languages and, like any proper enthusiast, spent a delightful pointless while trying to design a toy language of my own.

Then I went off and got a job as a C++ programmer.

C++ is a practical, useful language, but it’s also complex, verbose, and baroque, with many different schools of practice. I’ve done a fair bit of work in other languages as well (equally boring ones, like Java) but effectively, I’ve spent much of the past two decades simply continuing to learn C++. That’s a bit crazy.

So when messing about with Android recently, I decided I wanted to try to get some of that old sense of joy back. I went looking for a language with the following properties:

  • It should be primarily functional rather than object-oriented
  • It should be strongly-typed, ideally with Hindley-Milner typing (the most exciting thing about ML, for the undergraduate me)
  • It should have a compiler to JVM bytecode, so I could use it directly in Android apps, and should be able to use Java library code
  • It should have a REPL, an interactive evaluation prompt for productive messing around
  • It should be nice to read—it should be obviously a language I wanted to learn, and I was going to be happily guided by personal whim
  • It must be simple enough for the old, stupid me to have a chance of getting to grips with it
  • And, while I wasn’t going to care very much how mainstream it was, it did need to be reasonably complete and reliable.

There are lots of languages out there for the JVM, including quite a few functional ones. Scala and Clojure are the best-known.

Scala (here in Wikipedia) is a multi-paradigm language that, for me, has shades of C++ in that it feels like it’s designed to improve on all sorts of things in Java rather than be something simple of its own. It also looks object-oriented first and functional second; doesn’t prioritise interactive evaluation; and although it has a sophisticated type system, it doesn’t do inference on function parameter types. All in all, it just seemed a bit chunky to me.

Clojure (here in Wikipedia) looks more fun. It has a very vibrant community and seems well-loved. It’s basically a Lisp for the JVM, and I’ve written Lisp before. That’s definitely interactive and functional. But I wasn’t really setting out to find Lisp again.

Yeti

Having sifted through a few other possibilities, the one that really seemed to fit the bill was Yeti.

Yeti is a functional language with Hindley-Milner type inference, for the JVM, with a relatively simple syntax and interoperability with Java, that has an interactive REPL up front. (See the snappy tutorial.) It seems to be basically the work of one programmer, but a programmer with taste.

The syntax of Yeti looks a bit like the way I remember ML—although on reviewing ML, it turned out not to be as similar as I’d thought. Functions are defined and applied with just about the simplest possible syntax, and the language deduces the types of all values except Java objects. The lack of commas in function application syntax makes it obvious how to do partial application, which is a straightforward way to obtain closures (functions with context).

Here’s a trivial function, an application of it, and a partial application. The lines starting > are my typing, and the others are type deductions returned by the evaluation environment. Comments start //, as in Java.

> f a b = a + b   // f is a function taking two arguments
f is number -> number -> number = <code$f>
> f 2 3           // apply f to two numbers
5 is number
> f 2             // apply f to one number, returning a new function
<yeti.lang.Fun2_> is number -> number
> g = f 2         // assign that function to g
g is number -> number = <yeti.lang.Fun2_>
> g 3             // apply g to the second number
5 is number
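For comparison, here is a rough equivalent of that session in Python, a language I know far better. This is my own illustration, nothing from the Yeti documentation; Python doesn’t curry automatically, so functools.partial stands in for Yeti’s built-in partial application:

```python
from functools import partial

def f(a, b):      # f is a function taking two arguments
    return a + b

print(f(2, 3))    # apply f to two numbers: prints 5

g = partial(f, 2) # "apply f to one number", getting a new function
print(g(3))       # apply g to the second number: prints 5
```

The Yeti version is terser because partial application needs no helper at all: `f 2` alone yields the new function.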

So far, so academic toy language. But the more I apply Yeti to practical problems, the more I find it does work as a practical language.

What is challenging, of course, is that every language or language family has its set of idioms for handling everyday problems, and on the whole I simply don’t know those idioms yet in a functional language. This is the first time I’ve really tried to do anything serious with one. I know the language, roughly, but I don’t really speak the language. I’m still leafing through the phrasebook. My hovercraft is still full of eels.

With most of my incredibly stupid questions on the Yeti mailing list—which get very patient responses, but I really do need to cut back on the late-night stupidity and start reviewing my code in the morning instead—the answer turns out to be, “it’s simpler than I thought”. And I’m really enjoying it.

Why type inference?

A footnote. Why did I want a language with type inference?

Well, I’m lazy of course, and one of the most obvious sources of tedium in C++ and Java is having to type everything out over and over again.

And I’m a bit nostalgic about those undergrad days, no doubt.

But also, I’m increasingly mistrustful of my own work. In languages such as Python and Objective-C the concept of duck typing is widely used. This essentially means employing objects on the basis of their supported methods rather than their nominal class (“if it walks like a duck…”). This relaxed approach reaches a bit of a pinnacle in Ruby on Rails, which I’ve been working with a bit recently—and I find the magic and the assumptions go a bit far for me. I like to have some of the reassurance of type checking.

So, type inference gives you—in theory—the best of both worlds. You get to write your code using duck-typing principles, and the compiler proof-reads for you and checks that your types really do work out.
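A minimal sketch of the duck-typing half of that bargain, in Python only because it is the duck-typed language named above (the class and function names are invented for illustration):

```python
# Duck typing: these classes share no base class, but both "walk
# like a duck" by providing a quack() method, so the same function
# happily accepts either.
class Duck:
    def quack(self):
        return "quack"

class Robot:
    def quack(self):
        return "beep"

def make_noise(thing):
    return thing.quack()

print(make_noise(Duck()))   # quack
print(make_noise(Robot()))  # beep

# The catch: with plain duck typing, make_noise(42) only fails at
# runtime, with an AttributeError, when the call executes. A
# Hindley-Milner compiler infers "thing must have quack()" and
# rejects the equivalent program before it ever runs.
```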

That’s the theory. Does it scale? Not sure. And if it was so great, wouldn’t it have become more mainstream during the past 30 years? Some moderately widely-used languages, like Haskell, use it, but they’re still only moderately widely-used. So, we’ll see.

There are some obvious immediate disadvantages to type inference. Long error messages, for a start.

And as a C++ guy I somewhat miss function overloading and even (sometimes) operator overloading. A function argument can take more than one type, of course—that’s the whole point—but only if the types can be unified in terms of their properties; you can’t just reuse a function name for a second function that has a similar effect but works on unrelated types.

Most of the situations in which I want function overloading can be handled instead using variant types or module namespaces, both of which work well in Yeti, but sometimes it seems to come down to inventing slightly more awkward function names than I’d really like.
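To illustrate the variant-type alternative, here is a rough sketch in Python rather than Yeti (a tuple tagged with a string stands in for a proper variant type; the names are mine): one function, one tagged union of otherwise unrelated shapes, and a dispatch on the tag instead of two overloads.

```python
import math

# Instead of overloading "area" once per unrelated shape type, a
# single function takes a tagged value and dispatches on the tag --
# roughly what a variant type gives you in a functional language.
def area(shape):
    tag, *args = shape
    if tag == "circle":
        (radius,) = args
        return math.pi * radius ** 2
    if tag == "rect":
        width, height = args
        return width * height
    raise ValueError(f"unknown shape tag: {tag}")

print(area(("circle", 1.0)))  # pi
print(area(("rect", 3, 4)))   # 12
```

In Yeti the compiler would additionally check that every tag is handled; the Python version only discovers an unknown tag at runtime.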

Buy Our Superior Celluloid Cylinders

M., brandishing new telephone: I find it a bit difficult to actually make phone calls, but it’s great for the internet. No, I really like it. The battery’s hopeless though.

Me: How often do you have to charge it?

M.: About every two days. I thought it was defective at first.

A fun mental exercise is to think of an old product that has been superseded by a newer one, and imagine that their roles are reversed—would you be able to sell anyone the old product as a replacement for the new?

VHS tapes, for example: more intuitive seeking than your old DVD player; no unskippable gubbins at the start; the tape remembers where you’d got up to if you stop and restart; easy to record and re-record on. Very practical!

Awful picture and sound quality though, and much too big. Probably wouldn’t sell all that many, but you’ve at least got the beginnings of a promotional campaign there. You could have a crack at it.

Similarly, DVD looks pretty promising as an improvement over Blu-Ray, being superior in almost every practical detail.

I can imagine trying to flog LP records as an alternative format to digital audio, with quite distinct areas of strength, though I can’t see all that much hope for CDs in between.

Selling your “All-New Feature Phone” as a low-cost, lightweight, miniaturised upgrade for a smartphone would be tricky. Popular new technologies often involve new input methods, and users find it very hard to go back. But if you had to try, you could make a pretty good start by talking about batteries.

Imagine being able to go on holiday for a week or more, and still stay in touch without having to ever worry about finding a charger. That’s what the latest battery management technology exclusive to “Feature Phones” brings you!

The original iPhone reintroduced the sort of comically short battery life familiar to those of us who had mobile phones in 1997 or thereabouts, and since then phones seem to have been going about the same way as laptops did during the 2000s—a series of incremental improvements consumed by incrementally more powerful hardware, meaning we ended the decade with much the same order of magnitude of battery life as we started with.

End of the laptop line

I realised not long ago that, for my purposes, laptop PCs have stopped improving.

It didn’t happen recently: it just took me a long time to notice.  In fact I reckon it happened about five years ago.

My decline and theirs

The first laptop I bought with my own money was a Sony Vaio PCG-R600MX, in 2002. It must have cost about £1400. A lovely hardware design, it had a beautiful case and keyboard and a clear 12.1″ screen, but it was noisy, slow even for its time, battery life wasn’t good, and the screen was only 1024×768.

Still, it was easy to carry, and that’s the first thing I looked for in a laptop because I only used it when on the move. I stuck with similar criteria for years after that, up to a Vaio Z in 2010.

But the way I work has changed during the last five years or so: I lost the desk and desktop PC I had at home when the space was upgraded to a chest of drawers; I do less number-crunching than I used to, and rely less on the power of a desktop machine. I can “get away with” using a laptop more.

I now have most of my data online, so I no longer have any need to carry the same computer between work and home.  And having a family, I travel less.  I haven’t left Europe since 2002, meaning that first Vaio is still the only computer I’ve ever tried to use on a plane.

So I now work on a laptop far more than I used to, but it doesn’t actually have to move about as much.

During the same five years, something bad has happened to laptops.

Screens have got shorter and shorter and gone all shiny. Keyboards have turned flat and featureless. The hardware has got faster, but quite a bit of that is down to solid-state drives—which you can retrofit in any machine. For the former me, an 11″ MacBook Air would have seemed like the ideal machine: to the current me, it starts to look a bit fiddly.

When all this eventually dawned on me, I made a couple of trips to the Queensway computer market and to eBay and discovered that a Thinkpad T60, made in 2007, now costs about £150.

Quadratisch, praktisch, gut

There are machines that do individual things better than the T60, but nothing else I’ve found yet is so consistently nice to use.

The 14″ non-widescreen high resolution display! All those lines of text!  Funny to think this was once commonplace.

A proper bumpy keyboard!  And a good one, if not quite your Sun Type-5.

Of course it’s not fast as such, but it was certainly fast “only” five years ago, and it’s good enough, especially with another 70 quid spent on an SSD, to feel broadly contemporary rather than totally antique.

(Software no longer seems to bloat as rapidly as it used to, either because I’ve been fixed in the same tasks and development environments for too long, or because the increasing proliferation of lower-level general-purpose hardware and the limitations of Moore’s law have moderated other developers’ ambitions.)

Very solidly built; easy to find spare parts and replacement batteries; battery life isn’t bad. The styling is a bit divisive, but it appeals to me.

Finally, the T60 was the last Thinkpad that actually said IBM on it. I’m a sucker for that.

And a hundred and fifty quid!  Just writing it makes me want to go and buy another… although even at that price, I can’t currently afford to. Even so, it puts dramatically into perspective the amount I’ve spent on new hardware over the years.

Is this just because I’m becoming obsolete along with the computers I use? Is it an affectation that I’ll forget all about next time something really shiny turns up? Or is it a symptom of the PC age running out of appealing novelties?