Helvetica is—as every font geek who enjoys a repulsive turn of phrase must agree—one of the most iconic fonts of our time.
Which is a pretty strange thing. Designed in 1957 by two not very famous type designers as a neutral typeface for the Haas foundry in Switzerland, Helvetica might almost have been drawn to Jan Tschichold’s 1928 prescription for the “New Typography”:
Among all the types that are available, the sans-serif is the only one in spiritual accordance with our time… But all the attempts up to now to produce a type for our time are… too artistic, too artificial… to fulfil what we need today.
I believe that no single designer can produce the typeface we need, which must be free from all personal characteristics: it will be the work of a group… For the time being it seems to me that the “jobbing” sans-serifs… are the most suitable for use today.
Helvetica is a basic, functional, not particularly charming typeface. But it has become the Graphic Designer’s Font. It’s currently the best-selling font from fonts.com. There are wallpapers. There’s a film about it (at least, I assume it’s about it—I haven’t seen it, partly because the title is a bit off-putting). There are T-shirts, and I’ve seen people wearing them. It is the system font for iPhone and iPad: the squatting toad that contributes to the nagging sense of gloom that accompanies both devices.
No, I don’t like it a great deal. It’s a font to respect, not to like. Its design is almost perfectly invisible, unless you’re the sort of person made gloomy by almost-invisible fonts. As far as its basic shape goes, it is pretty much unimprovable.
Corporate image, shoddiness, America
No, the reason I write about Helvetica is not because I dislike the font—at least not as much as Alastair Johnston appears to—but because I read something about corporate branding, namely this post by Dustin Curtis. He writes of American Airlines, who recently redesigned their logo from something using Helvetica to something not-using-Helvetica:
American Airlines’ previous visual identity … was a beautiful tribute to modern American design. The simplicity of Helvetica, set in red, white, and blue, and positioned next to an iconic eagle, defined the company with a subtle homage to the country it represents.
After forty-six years, one of the finest corporate brands in history has been reduced to patriotic lipstick.
I’m 40 years old and to me, in my youth, the American Airlines logo represented shoddiness. Crap font, cheesy colours, a bit ungainly, thrown together: the essence of America.
I have a bit more appreciation for it now, but let me explain.
I have only once ridden in a Cadillac, the classic American luxury car. That was fifteen-odd years ago.
It wasn’t very good. It wallowed and pitched along, and although it had every possible feature inside the cabin, details such as switches seemed to have been thrown together from the cheapest bits and bobs to hand. The dashboard looked like an imitation of that from a 1970s Volvo. Actually, it was a car that I can imagine its owners loving—comfy but awkward and homely. But it wasn’t very well done.
American products have long been seen that way around the world. In the UK, “Made in Germany” and “Made in Japan” have been badges of quality for decades, but “Made in America” was a badge applied only to products too embarrassing ever to reach these shores at all.
Helvetica was designed in Switzerland, but it’s now an American font. It’s used mostly by American companies, and has been since American Airlines first adopted it. It suggests the qualities of American products. And the very plainness of the font, the ungainly quality of some letters like the “e”, and the implication from its sheer ubiquity that it probably wasn’t chosen but simply picked from the bucket, all reinforce those qualities.
In other words—a logotype in Helvetica, in two obvious colours, with the word American in it, is a badge of crap.
Hipness good, for once
The bright spot for Helvetica and its associated American corporate baggage is Apple.
To me, Helvetica brings down iOS: it’s a weak point. But to people who don’t give a crap about fonts, but love using their iPhones and iPads, Helvetica will be a subconscious reinforcer of quality.
Perhaps American Airlines—who had several rather cool logos before they got stuck with the venerated Helvetica one—might have been better making this decision a decade ago. Or they might do better now to stay put for a while, just to see.
Next week: “Made in England”, British Leyland, and the shocking legacy of Frutiger in books for children
Spoiler: The answer I gave was not “Mercurial can be understood by human beings”.
Let me know if you spot any mistakes (or just want to flame, of course).
A colleague pointed out that a big problem I didn’t help with at all is how to understand the difference between git and github, which is what people are often really talking about when they talk about sharing code with git. Ironic that we all move to distributed version-control systems and then appoint a single company to run a central server for them.
Following my previous post about functional languages, a suspicious reader asked about the list of prerequisites I gave for a language: purely functional, Hindley-Milner typing, compiling to JVM bytecode, blah blah blah.
Was that list genuine—or was I by any chance just listing the properties of a language I’d stumbled over at random and decided I liked?
The list was in fact real, if tidied up a bit after the fact. I was looking for something like ML, that I could use in a Java-based environment, for fun, and the things I listed roughly describe that combination. (Looking for a specific language reworked “for the JVM” is not such a strange thing to do: there are quite a lot of them.)
There was an outlier in my list of priorities, though, something that I might not have cared about until recently: the REPL.
Read, evaluate, print
A REPL is a fancy name for a very simple thing: the interactive prompt that comes up when you start a language interpreter without giving it the name of a program to run. You type something, and it types the result back at you. If you keep typing, you might build up a whole interpreted program.
ML does have an interactive environment, but I hardly remember using it. It’s more recent experience with Python and Ruby that reminded me just what a nice thing it is. Interactivity makes a big difference when exploring and understanding a new language. I wouldn’t want to start learning a language without it, now.
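Here, written out as a plain Python script, is the flavour of the kind of exploration a REPL makes cheap (at the interactive prompt you would type each line and see the result echoed straight back; the expressions are just illustrative):

```python
# The sort of poking-around an interactive prompt encourages,
# written as a script. At a REPL, each expression's value is
# printed back at you as soon as you hit return.
xs = [3, 1, 2]

print(sorted(xs))        # try a function on some sample data
print(xs)                # check: sorted() didn't modify the original
print("pop" in dir(xs))  # discover what methods a list supports

# Experiment with an unfamiliar module without writing a whole program:
import textwrap
print(textwrap.shorten("a very long sentence indeed", width=15))
```

None of this is worth saving as a program; the point is that each step answers a small question immediately, which is exactly what a compile cycle makes expensive.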
The funny thing is that until I first went to university, most of my programming experience involved some sort of interactive environment. I’d never used a compiler. All my programming (such as it was) had been in interpreted BASIC with occasional bits of raw machine code, mostly on the ZX Spectrum home computer. Spectrum BASIC (you can try it here) was slow and limited, but it had an interactive prompt and a helpful editor that would prevent you even entering a line if it spotted a syntax error.
So what a magical day it was, at university, when first we learned what real programming looks like:
$ cc my_first_program.c
$ ./a.out
Segmentation fault (core dumped)
Things got even better as C++ took off; it’s always been slow to compile, and I spent a decade or so working largely on programs that took over an hour to build. That’s a tricky feedback loop, which encourages you to try to work on multiple independent things for each compile cycle—probably not to the benefit of any of them.
Compiling your code is a pretty strange thing to do, really. Interpreted languages and languages with automatic bytecode compiling and runtime loading have been around for decades. Even if they’re a bit slower, surely the first priority of a language should be to make things easier for the programmer and increase the chances of getting a program that actually works correctly.
So, why do we still compile code so much of the time?
Here are a few guesses.
Separating low-level from high-level concerns is hard
Interpreted and byte-compiled languages can, in principle, be nearly as fast as compiled ones. Bytecode evaluation optimisers can be very good, and would presumably be better still if they had received the effort that has instead gone into optimising compilation to machine code. Alternatively, domains (such as signal processing) that benefit from low-level optimisation might be served by domain-specific languages in which common activities, expressed at a very high level, are interpreted using blocks of optimised low-level code.
But to get these right—particularly at the domain-specific level—you have to do a very good job of understanding the field, the requirements, and the programmers who will be working in the environment. If it’s not good enough, developers will fall back on something they can trust rather than wait for it to improve.
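To make the second idea concrete, here is a minimal Python sketch of the pattern (the function names are invented for illustration): a high-level interpreted layer whose primitive operations hand off to optimised code. Python’s C-implemented built-ins stand in for the optimised kernels here.

```python
# Toy sketch: an interpreted high-level layer composing "kernels".
# In a real system each kernel would be a block of optimised
# low-level code; here Python's built-ins (implemented in C)
# play that role.

def dot(a, b):
    # One high-level operation; the inner loop runs in C via sum/zip.
    return sum(x * y for x, y in zip(a, b))

def convolve(signal, kernel):
    # A typical signal-processing primitive, built from the dot kernel.
    n = len(kernel)
    return [dot(signal[i:i + n], kernel)
            for i in range(len(signal) - n + 1)]

print(convolve([1.0, 2.0, 3.0, 4.0], [0.5, 0.5]))  # simple moving average
```

The interpreted layer is slow, but it only runs once per operation; the inner loops, where the time actually goes, run in optimised code. This is essentially the bargain that array libraries in interpreted languages strike.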
It’s a known quantity
And developers can trust compiled languages, on the whole. Not every task can use an interpreted or domain-specific language: those environments need to be implemented in something, and they need to run on some platform. I guess it’s simpler in the long run to make stable compilers than to implement a multitude of interpreters directly using low-level machine code.
Compiling “finishes” the program
A psychological trick. When you compile something, it’s done. If it seems to work, you can ship it. If you haven’t yet discovered version control, you can copy the binary somewhere and get a treacherous feeling of safety while you mangle, or lose, the source code. In contrast, with an interpreted language, every edit you make seems to risk breaking the whole program.
Humans can’t read machine code
Finally, when you have a compiled binary you can ship it without the source. Thus the whole edifice of proprietary software sale and distribution.
Twenty years ago, as a maths-and-computing undergraduate at the University of Bath, I was introduced to functional programming using the ML language by the excellent Julian Padget. We undergrads were set the traditional assignment of writing a sed-like text processor in ML, and found it first baffling and then, if we were lucky, rather exciting. I got all enthusiastic about functional languages and, like any proper enthusiast, spent a delightful, pointless while trying to design a toy language of my own.
Then I went off and got a job as a C++ programmer.
C++ is a practical, useful language, but it’s also complex, verbose, and baroque, with many different schools of practice. I’ve done a fair bit of work in other languages as well (equally boring ones, like Java) but effectively, I’ve spent much of the past two decades simply continuing to learn C++. That’s a bit crazy.
So when messing about with Android recently, I decided I wanted to try to get some of that old sense of joy back. I went looking for a language with the following properties:
It should be primarily functional rather than object-oriented
It should be strongly-typed, ideally with Hindley-Milner typing (the most exciting thing about ML, for the undergraduate me)
It should have a compiler to JVM bytecode, so I could use it directly in Android apps, and should be able to use Java library code
It should have a REPL, an interactive evaluation prompt for productive messing around
It should be nice to read—obviously it had to be a language I wanted to learn, and I was going to be happily guided by personal whim
It must be simple enough for the old, stupid me to have a chance of getting to grips with it
And, while I wasn’t going to care very much how mainstream it was, it did need to be reasonably complete and reliable.
There are lots of languages out there for the JVM, including quite a few functional ones. Scala and Clojure are the best-known.
Scala (here in Wikipedia) is a multi-paradigm language that, for me, has shades of C++ in that it feels like it’s designed to improve on all sorts of things in Java rather than be something simple of its own. It also looks object-oriented first and functional second; doesn’t prioritise interactive evaluation; and although it has a sophisticated type system, it doesn’t do inference on function parameter types. All in all, it just seemed a bit chunky to me.
Clojure (here in Wikipedia) looks more fun. It has a very vibrant community and seems well-loved. It’s basically a Lisp for the JVM, and I’ve written Lisp before. That’s definitely interactive and functional. But I wasn’t really setting out to find Lisp again.
Having sifted through a few other possibilities, the one that really seemed to fit the bill was Yeti.
Yeti is a functional language with Hindley-Milner type inference, for the JVM, with a relatively simple syntax and interoperability with Java, that has an interactive REPL up front. (See the snappy tutorial.) It seems to be basically the work of one programmer, but a programmer with taste.
The syntax of Yeti looks a bit like the way I remember ML—although on reviewing ML, it turned out not to be as similar as I’d thought. Functions are defined and applied with just about the simplest possible syntax, and the language deduces the types of all values except Java objects. The lack of commas in function application syntax makes it obvious how to do partial application, which is a straightforward way to obtain closures (functions with context).
Here’s a trivial function, an application of it, and a partial application. The lines starting > are my typing, and the others are type deductions returned by the evaluation environment. Comments start //, as in Java.
> f a b = a + b // f is a function taking two arguments
f is number -> number -> number = <code$f>
> f 2 3 // apply f to two numbers
5 is number
> f 2 // apply f to one number, returning a new function
<yeti.lang.Fun2_> is number -> number
> g = f 2 // assign that function to g
g is number -> number = <yeti.lang.Fun2_>
> g 3 // apply g to the second number
5 is number
So far, so academic toy language. But the more I apply Yeti to practical problems, the more I find it does work as a practical language.
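For comparison, here is roughly the same session rendered in Python, where partial application has to be spelled out explicitly via the standard library rather than falling out of the call syntax:

```python
from functools import partial

def f(a, b):
    return a + b

print(f(2, 3))     # 5

g = partial(f, 2)  # explicitly fix the first argument
print(g(3))        # 5

# Or equivalently, with a closure capturing its context:
def make_adder(a):
    return lambda b: a + b

print(make_adder(2)(3))  # 5
```

In Yeti the partial application is just `f 2`; there is no special mechanism to reach for, which is part of what makes the style feel natural there.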
What is challenging, of course, is that every language or language family has its set of idioms for handling everyday problems, and on the whole I simply don’t know those idioms yet in a functional language. This is the first time I’ve really tried to do anything serious with one. I know the language, roughly, but I don’t really speak the language. I’m still leafing through the phrasebook. My hovercraft is still full of eels.
With most of my incredibly stupid questions on the Yeti mailing list—which get very patient responses, but I really do need to cut back on the late-night stupidity and start reviewing my code in the morning instead—the answer turns out to be, “it’s simpler than I thought”. And I’m really enjoying it.
Why type inference?
A footnote. Why did I want a language with type inference?
And I’m a bit nostalgic about those undergrad days, no doubt.
But also, I’m increasingly mistrustful of my own work. In languages such as Python and Objective-C the concept of duck typing is widely used. This essentially means employing objects on the basis of their supported methods rather than their nominal class (“if it walks like a duck…”). This relaxed approach reaches a bit of a pinnacle in Ruby on Rails, which I’ve been working with a bit recently—and I find the magic and the assumptions go a bit far for me. I like to have some of the reassurance of type checking.
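As a sketch of what duck typing looks like in practice (the class names here are invented for illustration):

```python
# Duck typing: these classes share no base class, but both can be
# passed to greet() because each provides a speak() method.
class Duck:
    def speak(self):
        return "quack"

class Robot:
    def speak(self):
        return "beep"

def greet(thing):
    # We never check thing's class, only that it responds to speak().
    return "it says " + thing.speak()

print(greet(Duck()))   # it says quack
print(greet(Robot()))  # it says beep
```

Python only discovers a missing `speak()` at runtime, when the call fails; an ML-family compiler with type inference would check the same property statically, before the program ever runs.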
So, type inference gives you—in theory—the best of both worlds. You get to write your code using duck-typing principles, and the compiler proof-reads for you and checks that your types really do work out.
That’s the theory. Does it scale? Not sure. And if it were so great, wouldn’t it have become more mainstream during the past 30 years? Some moderately widely-used languages, like Haskell, use it, but they’re still only moderately widely used. So, we’ll see.
There are some obvious immediate disadvantages to type inference. Long error messages, for a start.
And as a C++ guy I somewhat miss function overloading and even (sometimes) operator overloading. A function argument can take more than one type, of course—that’s the whole point—but only if the types can be unified in terms of their properties; you can’t just reuse a function name for a second function that has a similar effect but works on unrelated types.
Most of the situations in which I want function overloading can be handled instead using variant types or module namespaces, both of which work well in Yeti, but sometimes it seems to come down to inventing slightly more awkward function names than I’d really like.
My Dad asked me recently what sort of computer he should buy to replace his ten-year-old HP laptop. And what sort of phone should he get to replace his old Nokia? And while I was at it, should he get one of those tablet things?
There are a lot of possible options at the moment, because all kinds of devices from smartphones to traditional PCs have become broadly capable of doing the same work, and because a whole raft of new Windows 8 laptops and convertibles have just arrived to clutter up the shelves.
Therefore I’d suggest mostly ignoring the nominal capability and specs of any device, and considering instead how it feels to hold and operate and what ecosystem it is part of.
Let me explain, and then give some more concrete advice.
The ecosystem
This slightly absurd term describes a set of services and systems that work together, many of which are likely to have been provided by the company that made the device’s operating system.
Increasingly, when you buy a device, you are making a decision to participate in its maker’s ecosystem: it will make your life easiest if you are prepared to use backup, file and photo sharing, music download, email, mapping, browsing, app installation, and other services all from the same supplier.
For example, if you buy an Android device, you’ll be most content if you also use Google mail, maps, marketplace, etc. Buy a Mac or an iPhone, and you’ll have the happiest time if you use Apple services wherever they exist. Windows 8 and Windows Phone expect you to have a Microsoft account and to use it. If you have two devices, say a laptop and a phone, they’ll get on best if they’re both within the same ecosystem as well.
You can make a conscious decision to mix and match—I do that myself, somewhat, because it pains me to side with any one megacorporation more than I have to—but it can be heavy going. If the idea of understanding what you’re doing and why you’re doing it appeals to you more than having an easy life, then install Linux and subscribe to no single ecosystem; I’ll be happy to help out. But I’m guessing you don’t really want to do that.
So no, the usual thing seems to be to decide which company you dislike least, then let that one have your credit card details and as much goodwill as you can muster. And that means picking one of: Apple (with OS X and iPhone/iPad), Google (with Android), or Microsoft (with Windows and Windows Phone).
Modern computing devices, from smartphones to PCs, are increasingly touch-driven (either through a multi-touch touchpad or a touchscreen), portable, and versatile. The way you hold and interact with them does matter.
I’d strongly suggest you start by trying out the best devices you can find from each ecosystem, hands-on, either by borrowing from a friend or in a very relaxed shop. Decide which one you enjoy the basic interactions with the most.
If the design, interaction and animation (and materials and heft, for specific devices) please you every time you pick it up, you’re probably going to be happy with it. If they annoy you, you’re not. If it’s ugly and inconvenient now, it’ll be ugly and inconvenient in five years’ time.
These are the things you can buy at the moment.
Laptops you know. They run either Windows (if PCs) or OS X (if Macs). Some of the Windows 8 ones now have touchscreens, but not all of them (and nor do any of the Macs).
Tablets such as Apple’s iPad, the Google/Samsung Nexus 10, or the Microsoft Surface are slatelike touchscreen devices in which a separate keyboard is strictly optional (there is a “virtual” one on the screen). They typically run one program at a time, full-screen, rather than having multiple separate windows side by side, and the programs are redesigned for touch rather than mouse operation (the buttons are bigger and they have fewer menus, for example). All software is installed from a central “app store” run by the operating system manufacturer.
Smartphones are small tablets that can make phone calls. Most mobile phones nowadays are smartphones.
Things to bear in mind
A modern smartphone is a computer. It can do practically anything, but it’s sometimes fiddly because of the small size, and it has amazingly awful battery life compared with a classic mobile phone—be prepared to charge it every day. If you buy a nice new phone and make use of it as a handheld computer, you’ll probably find you use your laptop less.
Tablets overlap with both smartphones and laptops. If you have a smartphone, the laptop or tablet is likely to take jobs like “reading long documents, and doing anything that needs a lot of typing”. Don’t buy both a tablet and a laptop; just make sure whatever you get has a good clear screen and that you can stand it up on a desk and type with it.
Proper keyboards are available for every kind of tablet: you can always get something you can either plug in or attach wirelessly. But convertible tablets (with a keyboard stand included, like the Asus Transformer) are nice too. They’re very like laptops to use and can be folded up and packed away the same way, but you can also pull off the screen and sit on the sofa with it. Most run Android.
There are also small tablets. While the iPad, Nexus 10, Transformer series, and Surface are in the 10-11″ diagonal range, there are also several in the 7-8″ range, like the iPad Mini or Nexus 7. The small ones are natty and better for carrying around, but they’re less good for sofa-surfing and can’t really replace a laptop.
If you’re buying an Android device, look for Android 4 or newer and get a Google Nexus if you can. They sell a phone (the Nexus 4), a small tablet (Nexus 7) and a big tablet (Nexus 10) and they’re all pretty good. Being Google’s “own” devices, they have good compatibility and more updates. You can’t generally get them through mobile network contracts though.
If you’re buying a Windows 8 laptop, get one with a touchscreen. Windows 8 makes very little sense without a touchscreen. You can still use a mouse as well.
Windows 8 is extra-confusing because of the existence of both Windows 8 and “Windows RT”. These are essentially the same, except that Windows RT can’t run any “legacy” Windows software apart from Microsoft Office: it only runs touch-optimised full-screen apps from the Windows app store, of which there are not all that many available yet. Windows RT is found on tablets and some laptops. It’s a perfectly capable operating system, but there’s a big risk of disappointment if you want to run arbitrary Windows applications from around the internet and discover too late that you can’t.
So the range of applications available matters, but it’s not the be-all and end-all. Off the top of my head: Apple’s iPhone has the most apps, then Android phones, then the iPad, then desktop operating systems (Windows, OS X), then Android tablets, and in last place Windows Phone and Windows RT. Numerically the difference from first to last is pretty big, but it can be oversold: in practice you won’t find many things you can’t do, nor run out of new stuff to try out, on any of them.
You can safely ignore any review in which the star rating appears to be correlated to how fast the computer’s processor is. That’s practically irrelevant nowadays. Do test how smoothly the screen scrolls and zooms though.
Don’t forget to check whether you use any software that absolutely must continue to run on whatever you replace your laptop with. In most cases, all you need is software that does the same sort of thing (it doesn’t have to be exactly the same software) but you don’t want to get caught out if there’s anything specific you rely on.
The whole mobile-network contract business is an extra layer or three of bafflement that I can’t really help with. I generally buy hardware unsubsidised and stick a pay-as-you-go SIM in it.
Give each of the ecosystem contenders a test run, and then, from the options below, pick the phrase you most agree with and read that bit!
(Although by the time you’ve given each of them a test run, you may well already know what you want. That would be a good outcome.)
I’m totally ignoring price here, although sadly the most interesting options almost always turn out rather expensive.
“I really like the way the Apple things work” Well, that was easy. If you’re dead set on having a laptop or you want as much flexibility and control as possible, then you want a MacBook Air (probably the 13″ size, although the keyboard is just as titchy as the one in the 11″). Otherwise, get an iPad and forget about the laptop. Either way, buy the laptop or tablet first, then think about phones (the phone to get is obviously an iPhone, it’s just a question of which one and that basically comes down to price).
“Windows 8 and Windows Phone appeal to me, and I don’t think of Microsoft as an objectionable enemy” You’d probably find a Windows Phone 8 phone (any one, though the Nokia Lumia 920 has the most lovely screen) and a touchscreen Windows 8 laptop a good combination. Look at the Lenovo Yoga 13, which is a fine laptop that I predict will sell half-a-dozen at best because of the weird way it’s being displayed on a stand in the shops (the screen flips back to make it resemble a large and heavy tablet, but it’s really a laptop). Or consider the Samsung Series 5 Touch laptop or possibly the ATIV SmartPC convertible. Although Microsoft’s Surface RT is a beautiful object that I’d like to recommend, it isn’t yet quite the laptop replacement it thinks it is. There’s a Pro version due out in a few weeks that might be worth a look, though.
“I use a few Google services already, and I’ve tried at least one Android device I thought was nice to use” An Android tablet convertible like the Asus Transformer series can in principle replace a laptop quite well. Try one out, but if you’re thinking “hm, maybe Android might work” it’s probably cheaper to give it a go with a phone first. Google’s Nexus 4 is the obvious choice if you can find one.
“Those touchscreen laptops and tablets are all a bit small, I like my bigger PC” There are some reasonable touchscreen laptops with somewhat larger screens, including several from HP like the Envy TouchSmart 14. I hesitate to recommend one because I’ve actually never seriously used Windows 8 with a touchscreen on a larger screen. It might be a bit tiring. Do try it though.
“This is still all too complicated” Then stick with what you’ve got. The new Windows 8 machines have only just come out, and everything will look a bit simpler in six months’ time when the disasters have subsided and the new-fangled things have got cheaper.
What would I do?
If: money was no object; I had no corporate loyalty and lacked my affection for open Unix-type systems; I wanted to be able to do anything except programming; I didn’t have a laptop, tablet or smartphone already; and I didn’t mind if my phone was too big to fit in a small pocket… I’d buy a Nokia Lumia 920 and a Lenovo Yoga 13.
That’s because I like the Windows 8 look and feel, the different Windows 8 devices work well together, and both of these are attractive well-made objects that are a pleasure to use. I’d pick the Nokia over the otherwise excellent HTC 8X because of its better screen and camera and the inclusion of Nokia maps with navigation.
But in real life, I couldn’t afford that. If I wanted to keep the price down a bit and avoid being too locked in to any one ecosystem, I’d look at a Samsung Series 5 touchscreen laptop and a second-hand unlocked Google Nexus S phone from eBay. But I would go and have a play with a Surface RT tablet in John Lewis first, just in case. It’s a nicer physical object, for all its limitations.
And if money was the object—if it was the main thing that mattered, but the other conditions were the same—I might buy the entry-level full-size iPad and nothing else. It’s much cheaper than a touchscreen laptop and has a lot of software. I don’t really go for the visual design, but it’s cheaper than the alternatives I do really like, the basic interaction and feel are fine, and having all those apps available counts for a lot.
Of course, being a typical human creature I’d really do none of the above. I’d just buy whatever I happened to like the look of on the day and rationalise it afterwards. I trust you’ll do the same!