Linux

I write plenty of tedious posts about computers and technology, and I usually tag them according to what they’re about.

As I write this, the tag cloud for this blog looks like the picture on the left—Apple and Microsoft loom large, Nokia and Oracle get a look in, and there’s no reference to Linux at all.

But Linux is the main operating system I use, and it has been for the last 15 years or more.

I never write about it because, like the boy in the German children’s joke (the one who never spoke because, until then, everything had been satisfactory), I’m content with it. I write about things that fascinate me, and there’s nothing less fascinating than a system that does what you expect it to, again and again.

Although I also develop software for Windows, OS X, and Android among others, Linux is my home platform.

I like that it gives me a sense of independence from any particular platform I might deploy to. I like that it allows me to make my own decisions about the type of desktop environment I choose. (Never trust an OS that won’t allow you to change the system font.) I like the transparency of the development environment, and I appreciate being given the opportunity to find out how anything in it works—even though I don’t take as much advantage as I might.

So far, using mostly Linux has been a fine way to observe developments in other operating systems, from just enough distance not to get too caught up in any one of them.

A touch of froth

A jolt, though, comes with the arrival of touch interfaces. I’m not the only one to be surprised to find how pleasant a touch screen is to use with a laptop. For me, Apple had it wrong: though familiarity means I still prefer a mouse for detail work, I’d rather have a touch screen than a trackpad.

Maybe I just haven’t used touch screens enough to become really fatigued. But I wonder whether the research into touch-screen fatigue might not have underestimated how fatiguing the crabbing action of using a touchpad is. I don’t think “I’ve been waiting all my life for this touch screen”; I think “thank goodness I don’t have to use the touchpad”.

I’ve heard it remarked that innovative input devices interest consumers in a way that novel output devices seldom do. There are many examples of new input devices becoming mainstream, sometimes in wildly popular ways: the joystick, the mouse, the D-pad, the touchpad, gaming controllers with accelerometers and gyroscopes, computer-vision devices (the Kinect), and so on. Meanwhile various innovations in output (such as 3D and very high-resolution screens) have appeared repeatedly and been largely ignored—unless they came in packages that were attractive for other reasons, such as the LCD panel with its slender physical dimensions.

So, over the years I’ve taken quite good advantage of the ability to pick and choose my desktop interface on Linux. Like all self-regarding programmers, I’ve used my own window manager. I used KDE, until I switched away when KDE 4 arrived; then GNOME, until I switched away when GNOME 3 arrived. Right now I’m using Xfce 4. But it’s not at all touch-friendly, and nor are any of the applications I use. Touch has so far completely passed me by.

In short, then, those Ubuntu and GNOME people that I’ve probably been rather rude about might have had a point. There was some reason to be piddling about with the basics of the user interface after all. I need to start finding out whether Linux, other than Android, can work well in the touch-screen world.

Windows Phone: a bit like BeOS

Today’s possibly stretching-a-point Technology Analogy

In a previous article I compared the situation of Windows 8 on the desktop to that of OS/2 in the late 80s.

Windows Phone 8 is in a different position. While Windows 8 gets its awkwardness from the need to provide compatibility with the dominant platform—which in this case means earlier versions of Windows—the dominant platforms competing with Windows Phone are iOS and Android. And it’s totally incompatible with both.

So, why choose Windows Phone? Not because it has greater capabilities, all in all, than its competition. It doesn’t have any very significant platform-exclusive applications. It isn’t any more open (in either a useful or fun kind of way). There are two reasons you might choose it: a preference for its interaction design, or integration with some networked services.

BeOS is an operating system dating from the mid-90s, developed, according to Wikipedia, “on the principles of clarity and a clean, uncluttered design”. (Sound familiar?) It was pretty to look at and nice to use. It had decent networking support and made good use of the hardware available to it.

But it was always going to have niche appeal. By the time of its release, Windows 95 was dominant and generally tolerated by mass-market users, while Unix-based operating systems like Linux, FreeBSD, and NeXTSTEP were working their way down from higher-end workstations with hacker appeal. BeOS was incompatible, no cheaper, no more open, and ultimately more limited by lack of useful applications. It remains a likeable curio.


Can you develop research software on an iPad?

I’ve just written up a blog article for the Software Sustainability Institute about research software development in a “post-PC” world. (Also available on my project’s own site.)

Apart from using the terms “post-PC”, “touch tablet”, “app store”, and “cloud” a disgracefully large number of times, this article sets out a problem that’s been puzzling me for a while.

We’re increasingly trying to use, for everyday computing, devices that are locked down to very limited software distribution channels. They’re locked down to a degree that would have seemed incomprehensible to many developers ten or twenty years ago. Over time, these devices are more and more going to replace PCs as the public idea of what represents a normal computer. As this happens, where will we find scientific software development and the ideals of open publication and software reuse?

I recognise that not all “post-PC” devices (there we go again) have the same terms for software distribution, and that Android in particular is more open than others. (A commenter on Twitter has already pointed out another advantage of Google’s store that I had overlooked in the article.) The “openness” of Android has been widely criticised, but I do believe that its openness in this respect matters.

Perhaps the answer, then—at least the principled answer—to the question of how to use these devices in research software development is: bin the iPad; use something more open.

But I didn’t set out to make that point, except by implication, because I’m afraid it simply won’t persuade many people. In the audio and music field I work in, Apple already provide the predominant platform across all sizes of device. If there’s one thing I do believe about this technology cycle, it’s that people choose their platform first based on appeal and evident convenience, and then afterwards wonder what else they can do with it. And that’s not wrong. The trick is how to ensure that it is possible to do something with it, and preferably something that can be shared, published, and reused. How to subvert the system, in the name of science.

Any interesting schemes out there?