Console games and local multiplayer

We just got a PlayStation 4, and have been a bit disappointed by the lack of good local multiplayer support in the games we’ve tried so far. I reckon every console game should support local multiplayer if it can: after all, that’s the main thing that makes a console different from a PC.

(To be honest it pains me even having to write “local” multiplayer. I think of it as just multiplayer. The idea that players could exist elsewhere on the internet is flying in the face of nature.)

We got two games bundled with our PS4, Overwatch and Ratchet & Clank, both of which are neat games but neither of which supports local multiplayer at all. In both cases this is naively a bit of a letdown. Overwatch is a team game, exactly the sort of thing you want to play while sitting around with your friends (but you can’t! sorry), and Ratchet & Clank is a game with two protagonists that you might hope to be able to control independently in the style of the Lego adventures (but oops! sorry again).

It’s possible we just made a bad choice of bundled games, as there were other options. I don’t think any of the others were any better, but it’s not easy to tell from reading around, because summaries of the games online often don’t talk about this. (Does Uncharted 4 have local multiplayer? Does the PS4 version of DOOM? Though that’d be a bit wrong: DOOM is supposed to be played alone, jumpy and sweating and with the lights off.)

We’re on the lookout for possibilities. I’m sure there must be plenty; they’re just perhaps not the most promoted titles, maybe because their prime audience is not game journalists. So far we have Rocket League and the evergreen FIFA, both of which are pretty nice. I’m fairly clueless about the PlayStation landscape, having a general affinity for Nintendo. Let me know if you have any more suggestions.


F♯ has possibilities

A couple of months ago, Microsoft announced that they were buying a company called Xamarin, co-founded by the admirable Miguel “you can now flame me, I am full of love” de Icaza. (No sarcasm — I think Miguel is terrific, and the delightfully positive email linked above really stuck with me; if only I could have that attitude more often.)

As I understand it, Xamarin makes

  1. the Mono runtime, a portable third-party implementation of Microsoft’s .NET runtime for the C# and F# programming languages
  2. the eponymous Xamarin frameworks, which can be used with .NET to develop mobile apps for iOS and Android
  3. plugins for the Visual Studio IDE on Windows and the MonoDevelop IDE on OS X to support mobile platform builds using Xamarin (the MonoDevelop-plus-plugins combo is known as Xamarin Studio).

Then a couple of days ago, the newly-acquired Xamarin declared

  1. that the Mono runtime was switching from LGPL/GPL licenses to MIT, allowing no-cost use in commercial applications
  2. that Microsoft were providing a patent promise (which I have not closely read) to remove concerns for commercial users of Mono
  3. that the Xamarin frameworks for iOS and Android, and the IDE plugins, were now free (of cost)
  4. that at some future point the Xamarin frameworks would be open sourced

I’m trying to unpick exactly what this could mean to me.

According to this discussion on Hacker News, the IDE plugins will remain proprietary (which appears to mean that no IDE will be supported on Linux, since the plugins are not currently available for Linux), but “the Xamarin runtime and all the commandline tools you need to build apps” will be open sourced.

What this means

As I understand it:

  • Developers working on proprietary .NET applications will be able to build and release versions for other platforms than Windows, using Mono, at no extra cost
  • Developers working on open source .NET applications will be able to publish the ensemble with Mono under the MIT license if desired and will (apparently) be free of patent concerns
  • Developers will be able to make both proprietary and open source .NET applications for iOS and Android at no cost using Windows and OS X
  • There is a possibility of being able to do builds of the above using Linux as well once the SDK is open, though probably without an IDE

Unrelatedly, there are separate projects afoot to provide native-code and to-JavaScript compilers for .NET bytecode.

What I’m interested in

I do a range of programming including a mixture of signal-processing and UI work, and am interested in exploring comprehensible, straightforward functional languages in the ML family (I wrote a little post about that here). Unlike many audio developers I have relatively limited demands on real-time response, but everything I write really wants to be cross-platform, because I’ve got specialised users on pretty much every common platform and I have limited time and funding. (I understand that cross-platform apps are often inferior to single-platform apps, but they’re better than no apps.)

Xamarin doesn’t quite meet my expectations because it’s not really a cross-platform framework in the manner of Qt (which I use) or JUCE (which is widely used by others in my field). Instead of providing a common “widget set” across all platforms, Xamarin provides a separate thin interface to the native UI logic for each platform. It’s hard to judge how much more work this is, without knowing where the abstraction boundaries lie, but it may be a more relevant and sensible distinction on mobile platforms (where the differences are often in interaction and layout) than desktops (where the differences are mostly about how large numbers of individual widgets look).

An ideal combination of language and framework for me goes something like

  • strongly-typed, mostly functional, mostly immutable data structures
  • efficient unboxed support for floating-point vector types, including SIMD support
  • simple syntax (SML is nice)
  • low-cost foreign-function interface for C integration
  • high-level approach to multithreading
  • can work with gross UI layout in HTML5 (possibly DOM-update reactive UI style?)
  • good libraries for e.g. audio file I/O, signal processing, matrix algebra
  • can develop on Linux and deploy to all of Linux, Windows, OS X, iOS, Android
  • free (or cheap, for proprietary apps) and open source (for open source apps)
  • has indenting Emacs mode

Where F# appears to score

F#, Microsoft’s ML-derived functional language for the .NET CLR, hits several of these. It has the typing, mostly-functional style, syntax, FFI, multithreading, libraries, deployment and licensing, and potentially the development platform (if the open source Xamarin framework should lead to the ability to build mobile apps directly from Linux).

I’m not sure about floating-point and vectors or about reusable HTML-style UI. I’d like to make the time to do another comparison of some ML-family languages, focusing on DSP-style float activity and on threading. I’ve done a bit of related work in Standard ML, which I could use as a basis for comparison.

Unless and until I get to do that, I’d love to hear any thoughts about F# as a general-purpose DSP-and-UI language, for a developer whose home platform is Linux.

My impression from the feedback on my earlier post was that the F# community is both enthusiastic and polite, and I notice that F# is the third most-loved language in Stack Overflow’s 2016 survey. Imagine a language that is useful no matter what platform you’re targeting, and whose developers love it. I can hope.


Fold: at the limit of comprehension

“Fold” is a programming concept, a common name for a particular higher-order function that is widely used in functional programming languages. It’s a fairly simple thing, but in practice I think of it as representing the outer limit of concepts a normal programmer can reasonably be expected to grasp in day-to-day work.

What is fold? Fold is an elementary function for situations where you need to keep a tally of things. If you have a list of numbers and you want to tally them up in some way, for example to add them together, fold will do that.

Fold is also good at transforming sequences of things, and it can be used to reverse a list or modify each element of a sequence.

Fold is a useful fundamental function, and it’s widely used. I like using it. I just scanned about 440,000 lines of code (my own and other people’s) in ML-family languages and found about 14,000 that either called or defined a fold function.

Let me try to describe fold more precisely in English: It acts upon some sort of iterable object or container. It takes another function as an argument, one that the caller provides, and it calls that function repeatedly, providing it with one of the elements of the container each time, in order, as well as some sort of accumulator value. That function is expected to return an updated version of the accumulator each time it’s called, and that updated version gets passed in to the next call. Having called that function for every element, fold then returns the final value of the accumulator.

I tried, but I think that’s quite hard to follow. Examples are easier. Let’s add a list of numbers in Standard ML, by folding with the “+” function and an accumulator that starts at zero.

> val numbers = [1,2,3,4,5];
val numbers = [1, 2, 3, 4, 5]: int list
> foldl (op+) 0 numbers;
val it = 15: int
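The same description can be put more concretely in code. Here is a minimal hand-rolled sketch of a left fold in Python, just for illustration (Python’s built-in equivalent is functools.reduce):

```python
def foldl(f, acc, xs):
    """Call f(acc, x) for each element x of xs in order, threading the
    updated accumulator through each call, and return the final
    accumulator."""
    for x in xs:
        acc = f(acc, x)
    return acc

# Adding a list of numbers, as in the SML example above:
print(foldl(lambda acc, x: acc + x, 0, [1, 2, 3, 4, 5]))  # → 15

# The same function can also transform sequences, e.g. reverse a list:
print(foldl(lambda acc, x: [x] + acc, [], [1, 2, 3]))  # → [3, 2, 1]
```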

What’s difficult about fold?

  1. Fold is conceptually tricky because it’s such a general higher-order function. It captures a simple procedure that is common to a lot of actions that we are used to thinking of as distinct. For example, it can be used to add up a list of numbers, reverse a list of strings, increase all of the numbers in a sequence, calculate a ranking score for the set of webpages containing a search term, etc. These aren’t things that we habitually think of as similar actions, other than that they happen to involve a list or set of something. Especially, we aren’t used to giving a name to the general procedure involved and then treating individual activities of that type as specialisations of it. This is often a problem with higher-order functions (and let’s not go into monads).
  2. Fold is syntactically tricky, and its function type is confusing because there is no obvious logic determining either the order of arguments given to fold or the order of arguments accepted by the function you pass to it. I must have written hundreds of calls to fold, but I still hesitate each time to recall which order the arguments go in. Not surprising, since the argument order for the callback function differs between different languages’ libraries: some take the accumulator first and value second, others the other way around.
  3. Fold has several different names (some languages and libraries call it reduce, or inject) and none of them suggests any common English word for any of the actions it is actually used for. I suppose that’s because of point 1: we don’t name the general procedure. Fold is perhaps a marginally worse name than reduce or inject, but it’s still probably the most common.
  4. There’s more than one version of fold. Verity Stob cheekily asks “Do you fold to left or to the right? Do not provide too much information.” Left and right fold differ in the order in which they iterate through the container, so they usually produce different results, but there can also be profound differences between them in terms of performance and computability, especially when using lazy evaluation. This means you probably do have to know which is which. (See footnote below.)
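To make the left/right distinction concrete, here are eager Python sketches of both directions (these capture only the ordering, not the lazy-evaluation and computability differences mentioned above). With a non-associative operator like subtraction the two give different answers:

```python
def foldl(f, acc, xs):
    # Left fold: combine from the front, ((acc ⊕ x1) ⊕ x2) ⊕ x3 ...
    for x in xs:
        acc = f(acc, x)
    return acc

def foldr(f, acc, xs):
    # Right fold: combine from the back, x1 ⊕ (x2 ⊕ (x3 ⊕ acc)) ...
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

print(foldl(lambda acc, x: acc - x, 0, [1, 2, 3]))  # ((0-1)-2)-3 = -6
print(foldr(lambda x, acc: x - acc, 0, [1, 2, 3]))  # 1-(2-(3-0)) = 2
```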

A post about fold by James Hague a few years ago asked, “Is the difficulty many programmers have in grasping functional programming inherent in the basic concept of non-destructively operating on values, or is it in the popular abstractions that have been built-up to describe functional programming?” In this case I think it’s both. Fold is a good example of syntax failing us, and I think it’s also inherently a difficult abstraction to recognise (i.e. to spot the function application common to each activity). Fold is a fundamental operation in much of functional programming, but it doesn’t really feel like one because the abstraction is not comfortable. But besides that, many of the things fold is useful for are things that we would usually visualise in destructive terms: update the tally, push something onto the front of the list.

In Python, the fold function (which Python calls reduce) was dropped from the built-in functions and moved into the separate functools module in Python 3. Guido van Rossum wrote, “apart from a few examples involving + or *, almost every time I see a reduce() call with a non-trivial function argument, I need to grab pen and paper to diagram what’s actually being fed into that function before I understand what the reduce() is supposed to do.” Instead, the Python style for these activities usually involves destructively updating the accumulator.
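For reference, the Python 3 arrangement looks like this: reduce must now be imported from functools, and the explicit loop that destructively updates a tally is the idiomatic alternative.

```python
from functools import reduce

# The simple arithmetic cases still read fine:
print(reduce(lambda acc, x: acc + x, [1, 2, 3, 4, 5], 0))  # → 15

# The idiomatic Python alternative: an explicit loop that
# destructively updates the accumulator.
tally = 0
for n in [1, 2, 3, 4, 5]:
    tally += n
print(tally)  # → 15
```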

Functional programming will surely never be really mainstream so long as fold appears in basic tutorials for it. In practice, though, because it’s such a general function, it can often be usefully hidden behind a more discoverable domain-specific API.


(Footnote. You can tell whether an implementation of fold is a left or right fold by applying it to the list “cons” function, which is often called “::”. If this reverses a list passed to it, you have a left fold. For example, the language Yeti has a function simply called fold; which is it? —

> fold (flip (::)) [] [1,2,3,4];
[4,3,2,1] is list<number>

So it’s a left fold.)


Zero-based indexing

The excellent Greg Wilson, founder of Software Carpentry, tweeted a link the other day to a 2013 blog post by Mike Hoye.

I didn’t comment on this article when it first appeared because I didn’t have the nerve to confront its author, who was shouting down everyone who tried to discuss it in the comments. But I can’t bear to see this article promoted again, and by a good authority.

The article claims that the reason most of today’s programming languages use zero-based indexing (i.e. they count array indexes from 0, so that arr[0] is the first element of array arr, rather than arr[1]) is because it saved a tiny amount of compile time (not run time), and that this mattered because on a specific IBM mainframe hosted by MIT in the 70s there was a danger that a job taking too long to compile might be bumped in order to make way for a program to calculate handicap points for yacht racing.

This is a pretty implausible suggestion, so it needs some pretty good evidence. That isn’t there. The article has some very nice sources, but the quotes from them just don’t support the proposition they’re being asked to support. The main quotes, from Martin Richards and Tom Van Vleck, both appear to say nearly the opposite of the things they’re described as saying. There’s plenty of room for nuance in interpreting what people say, but the author accepts no nuance in anyone’s responses to the article, choosing instead to mock and ridicule anyone who doesn’t agree with him. There’s no citation for the one thing that is necessary to make the argument hold together (that indexes were calculated at compile time rather than run time). Reading this article carefully, the only conclusion I can draw is that the choice of 0-based indexing almost certainly has nothing to do with yachts.

I don’t mind a great deal whether a programming language uses 0-based or 1-based indexing. This matters to me because the article is not just a screed with a funny story in it, but a call for rigour in understanding the history of programming languages, something I do care about and that its author appears to take very seriously indeed. Its general principle is really sound — we get used to a lot of arbitrary aspects of languages and then explain them as the mythology of the elders, rather than finding the actual reasons. But this article only added to the mythology, and people who know better are now citing it as if it had been established to be true, which it almost certainly isn’t.

(I feel really bad just writing this. It’s quite possible the author is regretting ever getting involved in this stupid topic but has too much integrity to take down or edit the post. I wish I had never been reminded of how maddening I found it.)



Here’s a playlist of good David Bowie songs that I had never heard until after he died last week.

Spotify playlist
YouTube links:
Dead Against It (1993)
Up The Hill Backwards (1980)
Move On (1979)
Dancing With The Big Boys (1984) (with Iggy Pop)
I Would Be Your Slave (2002)
Girl Loves Me (2016)
You’ve Been Around [Dangers 12″] (1993) (Jack Dangers remix)
Nite Flights (1993) (Scott Walker cover)
No Control (1995)
Bring Me The Disco King (2003)
I’m Deranged (1995)
5:15 The Angels Have Gone (2002)

Most of these came out after the peak of his popularity, but they aren’t obscure at all — I was just never a fan.

The first Bowie songs I remember hearing on the radio were Modern Love and Let’s Dance, both released in 1983 when I was eleven. I thought those two were fine, though they weren’t the sort of thing I liked to think I was into at the time. (I had a big Motörhead war-pig patch on my little denim jacket. Lemmy’s death was also the end of an era.)

A few years later, a cousin introduced me to some of the Spiders from Mars period songs like Rebel Rebel and Suffragette City. I was a bit puzzled because I thought I knew Bowie as a smooth, modern 80s-sounding chap. But I didn’t get the appeal either: too much like everything else from the early 70s. Rebel Rebel even sounded like the Stones, which was definitely my dad’s music.

Back in the real timeline of the 80s, Bowie was releasing Never Let Me Down, an album seen everywhere (one of several awful record covers) but seldom played, then launching the drearily adult Tin Machine.

His next album, Black Tie White Noise, didn’t come out until 1993, when I was briefly in Berlin as a student and mostly listening to industrial music and obscure things I read about in Usenet groups. If I had been aware that David Bowie had an album out, I would certainly have ignored it. By the time of 1997’s Earthling, a jungle-influenced album released a whole two years after peak jungle with a dreadful Cool Britannia cover, it felt socially impossible to admit to liking a Bowie song ever again. And that was pretty much the end of that.

There’s been a David Bowie album, collaboration, tour, or retrospective for almost every year of my life, and I’ve never taken more than a passing interest in any of them.

I was taken by surprise, then, by how emotional I felt about his death.


What did eventually make me notice David Bowie as a figure was the connection with Iggy Pop. I think Iggy is brilliant, and I’d been a bit of a fan for a while before I eventually twigged what it was that his most interesting stuff had in common. That made me aware of the famously dramatic and productive spell for those two in Berlin in the late 70s (the only albums of Bowie’s that I ever actually bought are from this period) and also an opening to a bit of a web of interesting collaborations and influences.

(Going back during the past week and filling in a lot of the songs of Bowie’s that I’ve missed during the last few decades, it’s been particularly fun to hear Iggy Pop numbers, er, pop up all over the place. China Girl — always an Iggy song to me — is well known, but there are at least three other albums that recycle songs previously recorded by him, including a straight cover of the flop lead single from Iggy’s most foolish album. A sustained friendship.)


So something of the emotion for me has to do with all that Berlin stuff. There are two aspects to that. One is the grubbily romantic idea of “pressure-cooker” West Berlin, seen from a distance as a place of hideouts, drugs, spying, longing, separation, and any other melodrama that “we” could project onto it. I’m sure this version of the city was overstated for lyrical purposes, but it probably did exist to a degree. The Berlin that fascinated and frightened me in 1993 was already a very different city, and both versions are hardly visible in today’s shiny metropolis.

The other aspect is the notion that moving to a different town in a different country could give you a new life and make your past disappear, even for someone already so celebrated — that it could really be so simple. What makes that idea available here is that Bowie didn’t just go, but then produced such different work after going that it really could appear as if his past had not gone with him.

This impression of self-effacement alongside all the self-promotion, the ability to erase the past, is a very attractive one for a pop star, and it fits also with the amount of collaborative work Bowie did. From some of the videos you can imagine that he was never happier than when playing keyboards or doing tour production for Iggy, singing backing vocals in a one-off with Pink Floyd, or playing second fiddle to Freddie Mercury or Mick Jagger.

Perl 6

I see the official release of the Perl 6 language specification happened on Christmas day.

The first piece of commercial web development I did was in Perl 5. A lot of people can probably say the same thing. This one was a content-management system led by James Elson in 1999 at PSWeb Ltd, a small agency in Farringdon that renamed itself to Citria and expanded rapidly during 1999-2001 before deflating even more rapidly when the dotcom bust arrived.

My recollection was that this particular CMS only ever had one customer, the BBC, who used it only for their very small Digital Radio site. But I still have a copy of the code and on inspection it turns out to have some comments that must have been added during a later project, so perhaps it did get deployed elsewhere. It was a neat, unambitious system (that’s a good thing, James is a tasteful guy) that presented a dynamic inline-editing blocks-based admin interface on a backend URL while generating static pages at the front end.

I remember there was an open question, for a time, about whether the company should pursue a product strategy and make this first CMS, or something like it, the basis of its business, or else take up a project strategy and use whatever technology from whichever provider seemed most appropriate for each client. The latter approach won out. It’s interesting to speculate about the other option.

(I like to imagine that the release of Perl 6 is sparking tiresome reminiscences like this from ageing programmers across the world.)

Perl 6 looks like an interesting language. (It’s a different language from Perl 5, not compatible with it.) The great strength of Perl was of course its text-processing capacity, and for all the fun/cute/radically-unmaintainable syntax showcased on the Perl 6 site, it’s clear that that’s still a big focus. Perl 6 appears to be the first language to attempt to do the right thing with Unicode from the ground up: that is, to represent text as Unicode graphemes, rather than byte strings (like C/C++), awkward UTF-16 sequences (Java), or ISO-10646 codepoint sequences (Python 3). This could, in principle, mean your ad-hoc botched-together text processing script in Perl 6 has a better chance of working reliably across multilingual inputs than it would in any other language. Since plenty of ad-hoc botched-together text processing still goes on in companies around the world, that has the feel of a good thing.
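A small illustration of the distinction, using Python 3 (one of the codepoint-sequence languages mentioned above): a single user-perceived character written as a base letter plus a combining accent is one grapheme, but more than one codepoint, code unit, or byte, depending on which view your language takes.

```python
# "é" written as a base letter plus a combining acute accent:
# one grapheme, two codepoints.
s = "e\u0301"

print(len(s))                           # → 2 (codepoints: Python 3's view)
print(len(s.encode("utf-8")))           # → 3 (bytes: a C byte string's view)
print(len(s.encode("utf-16-le")) // 2)  # → 2 (UTF-16 code units: Java's view)

# A grapheme-based language such as Perl 6 counts this as 1 character.
```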