I wrote a small command-line text processing program in four different ML-derived languages, to try to get a feel for how they compare in terms of syntax, library, and build-run cycles.
ML is a family of functional programming languages that have grown up during the past 40 years and more, with strong static typing, type inference, and eager evaluation. I tried out Standard ML, OCaml, Yeti, and F#, all compiling and running from a shell prompt on Linux.
The job was to write a utility that:
- accepts the name of a CSV (comma-separated values) file as a command-line argument
- reads all the lines from that file, each consisting of the same number of numeric columns
- sums each column and prints out a single CSV line with the results
- handles large inputs
- fails if it finds a non-numeric column value or an inconsistent number of columns across lines (an uncaught exception is acceptable)
A toy exercise, but one that touches on file I/O, library support, string processing and numeric type conversion, error handling, and the build-invocation cycle.
I tested on a random Big Data CSV file that I had to hand; running the wc (word count) utility on it gives the size and a plausible lower bound for our program’s runtime:
$ time wc big-data.csv
  337024  337024 315322496 big-data.csv

real    0m3.086s
user    0m3.050s
sys     0m0.037s
$
I’ve included timings throughout because I thought a couple of them were interesting, but they don’t tell us much except that none of the languages performed badly (with the slowest taking about 16 seconds on this file).
Finally I wrote the same thing in Python as well for comparison.
Practical disclaimer: If you actually have a CSV file you want to do things like this with, don’t use any of these languages. Do it with R instead, where this exercise takes three lines including file I/O. Or at least use an existing CSV-mangling library.
Here are the programs I ended up with, and my impressions.
(Note that although I haven’t included any type annotations, like all ML variants this is statically typed and the compiler enforces type consistency. There are no runtime type errors.)
This is the first SML program I’ve written since 23 years ago. I enjoyed writing it, even though it’s longer than I’d hoped. The Basis library doesn’t offer a whole lot, but it’s nicely structured and easy to understand. To my eye the syntax is fairly clear. I had some minor problems getting the syntax right first time—I kept itching to add end or semicolons in unnecessary places—but once written, it worked, and my first attempt was fine with very large input files.
I had fun messing around with a few different function compositions before settling on the one above, which takes the view that, since summing up a list is habitually expressed in functional languages as an application of fold, we could start with a function to apply a fold over the sequence of lines in a file.
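The same composition can be sketched in Python; the names here (fold_lines, add_row) are my own, not from any of the listings:

```python
from functools import reduce

def fold_lines(f, acc, path):
    """Eagerly fold f over the lines of a text file."""
    with open(path) as fh:
        return reduce(f, fh, acc)

def add_row(totals, line):
    """One fold step: parse a CSV line and add it to the running totals."""
    row = [float(x) for x in line.split(",")]
    if totals and len(totals) != len(row):
        raise ValueError("inconsistent number of columns")
    return row if not totals else [a + b for a, b in zip(totals, row)]
```

Summing the columns of a whole file is then just fold_lines(add_row, [], path).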
More abstractly, there’s something delightful about writing in a language with a small syntax that was fixed and standardised 18 years ago and that has more than one conforming implementation to choose from. C++ programmers (like me) have spent much of those 18 years worrying about which bits of which sprawling standards are available in which compiler. And let’s not talk about the lifespans of web development frameworks.
To build and run it I used the MLton native-code compiler:
$ time mlton -output sum-sml sum.sml

real    0m2.295s
user    0m2.160s
sys     0m0.103s
$ time ./sum-sml big-data.csv
150.595368855,68.9467923856,[...]

real    0m16.383s
user    0m16.370s
sys     0m0.027s
$
The executable was a 336K native binary with dependencies on libm, libgmp, and libc. Although the compiler has a good reputation, this was (spoiler alert!) the slowest of these language examples both to build and to run. I also tried the PolyML compiler, with which it took less than a tenth of a second to compile but 26 seconds to run, and Moscow ML, which was also fast to compile but much slower to run.
OCaml is a more recent language, from the same root but with a more freewheeling style. It seems to have more library support than SML and, almost certainly, more users. I started taking an interest in it recently because of its use in the Mirage OS unikernel project—but of these examples it’s the one in which I’m least confident in my ability to write idiomatic code.
I’m in two minds about this code. I don’t much like the way it looks and reads. Syntax-wise there are an awful lot of lets; I prefer the way SML uses fun for top-level function declarations and saves let for scoped bindings. OCaml has a more extensive but scruffier library than SML, and although there’s lots of documentation, I didn’t find it all that simple to navigate—as a result I’m not sure I’m using the most suitable tools here. There is probably a shorter, simpler way. And my first attempt didn’t work for long files: caught out by the fact that input_line throws an exception at end of file (ugh), I broke tail-recursion optimisation by adding an exception handler.
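Python never eliminates tail calls, so it shows the same failure mode in miniature; this sketch is my own illustration of the hazard, not a translation of the OCaml code:

```python
import io

def sum_first_column(fh):
    """Sum the first CSV column by recursing once per line.
    Each line costs a stack frame, so a large file exhausts the
    stack, just as a non-tail-recursive OCaml loop would."""
    line = fh.readline()
    if not line:  # EOF signalled by a sentinel value, not an exception
        return 0.0
    return float(line.split(",")[0]) + sum_first_column(fh)
```

A few thousand lines of input are enough to hit Python’s default recursion limit.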
On the other hand, writing this after the SML and Yeti versions, I found it very easy to write syntax that worked, even when I wasn’t quite clear in my head what the syntax was supposed to look like. (At one point I started to worry that the compiler wasn’t working, because it took no time to run and printed no errors.)
I didn’t spot at first that OCaml ships with separate bytecode and optimising native-code compilers, so my first tests seemed a bit slow. In fact it was very fast indeed:
$ time ocamlopt -o sum-ocaml str.cmxa sum.ml

real    0m0.073s
user    0m0.063s
sys     0m0.003s
$ time ./sum-ocaml big-data.csv
150.595368855,68.9467923856,[...]

real    0m7.761s
user    0m7.740s
sys     0m0.027s
$
The OCaml native binary was 339K and depended only on libm, libdl, and libc.
I love Yeti’s dismissive approach to function and binding declaration syntax—no fun keywords at all. Psychologically, this is great when you’re staring at an empty REPL prompt trying to decide where to start: no syntax to forget, the first thing you need to type is whatever it is that you want your function to produce.
The disadvantage of losing fun is that Yeti needs semicolons to separate bindings. It also makes for a visually rather irregular source file.
As OCaml is like a pragmatic SML, so Yeti seems like a pragmatic OCaml. It provides some useful tools for a task like this one. Although the language is eagerly evaluated, lazy lists have language support and are interchangeable with standard lists, so the standard library can expose the lines of a text file as a lazy list, making a fold over it very straightforward. The default map2 function produces lazy lists.
Unfortunately, this nice feature then bit me on the bottom in my first draft, as the use of a lazy map2 in line 6 blew the stack with large inputs (why? I’m not completely sure yet). The standard library has an eager map as well as a lazy one, but lacks an eager map2, so I fixed this by converting the number row to an array (arguably the more natural type for it).
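Python generators make a rough analogy (the function names are my own invention, and Python laziness is only an approximation of Yeti’s): a lazy elementwise sum defers every addition, so unevaluated work piles up across thousands of rows, while an eager one keeps the running totals concrete at each step.

```python
def add_rows_lazy(totals, row):
    # Deferred: nothing is added until the result is iterated,
    # so each fold step wraps the previous one in another lazy layer.
    return map(lambda a, b: a + b, totals, row)

def add_rows_eager(totals, row):
    # Strict: the additions happen now and produce a plain list.
    return [a + b for a, b in zip(totals, row)]
```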
The Yeti compiler runs very quickly and compiles to Java .class files. With a small program like this, I would usually just invoke it and have the compiler build and run it in one go:
$ time yc ./sum.yeti big-data.csv
150.59536885458684,68.9467923856445,[...]

real    0m14.440s
user    0m26.867s
sys     0m0.423s
$
Those timings are interesting, because this is the only example to use more than one processor—the JVM uses a second thread for garbage collection. So it consumed more CPU time than the MLton binary, but finished sooner…
F♯ is an ML-style language developed at Microsoft and subsequently open-sourced, with a substantial level of integration with the .NET platform and libraries.
F♯ also has language support for lazy lists, but with different syntax (they’re called sequences) and a Python-style yield keyword for generating them via continuations. The sequence generator here came from one of the example tutorials.
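In Python the equivalent is an ordinary generator function; this is my own sketch rather than the tutorial code:

```python
def lines_of(path):
    """Lazily yield the lines of a text file one at a time,
    much like an F# seq built with yield."""
    with open(path) as fh:
        for line in fh:
            yield line.rstrip("\n")
```

Nothing is read until the sequence is consumed, so even a very large file can be folded over in constant memory.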
A lot of real F♯ code looks like it’s mostly plugging together .NET calls, and there are a lot of capital letters going around, but the basic functional syntax is almost exactly OCaml. It’s interesting that the fundamental unit of text output seems to be the formatted print (printfn). I gather F♯ programmers are fond of their |> operator, so I threw in one of those.
I’m running Linux so I used the open source edition of the F♯ compiler:
$ time fsharpc -o sum-fs.exe sum.fs
F# Compiler for F# 3.1 (Open Source Edition)
Freely distributed under the Apache 2.0 Open Source License

real    0m2.115s
user    0m2.037s
sys     0m0.063s
$ time ./sum-fs.exe big-data.csv
150.595368854587,68.9467923856445,[...]

real    0m13.944s
user    0m13.863s
sys     0m0.070s
$
The compiler produced a mere 7680-byte .NET assembly, which of course (like Yeti) requires a substantial managed runtime. Performance seems pretty good.
Python is not an ML-like language; I include it just for comparison.
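A minimal version of the Python script, reconstructed to match the behaviour described above (the original listing may differ in detail):

```python
import sys

def add_to_totals(totals, row):
    """Return a new totals list; an inconsistent column count is an error."""
    if totals and len(totals) != len(row):
        raise ValueError("inconsistent number of columns")
    return row if not totals else [a + b for a, b in zip(totals, row)]

def sum_columns(path):
    """Sum each numeric column of a CSV file, reading line by line."""
    totals = []
    with open(path) as fh:
        for line in fh:
            row = [float(x) for x in line.split(",")]  # raises on non-numeric
            totals = add_to_totals(totals, row)
    return totals

# Guarded so the functions can be imported or tested without an argument.
if __name__ == "__main__" and len(sys.argv) > 1:
    print(",".join(str(t) for t in sum_columns(sys.argv[1])))
```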
Feels odd having to use the return keyword again, after using languages in which one just leaves the result at the end of the function.
This is compact and readable. A big difference from the above languages is invisible—it’s dynamically typed, without any compile-time type checking.
To build and run this, I just invoked Python on it:
$ time python ./sum.py ./big-data.csv
150.59536885458684,68.9467923856445,[...]

real    0m10.939s
user    0m10.853s
sys     0m0.060s
$
That’s Python 3. Python 2 was about a second faster. I was quite impressed by this result, having expected to suffer from my decision to always return new lists of totals rather than updating the values in place.
Well, it was a fun exercise. Although I’ve written more in these languages than appears here, and read quite a bit about all of them, I’m still pretty ignorant about the library possibilities for most of them, as well as about the object support in OCaml and F♯.
I am naively impressed by the OCaml compiler. For language “feel”, it gave me the least favourable first impression but I can imagine it being pleasant to use daily.
F♯ on Linux proved unexpectedly straightforward (and fast). Could be a nice choice for web and server applications.
I have made small web and server applications using Yeti and enjoyed the experience. Being able to integrate with existing Java code is good, though of course doubly so when the available libraries in the language itself are so limited.
Standard ML has a clarity and simplicity I really like, and I’d still love to try to use it for something serious. It’s just, well, nobody else seems to—I’d bet quite a lot of people have learned the language as undergrads (as I did), but it doesn’t seem to be a popular choice beyond the classroom. Hardly anyone uses Yeti either, but the Java interoperability means you aren’t so dependent on other developers.
Practically speaking, for jobs like this, and where you want to run something yourself or give someone the source, there’s not much here to recommend anything other than Python. Of course I do appreciate both compile-time typechecking and (for some problems) a more functional style, which is why I’m writing this at all.
But the fact that compilers for both SML and OCaml can generate compact and quick native binaries is interesting, and Yeti and F♯ are notable for their engagement with other existing frameworks.
If you’ve any thoughts or suggestions, do leave a comment.