Functional programming and the joy of learning something again

Twenty years ago, as a maths-and-computing undergraduate at the University of Bath, I was introduced to functional programming using the ML language by the excellent Julian Padget. We undergrads were set the traditional assignment of writing a sed-like text processor in ML, and found it first baffling and then, if we were lucky, rather exciting. I got all enthusiastic about functional languages and, like any proper enthusiast, spent a delightful, pointless while trying to design a toy language of my own.

Then I went off and got a job as a C++ programmer.

C++ is a practical, useful language, but it’s also complex, verbose, and baroque, with many different schools of practice. I’ve done a fair bit of work in other languages as well (equally boring ones, like Java) but effectively, I’ve spent much of the past two decades simply continuing to learn C++. That’s a bit crazy.

So when messing about with Android recently, I decided I wanted to try to get some of that old sense of joy back. I went looking for a language with the following properties:

  • It should be primarily functional rather than object-oriented
  • It should be strongly-typed, ideally with Hindley-Milner typing (the most exciting thing about ML, for the undergraduate me)
  • It should have a compiler to JVM bytecode, so I could use it directly in Android apps, and should be able to use Java library code
  • It should have a REPL, an interactive evaluation prompt for productive messing around
  • It should be nice to read—it should be obviously a language I wanted to learn, and I was going to be happily guided by personal whim
  • It must be simple enough for the old, stupid me to have a chance of getting to grips with it
  • And, while I wasn’t going to care very much how mainstream it was, it did need to be reasonably complete and reliable.

There are lots of languages out there for the JVM, including quite a few functional ones. Scala and Clojure are the best-known.

Scala (here in Wikipedia) is a multi-paradigm language that, for me, has shades of C++ in that it feels like it’s designed to improve on all sorts of things in Java rather than be something simple of its own. It also looks object-oriented first and functional second; doesn’t prioritise interactive evaluation; and although it has a sophisticated type system, it doesn’t do inference on function parameter types. All in all, it just seemed a bit chunky to me.

Clojure (here in Wikipedia) looks more fun. It has a very vibrant community and seems well-loved. It’s basically a Lisp for the JVM, and I’ve written Lisp before. That’s definitely interactive and functional. But I wasn’t really setting out to find Lisp again.

Yeti

Having sifted through a few other possibilities, the one that really seemed to fit the bill was Yeti.

Yeti is a functional language with Hindley-Milner type inference, for the JVM, with a relatively simple syntax and interoperability with Java, that has an interactive REPL up front. (See the snappy tutorial.) It seems to be basically the work of one programmer, but a programmer with taste.

The syntax of Yeti looks a bit like the way I remember ML—although when I went back and reviewed ML, it turned out not to be as similar as I’d thought. Functions are defined and applied with just about the simplest possible syntax, and the language deduces the types of all values except Java objects. The lack of commas in function application syntax makes it obvious how to do partial application, which is a straightforward way to obtain closures (functions with context).

Here’s a trivial function, an application of it, and a partial application. The lines starting with > are my typing, and the others are type deductions returned by the evaluation environment. Comments start with //, as in Java.

> f a b = a + b   // f is a function taking two arguments
f is number -> number -> number = <code$f>
> f 2 3           // apply f to two numbers
5 is number
> f 2             // apply f to one number, returning a new function
<yeti.lang.Fun2_> is number -> number
> g = f 2         // assign that function to g
g is number -> number = <yeti.lang.Fun2_>
> g 3             // apply g to the second number
5 is number

So far, so academic toy language. But the more I apply Yeti to practical problems, the more I find it does work as a practical language.
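To give a flavour of the practical side, here’s roughly what calling into Java looks like. The File example is just a made-up illustration, not anything from a real project: objects are constructed with new and their methods invoked with the # operator.

import java.io.File;

dir = new File(".");               // made-up example: construct a Java object with 'new'
println (dir#getAbsolutePath());   // call one of its methods with '#'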

What is challenging, of course, is that every language or language family has its set of idioms for handling everyday problems, and on the whole I simply don’t know those idioms yet in a functional language. This is the first time I’ve really tried to do anything serious with one. I know the language, roughly, but I don’t really speak the language. I’m still leafing through the phrasebook. My hovercraft is still full of eels.

With most of my incredibly stupid questions on the Yeti mailing list—which get very patient responses, but I really do need to cut back on the late-night stupidity and start reviewing my code in the morning instead—the answer turns out to be, “it’s simpler than I thought”. And I’m really enjoying it.

Why type inference?

A footnote. Why did I want a language with type inference?

Well, I’m lazy of course, and one of the most obvious sources of tedium in C++ and Java is having to type everything out over and over again.

And I’m a bit nostalgic about those undergrad days, no doubt.

But also, I’m increasingly mistrustful of my own work. In languages such as Python and Objective-C the concept of duck typing is widely used. This essentially means employing objects on the basis of their supported methods rather than their nominal class (“if it walks like a duck…”). This relaxed approach reaches a bit of a pinnacle in Ruby on Rails, which I’ve been working with a bit recently—and I find the magic and the assumptions go a bit far for me. I like to have some of the reassurance of type checking.

So, type inference gives you—in theory—the best of both worlds. You get to write your code using duck-typing principles, and the compiler proof-reads for you and checks that your types really do work out.
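Here’s a rough sketch of what that looks like in Yeti (the quack function and the structures are my own invented illustration). The function uses only the sound field, so the compiler infers that it will accept any structure carrying that field, whatever else it happens to have, and checks each call site accordingly.

quack d = d.sound ^ "!";    // uses only the 'sound' field
println (quack { sound = "quack", legs = 2 });     // a duck
println (quack { sound = "woof", name = "Rex" });  // not a duck, but it has the right field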

That’s the theory. Does it scale? Not sure. And if it was so great, wouldn’t it have become more mainstream during the past 30 years? Some moderately widely-used languages, like Haskell, use it, but they’re still only moderately widely-used. So, we’ll see.

There are some obvious immediate disadvantages to type inference. Long error messages, for a start.

And as a C++ guy I somewhat miss function overloading and even (sometimes) operator overloading. A function argument can take more than one type, of course—that’s the whole point—but only if the types can be unified in terms of their properties; you can’t just reuse a function name for a second function that has a similar effect but works on unrelated types.

Most of the situations in which I want function overloading can be handled instead using variant types or module namespaces, both of which work well in Yeti, but sometimes it seems to come down to inventing slightly more awkward function names than I’d really like.
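As a rough sketch of the variant-type alternative (the shapes and the area function here are my own invented example, not anything from the Yeti library): a single function dispatches on the variant tag, so one name covers the cases that overloading would otherwise handle.

area shape =
    case shape of
    Circle r: 3.14159 * r * r;   // near enough to pi for an example
    Rect wh: wh.w * wh.h
    esac;

println (area (Circle 2));
println (area (Rect { w = 3, h = 4 }));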

5 thoughts on “Functional programming and the joy of learning something again”

  1. And conversely: http://www.alsonkemp.com/haskell/reflections-on-leaving-haskell/

    Haskell is a funny one. Back in the day, the rise of Haskell (as opposed to ML) was one of the things that helped dry up my interest in functional programming.

    That’s probably partly to do with tribalism — I suppose I felt I was on the “ML side” — but it certainly has something to do with the language and its advocates as well. With ML you felt like you had a bit of theory to get to grips with, and then you could get doing something novel and interesting with a relatively simple syntax. Haskell already seemed complex enough, with AllThoseTypeClasses and things like monads, to appear as an endless desert of the dry and theoretical. The general promotion of it as the Right Language for Proper Computer Scientists since then hasn’t helped.

    In hindsight, Haskell probably is a good language to get to grips with. Lazy evaluation (not a feature of ML) seems like quite an important thing now in terms of how it influences reasoning about function composition. And a couple of times I’ve found myself with a vague feeling that a problem I’m trying to solve is one of those for which monads were invented… should I go and find out what they’re all about, or continue to bungle through with lots of nested case expressions?

    Even so — I guess I was quite happy to be able to pick up a nice modern functional language that wasn’t Haskell. And Yeti is a bit more relaxed, having variables as well as immutable bindings, a for loop, straightforward I/O, etc.
