Release Week ahead

I’ve built up quite a pile of almost-released new versions of software in the lab at C4DM. It’s time to get some of it released, and I’m planning to spend the next week or two doing exactly that. This means spending a lot of time getting cross about platform builds not working right (that’s where the term “cross-platform” comes from, you see).

This sort of work always calls for endless to-do lists; I thought this time I’d write up a list here, so I can talk a bit about the work we’ve been doing that I want to release. I’m hoping to get much of this done before the Digital Music Research Network meeting on the 17th December.

Vamp Plugins: MIREX edition

We entered a number of Vamp plugin implementations of audio analysis methods into the annual MIREX evaluation—I’ll write about that separately. But now we need to make properly packaged releases of them:

  • one submission, Segmentino from Matthias Mauch, has not yet been published at all, and I think we’d like to see a release of it go out;
  • one submission, the BeatRoot Vamp Plugin, has been sitting in “testing” status for a few years on our code site: MIREX gave us the impetus to finish it up, and now that it’s been through that evaluation as well, it’s surely ready to release;
  • the other submissions were from the standard QM Vamp Plugins set which has been publicly available for years, but we found a few bugs and had to make some small changes to suit the output formats needed for MIREX: we should make a release that corresponds exactly to the MIREX results as published.

Sonic Visualiser

We have a number of fixes (e.g. save window size correctly when maximised) and a couple of small new features (e.g. export audio as data) since the last release. I’d like to deal with a couple more of the outstanding bugs as well.

Sonic Visualiser “IM AF build”

A project headed by Panos Kudumakis at C4DM recently added support for loading MPEG-A Interactive Music Application Format files to Sonic Visualiser. This isn’t ready to go into mainstream builds yet, but we’d like to get a preview build released from the IM AF branch.

Klapuri Constant-Q Plugin

I’ve been working on a (causal) C++ implementation of this constant-Q transform method from Christian Schörkhuber and Anssi Klapuri.

(The constant-Q transform, like the FFT, transforms a sampled time-domain signal into a set of frequency components. But while the FFT produces output bins at constant frequency increments, the constant-Q transform produces them at constant log-frequency increments, so that every bin has the same ratio of centre frequency to bandwidth—that ratio is known as the Q factor, hence constant-Q. This means output bins are effectively spaced evenly in terms of musical pitch, which is handy.)
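As a rough illustration (mine, not code from the plugin), here is how the centre frequencies of constant-Q bins are laid out for a given number of bins per octave, and the Q factor that spacing implies under the common definition Q = 1/(2^(1/b) - 1):

```python
# Sketch: constant-Q bin centre frequencies, spaced at b bins per octave.
# Every bin then has the same ratio of centre frequency to bandwidth.

def cq_centre_frequencies(f_min, f_max, bins_per_octave):
    """Centre frequencies f_k = f_min * 2**(k / b), up to and including f_max."""
    freqs = []
    k = 0
    while True:
        f = f_min * 2 ** (k / bins_per_octave)
        if f > f_max:
            break
        freqs.append(f)
        k += 1
    return freqs

def q_factor(bins_per_octave):
    # Bandwidth of bin k is f_k * (2**(1/b) - 1), so Q is constant:
    return 1.0 / (2 ** (1.0 / bins_per_octave) - 1)

# 12 bins per octave puts one bin on each semitone: A0 (27.5 Hz) to A5 (880 Hz)
freqs = cq_centre_frequencies(27.5, 880.0, 12)
```

With 12 bins per octave the bins land on semitones, which is exactly the “spaced evenly in musical pitch” property: `freqs[12]` is 55 Hz, one octave above `freqs[0]`.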

Anyway, I’d like to get this released as a library and plugin, and then use it in some other work (still pending).

Slightly longer-term…

“Tonioni”

A mysterious project involving interactive editing of pitch estimations. This is something we’re working on for at least an internal release, and it might end up being public too. We’ll see.

“Sonic Vector”

This program (you can find source here) has been around for ages. It needs brushing up and making properly available, though even a rough release would be more useful than none. It’s a comparative viewer for multiple audio recordings, most relevant where those recordings represent different versions of the same underlying music.


Repairs

(Photo: a dishwasher, today.)

Waiting for a new hinge for the dishwasher door to be delivered.

We got this thing in 2002. At the end of 2009 it started cutting out and we got a repairman in to fix it. He came twice, failed to diagnose the problem, changed the motherboard, quickly gave up. We decided to replace the dishwasher, but I thought I should have a look inside it first.

Turned out the bottom of the machine, under the tub, was full of water. It had sprung a leak, and had simply cut out because the leak detector had tripped. I got in touch with a nice person at a parts company, who forwarded me the schematic for the dishwasher; I ordered a tiny new O-ring for 69p to fix the leak, and the machine worked again.

Obvious lesson: replacing something like an integrated dishwasher is a pain in the arse anyway, so you might as well look inside it first and see if there’s anything visibly wrong. There’s something amazingly satisfying about getting a “free” working dishwasher by fixing it yourself, especially if, like me, you’re not a very practical chap and wouldn’t normally expect to be able to. Admittedly you do have to spend a few evenings with a dishwasher in bits in the middle of your kitchen floor.

In 2012 the door lock broke. Inspired by my earlier success, I took apart the door, found the bit I needed, bought a new one, and got it working again. Now part of the hinge has sheared through, so that once you open the door fully you can’t close it again. I’m not very good with hinges—or anything where a lot of force is involved—and not sure I know how to replace this one, but the part’s on the way, so I’ll give it a go. This machine washes perfectly well, and I like the thought of being able to avoid scrapping it.

In other pointless domestic news, I’m about to run out of this year’s marmalade and we’re still almost two months from the oranges being back in season.

The extraordinary success of git(hub)

The previous post, How I developed my git aversion, talked about things that happened in the middle of 2007.

That was nearly a year before the public launch of github in April 2008. I know that because I just looked it up. I’m not sure I would have believed it otherwise: git without github seems like an alien idea.

Still, it must be true that github didn’t exist then, because it would have solved the two problems that I had with git. It answers the question of where to push to, when you’re using a random collection of computers that aren’t always on; and it provides the community of people you can ask questions of when you find yourself baffled.

And that community is? All developers. Or at least, all those who ever work in the open.

The amazing success of github—and it is facilitated by the architecture of git, if not the syntax of its tools—is to produce a public use of version control software that is completely out of proportion to the number of developers who ever cared about it before.

That’s partly because github has so many users who are not classic “software developers”, but I suspect it’s also because so many software developers would never otherwise use version control at all. I can’t believe that very many of github’s current users are there for the version control. They’re there for lightweight and easy code sharing. Version control is a happy accident, a side-effect of using this social site for your code.

I still don’t really use github myself, partly because I don’t really use git and partly because of a social network antipathy. (I don’t use Facebook or Google+ either.) But it’s a truly extraordinary thing that they’ve done.

How I developed my git aversion

In the summer of 2007, I switched some of my personal coding projects from the Subversion version control system to git.

Git was especially appealing because the network of computers I regularly worked on was quite flat. I did some work on laptops and some on desktops at home and in the office, but for projects like these I didn’t have an always-on central server that I could rely on pushing to, as one really needs to have with Subversion.

So even though I was working alone, I was eager for a distributed, peer-to-peer process: commit, commit, commit on one machine; push to the other machine or a temporary staging post online; pick up later when I’m at the keyboard of the other machine.

Git wasn’t the first distributed version control system I’d had installed—that would be darcs—but it was the first that looked like it might have a popular future, and the first one I was excited about being able to use.

My excitement lasted about a week, and then I lost some code.

I lost it quite hard: I knew I’d written it, I knew I’d committed it, and I knew I’d pushed it somewhere other than the machine I’d written it on, although I couldn’t remember which machine that had originally been. But I couldn’t find it anywhere.

It wasn’t visible on the machine I’d pushed it to, and I couldn’t find it on any of the machines I might have pushed from. In fact, I never did find it. I’d managed to get my code enfolded into the system so that I could no longer get back to where I’d left it. I didn’t know anyone else who used git at the time, to ask for help. I’d fallen for a program that was cleverer than me and that wasn’t afraid to show it.

And as a long-time user of centralised version control systems, the idea of losing code after you checked it in was really a bit shocking. That shouldn’t ever happen, no matter how dumb you are.

So I went back to Subversion. Technically-better is not always better.

The reason for my confusion was that in my excitement I’d been imagining I could freely do peer-to-peer pushes and end up with the same repository state at both ends—something any fool could tell you is just not the way it works. (I probably lost the code by pushing to the checked-out branch of a non-bare repository, something that is now harder to do by accident.)

As it happens, though, the way I had imagined it would work… is the way it works with Mercurial.

So when I found Mercurial I became happy again, and I’ve been happily using that ever since. Of course git has become so popular that you can’t really avoid it, and I know it well enough now that I wouldn’t make those mistakes again. Still, how much happier to use a system that actually does work the way you expect it to.

Rules of thumb for functional APIs

I’ve been trying to get to grips with what makes an API clean and pleasing to use in a functional programming language. (In my case this language has been Yeti, an attractive language that runs on the Java virtual machine.)

Here are some notes, for my own reference as much as anyone’s. Some of this may be particular to Yeti, but most of it is (hopefully, if I have things right) going to be obvious stuff to any functional programmer. Please leave a comment if you have any more or better suggestions!

Avoid mutable state where practical

  • It’s easier to reason correctly about the behaviour of a series of functions that each take an object and return a new object derived from it, leaving the original unaffected, than about a series of functions that can each change some hidden state in the object and so invisibly affect the behaviour of all subsequent functions.
  • This is the nub of practical functional programming: pure functions—functions without hidden state—are predictable, testable, and easier to understand within a wider system.
  • Objects with mutable state should be limited to things that really do have some conceptual internal state, such as database handles or streams.
  • So where in an object-oriented language you may have a class with internal state plus a set of methods that act on it, organise this as a module in which the functions (named somewhat like methods) accept state and return some new state. The state is most likely a struct, maybe with getters but not setters.
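A minimal sketch of that pattern in Python (my own illustration; the post is about Yeti, but the idea carries over directly): each function accepts a state value and returns a new one, leaving the original untouched.

```python
# State is an immutable struct; "methods" become plain functions that
# take the old state and return a new derived state.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Counter:
    count: int = 0

def incremented(c: Counter) -> Counter:
    # Returns a fresh Counter; the argument is never modified.
    return replace(c, count=c.count + 1)

c0 = Counter()
c1 = incremented(c0)
print(c0.count, c1.count)  # 0 1 — the original state is unaffected
```

Because `incremented` has no hidden state, its result depends only on its argument, which is what makes such functions easy to test and reason about.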

Give functions names relative to their modules

  • A Yeti module is a file whose top-level code evaluates to something. Typically it contains one or more bindings (function declarations usually) which are returned within a struct at the end of the file’s top-level code.
  • (Modules can be loaded either with a plain load expression or in a binding: load my.module versus m = load my.module. In the first case a function func within the module would be referred to after loading simply as func; in the second, it would be m.func.)
  • I think it’s best to expect that everyone will be using the second form to load your module, and to name functions so that they make sense when they have the module name immediately before them. You don’t need to worry about name collisions with other modules or the standard library. As a programmer used to object languages, I think this helps when structuring code so as to avoid too much mutable state, because it means the module can take on the function of namespacing that would be carried out by the object class.
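By way of analogy, the same naming convention works in Python, where callers also usually see functions behind a module qualifier. (This is a sketch with an invented “ratio” module, simulated inline so it is self-contained.)

```python
# Hypothetical module "ratio": imagine these were the top-level bindings
# in ratio.py. Functions are named to read well *behind* the module name,
# rather than repeating it, as in ratio.ratio_of.
from types import SimpleNamespace

ratio = SimpleNamespace(
    of=lambda a, b: a / b,        # callers write ratio.of(3, 4)
    inverse=lambda r: 1.0 / r,    # callers write ratio.inverse(0.75)
)

print(ratio.of(3, 4))  # 0.75
```

Python’s standard library follows the same idea: `json.loads`, not `json.json_loads`.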

Distinguish between curried arguments and other ways of packaging

  • The obvious way to write a function that takes more than one argument in Yeti is using what are known as “curried” arguments, with syntax: f a b = a + b
  • This allows partial application: with two arguments f 2 3 is 5, but with only one, f 2 makes a new function that takes another argument and adds 2 to it.
  • Curried arguments are useful where callers might actually want to bind the first argument and then reuse the function. As an extreme example, one might introduce a second argument for a function that in theory only needs one, like an FFT: a function declared fft size data is redundant because size can be queried from the data array, but it allows the FFT tables to be precomputed when the first argument is bound. So fft 1024, leaving the second argument unbound, becomes a bit like an object constructor.
  • But it’s easy to get used to the idea that this is just how multiple arguments are passed. Many functions won’t have callers that want to do partial application and won’t benefit from knowing some arguments before the rest. And the disadvantage of curried arguments is that the caller needs to remember what order they appear in.
  • So, functions that take a set of related arguments at once, and can’t benefit from knowing one argument in advance of the rest, should accept them as named values in a struct instead. It makes for a more discoverable API.
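Here is a Python sketch of the distinction (the names and the windowing stand-in for FFT table setup are my own; `functools.partial` plays the role of partial application):

```python
import math
from functools import partial

def windowed_sum(size, data):
    # The window here stands in for precomputed FFT tables: it depends
    # only on size, so binding size first lets callers reuse the function
    # across many inputs.
    window = [math.sin(math.pi * i / size) for i in range(size)]
    return sum(w * x for w, x in zip(window, data))

transform = partial(windowed_sum, 4)   # like "fft 1024" in the text
result = transform([1.0, 1.0, 1.0, 1.0])

# By contrast, a set of related arguments with no useful partial
# application is more discoverable as named values (keyword arguments
# here, a struct in Yeti) than as a fixed positional order:
def make_tone(*, frequency, amplitude, duration):
    return {"frequency": frequency, "amplitude": amplitude,
            "duration": duration}

tone = make_tone(frequency=440.0, amplitude=0.5, duration=2.0)
```

With the named-value form, call sites document themselves and the argument order can never be remembered wrongly.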

A quick update on Firefox OS

A couple of months ago I wrote about having bought a Geeksphone Keon, one of the early developer devices for Firefox OS. I haven’t done much—all right, any—developing with it, but I have continued to use it and update it occasionally on the Firefox OS 1.2 developer track.

Some of the changes so far:

  • Navigation and browsing have got quite a bit faster, kinetic scrolling is improved, and the on-screen keyboard has become more reliable. There’s evidently been a lot of tuning going on. As a pure web-browsing experience, this device is now really nice.
  • I wrote, “Anyone know what audio recording and playback latencies are like?” — well, it turned out that audio capture was not supported at all in the device as shipped. Support is now appearing in the Gecko 26 release branch which Firefox OS 1.2 will be based on, and basic audio input works on my device now.
  • Strangely, the on-screen keyboard has changed from showing a mixture of caps and lower case (i.e. lower case on each key until you hit Shift, then switching to caps), as on Android devices, to showing only caps as on iOS. I wonder why?
  • The 1.2 track isn’t all that reliable at the moment. For example the email client doesn’t work on my device, though that doesn’t actually bother me because the Fastmail browser interface works very well on it. Screen rotation seems to be taking a holiday, and the notifications pulldown doesn’t always want to go away when I ask it to. Very interesting to keep an eye on though.


How the Lenovo Yoga 11s compares

Previously… I was after a new small laptop and wasn’t sure what sort to get. I bought a Lenovo Yoga 11s (in grey). After a few days’ use, here’s how it compares against the criteria I had in mind when I bought it.

  • No bigger in any dimension than an A4 pad. The Yoga is just smaller than A4: about the same size as my previous Dell, but thinner and a bit lighter. It is slightly bigger and heavier than an 11″ MacBook Air. It’s the right size for a small laptop.
  • Good keyboard. It’s not as good as the bigger Lenovo laptops and doesn’t compare with my older Thinkpads, but it is nice in comparison to most other laptops this size, including the MacBook Air, and is better than the shiny Chromebook Pixel keyboard. It does have similarly spongy cursor keys to the Air though. Trackpad wasn’t a factor for me, but it’s fairly good: better than the glassy pad on the Air, but it would be better still with separate buttons.
  • Touchscreen with a decent screen resolution, i.e. not 1366×768. Failed here; this one is 1366×768. It’s fine when running Linux or old-school programs like Visual Studio, but “native” Windows 8 apps don’t do subpixel antialiased font rendering any more, so things look rather fuzzy there (just like OS X on the Air, in fact).
  • Should ship with Windows 8 but be able to dual-boot with a Linux install, run virtual machines, etc. Yes, fine here.
  • Quiet fan, no whining. I was worried about this—owners of the earlier Yoga 13 models have reported a nasty whiny fan noise. Sounds like a trivial thing, but it really matters. To my joy, the fan on this 11s is almost inaudible.
  • Comfortable ergonomics, plain appearance (ideally not silver). The ergonomics are generally good, except that you can’t open it one-handed. The palmrests are particularly lovely. It looks unostentatious enough. It is silver, but a pretty mundane silver plastic with black keyboard and bezel. I wanted boring, which is lucky because boring is what it is.
  • To cost under £1000. It was £700.

Things I didn’t think of beforehand:

  • The screen is a bit wibbly-wobbly. I don’t really want the flip-back hinge: I bought this as a nice small laptop rather than a convertible, and trying to use it in “tablet mode” just has the effect of making a nice small laptop look like a clumsy ponderous oaf of a tablet—not a good look. I do like the way the screen hinges right back to the table top, but I’m not sure it’s valuable enough to justify a bit of extra wobble in the hinge.
  • It has no Kensington lock slot. That’s a pisser because it means I can’t leave it alone in the lab during the day. I don’t work in a very secure place. I know the MacBook Airs don’t have them, but I thought that was just Apple being up their own arses. Hadn’t expected it of Lenovo.
  • Battery life (about 5 hours in my work) isn’t the best, but it’s acceptable and the machine recharges really fast.
  • The touchscreen isn’t as oil-resistant as some tablets and can get smeary pretty quickly. And it’s very reflective, so that matters.

And things I thought of but was nonetheless surprised by:

  • Processor speed. I said this wasn’t a factor, but I’m surprised to find that a current low-voltage Core i3 is much slower at compiling code than the 32-bit Core Duo in my chunky 6-year-old Thinkpad T60p. The Yoga is much faster at media work, like photo or audio editing, but it takes about twice as long to compile anything. I haven’t seen recent changes in processor evolution illustrated so clearly before.
  • Windows 8: good in many ways and good enough at the system level, but the built-in apps (Photos etc) really are still horribly unreliable. These apps, including Internet Explorer, seem to take the view that if a connection takes more than a few seconds they should just crash and let the user restart instead of having to wait. Not a great advert for those robust new Windows 8 development frameworks.

I must admit that, although I like this machine, I do think of it as an early iteration of a design I hope Lenovo will keep working on. It’s very nice, but a version with a higher-resolution, more oil-resistant screen and longer battery life from the newer lower-power Intel CPUs would be nicer still.