MIREX 2015 submissions

For the past three years now, we’ve taken a number of Vamp audio analysis plugins published by the Centre for Digital Music and submitted them to the annual MIREX evaluation. The motivation is to give other methods a baseline to compare against, to compare one year’s evaluation metrics and datasets against the next year’s, and to give our group a bit of visibility. See my posts about this process in 2014 and in 2013.

Here are this year’s outcomes. All of these categories are ones we had submitted to before, but I managed to miss a couple of category deadlines last year, so with those back in we covered more categories this year than in either 2013 or 2014.

Structural Segmentation

Results for the four datasets are here, here, here, and here. This is one of the categories I missed last year and, although I find the evaluations quite hard to understand, it’s clear that the competition has moved on a bit.

Our own submissions, the Segmentino plugin from Matthias Mauch and the much older QM Segmenter from Mark Levy, produced the expected results (identical to 2013 for Segmentino; similar for QM Segmenter, which has a random initialisation step). As before, Segmentino obtains the better scores. There was only one other submission this year, a convolutional neural network based approach from Thomas Grill and Jan Schlüter which (I think) outperformed both of ours by some margin, particularly on the segment boundary measure.

Multiple Fundamental Frequency Estimation and Tracking

Results here and here. In addition to last year’s submission for the note tracking task of this category, this year I also scripted up a submission for the multiple fundamental frequency estimation task. Emmanouil Benetos and I had made some tweaks to the Silvet plugin during the year, and we also submitted a new fast “live mode” version of it. The evaluation also includes a new test dataset this year.

Our updated Silvet plugin scores better than last year’s version in every test they have in common, and the “live mode” version is actually not all that far off, considering that it’s very much written for speed. (Nice to see a report of run times in the results page — Silvet live mode is 15-20 times as fast as the default Silvet mode and more than twice as fast as any other submission.) Emmanouil’s more recent research method does substantially better, but this is still a pleasing result.

This category is an extremely difficult one, and it’s also painfully difficult to get good test data for it. There’s plenty of potential here, but it’s worth noting that a couple of the authors of the best submissions from last year were not represented this year — in particular, if Elowsson and Friberg’s 2014 method had appeared again this year, it looks as if it would still be at the top.

Audio Onset Detection

Results here. Although the top scores haven’t improved since last year, the field has broadened a bit — it’s no longer only Sebastian Böck vs the world. Our two submissions, both venerable methods, are now placed last and second-last.

Oddly, our OnsetsDS submission gets slightly better results than last year despite being the same, deterministic, implementation (indeed exactly the same plugin binary) run on the same dataset. I should probably check this with the task captain.

Audio Beat Tracking

Results here, here, and here. Again the other submissions are moving well ahead and our BeatRoot and QM Tempo Tracker submissions, producing unchanged results from last year and the year before, are now languishing toward the back. (Next year will see BeatRoot’s 15th birthday, by the way.) The top of the leaderboard is largely occupied by a set of neural network based methods from Sebastian Böck and Florian Krebs.

This is a more interesting category than it gets credit for, I think — still improving and still with potential. Some MIREX categories have very simplistic test datasets, but this category introduced an intentionally difficult test set in 2012 and it’s notable that the best new submissions are doing far better here than the older ones. I’m not quite clear on how the evaluation process handles the problem of what the ground truth represents, and I’d love to know what a reasonable upper bound on F-measure might be.
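For what it’s worth, my understanding of the usual beat-tracking F-measure is that an estimated beat counts as a hit if it falls within a small window (commonly around ±70 ms) of an otherwise unmatched ground-truth beat, with precision and recall then defined in the usual way. Here is a toy sketch of that definition; the tolerance value and the one-to-one matching rule are my assumptions, not a description of the MIREX code:

    def beat_f_measure(est_beats, ref_beats, tol=0.07):
        """Toy beat-tracking F-measure.

        est_beats, ref_beats: lists of beat times in seconds.
        tol: tolerance window in seconds around each reference beat.
        Each estimated beat can match at most one reference beat.
        """
        unmatched = list(est_beats)
        hits = 0
        for r in ref_beats:
            # closest still-unmatched estimate within the tolerance window, if any
            candidates = [(abs(e - r), i) for i, e in enumerate(unmatched)
                          if abs(e - r) <= tol]
            if candidates:
                _, i = min(candidates)
                del unmatched[i]
                hits += 1
        precision = hits / len(est_beats) if est_beats else 0.0
        recall = hits / len(ref_beats) if ref_beats else 0.0
        return (2 * precision * recall / (precision + recall)
                if precision + recall else 0.0)

    # beat_f_measure([0.51, 1.02, 1.48, 2.2], [0.5, 1.0, 1.5, 2.0]) -> 0.75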

Audio Tempo Estimation

Results here. This is another category I missed last year, but we get the same results for the QM Tempo Tracker as we did in 2013. It still does tolerably well considering its output isn’t well fitted to the evaluation metric (which rewards estimators that produce best and second-best estimates across the whole piece).

The top scorer here is a neural network approach (spotting a theme here?) from Sebastian Böck, just as for beat tracking.

Audio Key Detection

Results here and here. The second dataset is new.

The QM Key Detector gets the same results as last year for the dataset that existed then. It scores much worse on the new dataset, which suggests it may be a more realistic test. Again there were no other submissions in this category — a pity now that it has a second dataset. Does nobody like key estimation? (I realise it’s a problematic task from a musical point of view, but it does have its applications.)

Audio Chord Estimation

Poor results for Chordino because of a bug which I went over at agonising length in my previous post. This problem is now fixed in Chordino v1.1, so hopefully it’ll be back to its earlier standard in 2016!

Some notes

Neural networks

… are widely used this year. Several categories contained at least one submission whose abstract referred to a convolutional or recurrent neural network or deep learning, and in at least five categories I think a neural network method can reasonably be said to have “won” the category. (Yes I know, MIREX isn’t a competition…)

  • Structural segmentation: convolutional NN performed best
  • Beat tracking: NNs all over the place, definitely performing best
  • Tempo estimation: NN performed best
  • Onset detection: NN performed best
  • Multi-F0: no NNs I think, but it does look as if last year’s “deep learning” submission would have performed better than any of this year’s
  • Chord estimation: NNs present, but not yet quite at the top
  • Key detection: no NNs, indeed no other submissions at all

Categories I missed

  • Audio downbeat estimation: I think I just overlooked this one, for the second year in a row. As last year, I should have submitted the QM Bar & Beat Tracker plugin from Matthew Davies.
  • Real-time audio to score alignment: I nearly submitted the MATCH Vamp Plugin for this, but actually it only produces a reverse path (offline alignment) and not a real-time output, even though it’s a real-time-capable method internally.

Other submissions from the Centre for Digital Music

Emmanouil Benetos submitted a well-performing method, mentioned above, in the Multiple Fundamental Frequency Estimation & Tracking category.

Apart from that, there appear to be none.

This feels like a pity — evaluation is always a pain and it’s nice to get someone else to do some of it.

It’s also a pity because several of the plugins I’m submitting are getting a bit old and are falling to the bottom of the results tables. There are very sound reasons for submitting them (though I may drop some of the less well performing categories next year, assuming I do this again) but it would be good if they didn’t constitute the only visibility QM researchers have in MIREX.

Why would this be the case? I don’t really know. The answer presumably must include some or all of

  • not working on music informatics signal-processing research at all
  • working on research that builds on feature extractors, rather than building the feature extractors themselves
  • doing research that isn’t congruent with MIREX tasks (e.g. looking at dynamics or articulations rather than, say, notes or chords)
  • using similar methods, but not on mainstream music recordings (e.g. solo singing, animal sounds)
  • considering the existing state of the art good enough
  • lacking the background to compete with current methods (e.g. the wave of NNs) and so sticking with progressive enhancements of existing models
  • lacking the data to compete with current methods
  • not being aware of MIREX
  • not having it prioritised by a supervisor

The last four reasons would be a problem, but the rest might not be. It could really be that MIREX isn’t very relevant to the people in this group at the moment. I’ll have to see what I can find out.

New software releases all around

A few months ago (in February!!) I wrote a post called Unreleased project pile-up that gave a pretty long list of software projects I’d been working on that could benefit from a proper release. It ended: “let’s see how many of these I can tidy up & release during the next few weeks”. The answer: very few.

During the past couple of weeks I’ve finally managed to make a bit of a dent, crossing off these applications from the list:

along with these earlier in the year:

and one update that wasn’t on the list:

Apart from the Python Vamp host, those all fall into the category of “overdue updates”. I haven’t yet managed to release much of the genuinely new software on the list.
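The Python Vamp host, for those who haven’t tried it, lets you run any installed Vamp plugin directly from Python. Here is a minimal sketch of the sort of thing it enables, assuming the vamp module and the QM plugins are installed; the plugin key and output name are from memory, so treat them as placeholders and check the module’s documentation:

    import numpy as np
    import vamp
    from scipy.io import wavfile   # any audio loader will do

    rate, data = wavfile.read("example.wav")
    if data.ndim > 1:
        data = data.mean(axis=1)               # mix down to mono
    data = data.astype(np.float64) / 32768.0   # assuming 16-bit PCM input

    # run the QM Tempo Tracker over the whole file and collect its beat output
    result = vamp.collect(data, rate, "qm-vamp-plugins:qm-tempotracker", output="beats")
    for feature in result["list"]:
        print(feature["timestamp"], feature.get("label", ""))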

One “overdue update” that keeps getting pushed back is the next release of Sonic Visualiser. This is going to be quite a major release, featuring audio recording (a feature I once swore I would never add), proper support for retina and hi-dpi displays with retina rendering in the view layers and a new set of scalable icons, support for very long audio files (beyond the 32-bit WAV limit), a unit conversion window to help convert between units such as Hz and MIDI pitch, and a few other significant features. There’s a little way to go before release yet though.
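The unit conversion itself is nothing exotic: the frequency-to-MIDI-pitch relationship is the usual one with MIDI note 69 as A4 at 440 Hz. As a quick illustration of what the window computes (it handles more units than this, of course):

    import math

    def hz_to_midi(freq_hz, a4=440.0):
        """Convert a frequency in Hz to a (possibly fractional) MIDI pitch number."""
        return 69.0 + 12.0 * math.log2(freq_hz / a4)

    def midi_to_hz(midi_pitch, a4=440.0):
        """Convert a MIDI pitch number back to a frequency in Hz."""
        return a4 * 2.0 ** ((midi_pitch - 69.0) / 12.0)

    # hz_to_midi(261.63) -> ~60.0 (middle C); midi_to_hz(69) -> 440.0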

Unreleased project pile-up

Several of the software projects I’ve been working on at the Centre for Digital Music are in need of a new release.

I ran some queries on the SoundSoftware code site, where much of my code lives, to find

  • projects I’m a member of that have seen some work (in the form of repository commits) more recently than their most recent file release, and
  • projects I’m a member of that haven’t seen a file release at all

These returned 28 and 100 projects respectively.

I feel I’ve been a bit lax.

These of course include things that aren’t releasable yet, never will be, or don’t need to be packaged up into releases for one reason or another, as well as projects I don’t actively participate in or am not responsible for making releases of. I eventually eliminated about half of the first list and 93% of the second one.

These are the ones that remained—things that could usefully be released and which I am generally responsible for releasing. Let’s see how many of these I can tidy up & release during the next few weeks:

Existing stuff that could do with a new release

New stuff that hasn’t been released yet

Some of these projects may still be marked private, in which case the links won’t work yet, but they are all planned for release so that should change eventually.

MIREX 2014 submissions

Last year, Luís Figueira and I experimentally submitted a batch of audio analysis methods, implemented in Vamp plugins developed over the past few years at the C4DM, to the Music Information Retrieval Evaluation Exchange (MIREX). I found the process interesting and wrote an article about the results.

I wasn’t sure whether to do a repeat submission this year—most of the plugins would be the same—but Simon Dixon persuaded me. The test datasets might change; it might be interesting to see whether results are consistent from one year to the next; and it’s always good to provide one more baseline for other submissions to compare themselves against. So I dusted off last year’s submission scripts, added the new Silvet note transcription plugin, and submitted them.

Here goes with the outcomes. There is also an overview poster published by MIREX. See last year’s article for more information about what the tasks consist of.

Multiple Fundamental Frequency Estimation and Tracking

The only category we didn’t submit to last year. This is the problem of deducing which notes are being played, and at what times, in music where more than one note happens at once. I submitted the Silvet plugin which is based on a method by Emmanouil Benetos that had performed well in MIREX in an earlier year.

The results for this category are divided into two parts, multiple fundamental frequency estimation and note tracking. I submitted a script only for the note tracking part. I would describe the performance of our plugin as “correct”, in that it was reliably mid-pack across the board, pretty good for piano transcription, and generally marginally better than the MIREX 2012 submission which inspired it.

This was a fairly popular category this year, and one submission in particular improved quite substantially on previous years’ results—it may be no coincidence that that submission’s abstract employs the phrase-of-the-moment deep learning.

Audio Onset Detection

The same two submissions as last year (OnsetsDS and QM Onset Detector) and exactly the same results—the test dataset is unchanged and the plugins are entirely deterministic. Last year I remarked that our methods are quite old and other submissions should improve on them over time, but this year’s top methods were actually no improvement on last year’s.

Audio Beat Tracking

Again the same two submissions as last year (BeatRoot and QM Tempo Tracker) and exactly the same results (1, 2, 3), behind the front-runners but still reasonably competitive. While the best-performing methods continue to advance, it’s clear that beat tracking is still not a solved problem.

Audio Key Detection

Last year we entered a plugin that wasn’t expected to do very well here, and it swept the field. This year everyone else seems to have dropped out, so our repeat submission was in fact the only entry! (It got the same results as last year.)

Audio Chord Estimation

This is interesting partly because our submission (Chordino) performed very well last year but the evaluation metric has since changed.

Sadly, there were only three submissions this year. Chordino still looks good in all three datasets (1, 2, 3) but it is now ranked second rather than first for all three. I’m a bit disappointed that the new leading submission seems to be lacking a descriptive abstract.

Categories we could have entered but didn’t

Audio Melody Extraction

Last year’s submission wasn’t really good enough to repeat.

Audio Downbeat Estimation

I overlooked this task, which was new this year. Otherwise I could have submitted the QM Bar and Beat Tracker plugin.

Audio Tempo Estimation, Structural Segmentation

These categories had an earlier submission deadline than the rest, and stupidly I missed it.

SoundSoftware tutorial at AES 53

I’ll be co-presenting the first tutorial session at the Audio Engineering Society 53rd Conference on Semantic Audio, this weekend.

(It’s the society’s 53rd Conference, and it happens to be about semantic audio. It’s not their 53rd conference about semantic audio. In fact it’s their second: that was also the theme of the AES 42nd Conference in 2011.

What is semantic audio? Good question, glad you asked. I believe it refers to extraction or estimation of any semantic material from audio, including speech recognition and music information retrieval.)

My tutorial, for the SoundSoftware project, is about developing better and more reliable software during research work. That’s a very deep subject, so at best we’ll barely hint at a few techniques during one programming exercise:

  • making readable experimental code using the IPython Notebook, and sharing code for review with colleagues and supervisors;
  • using version control software to manage revisions and escape disaster;
  • modularising and testing any code that can be used in more than one experiment;
  • packaging, publishing, and licensing code;
  • and the motivations for doing the above.

We presented a session at the Digital Audio Effects (DAFx) conference in 2012 which covered much of this material in presentation style, and a tutorial at the International Society for Music Information Retrieval (ISMIR) in 2012 which featured a “live” example of test-driven development in research software. You can find videos and slides from those tutorials here. The theme of this one is similar, and I’ll be reusing some code from the ISMIR tutorial, but I hope we can make this one a bit more hands-on.
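To give a flavour of the test-driven idea (an illustrative sketch of my own, not the actual tutorial material): even a tiny analysis helper becomes easier to trust, and to reuse across experiments, once it has tests you can re-run after every change. Shown as a single snippet for brevity; in practice the function and the tests would live in separate files and be run with something like pytest.

    import numpy as np
    import pytest

    def rms(signal):
        """Root-mean-square level of a 1-D signal array."""
        signal = np.asarray(signal, dtype=float)
        if signal.size == 0:
            raise ValueError("empty signal")
        return float(np.sqrt(np.mean(signal ** 2)))

    def test_silence_has_zero_rms():
        assert rms(np.zeros(1000)) == 0.0

    def test_full_scale_sine_has_rms_of_one_over_root_two():
        t = np.arange(44100) / 44100.0
        assert rms(np.sin(2 * np.pi * 440 * t)) == pytest.approx(1 / np.sqrt(2), rel=1e-3)

    def test_empty_input_is_rejected():
        with pytest.raises(ValueError):
            rms([])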


QM Vamp Plugins in MIREX

During the past 7 years or so, we in the Centre for Digital Music have published quite a few audio analysis methods in the form of Vamp plugins: bits of software that you can download and use yourself with Sonic Visualiser, run on a set of audio recordings with Sonic Annotator, or use with your own applications.

Some of these methods were, and remain, pretty good. Some are reasonably good, simplified versions of work that was state-of-the-art at the time, but might not be any more. Some have always been less impressive. They are all available free, with source code—or with commercial licences for companies that want to incorporate them into their products.

This year we thought we should give them a trial against the current state of the art in academia. Luís Figueira and I prepared a number of entries for the annual Music Information Retrieval Evaluation Exchange (or MIREX), submitting a Vamp plugin from our group in every category where we had one available.

MIREX, which is an excellent large-scale community endeavour organised by J Stephen Downie at UIUC, works by running your methods across a known test dataset of music recordings, comparing the results against “ground truth” produced in advance by humans, and publishing scores for how well each method compares.

Here’s how we got on for each evaluation task.

Audio Onset Detection

(That is, identifying the times in the music recording where each of the individual notes begin.)

We submitted two plugins here: the QM Onset Detector plugin implementing a number of (by now) standard methods, from Juan Bello and others back in 2005; and OnsetsDS, a refinement by Dan Stowell aimed at real-time use (so not directly relevant to this task). Both did modestly well. These methods have been published for a long time and become widely known, so it would be a disappointment if current work didn’t improve on them.
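For readers new to the problem: most of these classic methods reduce the audio to an “onset detection function” that peaks where the spectrum changes abruptly, and then peak-pick it. The following is not the plugins’ own algorithm, just a minimal spectral-flux sketch of the general idea, with arbitrary frame, hop and threshold choices:

    import numpy as np

    def spectral_flux_onsets(audio, sr, frame=1024, hop=512, threshold=1.5):
        """Very rough spectral-flux onset detector (illustration only)."""
        if len(audio) < frame:
            return []
        window = np.hanning(frame)
        n_frames = 1 + (len(audio) - frame) // hop
        mags = np.array([np.abs(np.fft.rfft(window * audio[i * hop : i * hop + frame]))
                         for i in range(n_frames)])
        # positive (half-wave rectified) spectral change between successive frames
        flux = np.sum(np.maximum(0.0, np.diff(mags, axis=0)), axis=1)
        onsets = []
        for i in range(1, len(flux) - 1):
            local = flux[max(0, i - 5) : i + 6]
            # a local peak that also clears a median-based threshold counts as an onset
            if flux[i] == local.max() and flux[i] > threshold * np.median(local):
                onsets.append((i + 1) * hop / sr)
        return onsets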

Audio Beat Tracking

(Tapping along with the beat.)

Here we entered the QM Tempo Tracker plugin, based on the work of Matthew Davies, and a Vamp plugin implementation of Simon Dixon’s BeatRoot beat tracker. Both of these are now quite old methods (especially BeatRoot, although the plugin is new). The results for three datasets are here, here and here.

Both the original BeatRoot and a different version of Matthew Davies’ work were included in the MIREX evaluation back in 2006, and the ’06 dataset is one of the three used this year. So you can compare the 2006 versions here and the 2013 evaluations over here. They perform quite similarly, which is a relief. You can also see that the state of the art has moved on a bit.

Audio Tempo Estimation

(Coming up with an overall estimate in beats-per-minute of the tempo of a recording. Presumably the evaluation uses clips in which the tempo doesn’t vary.)

We entered the same QM Tempo Tracker plugin, from Matthew Davies, as used in the Beat Tracking evaluation. It doesn’t quite suit the evaluation metric, because the plugin estimates tempo changes rather than the two fixed tempo estimates (higher and lower, to allow for beat-period “octave” errors) the task calls for—but it performed pretty well. Again, a related method was evaluated on the same dataset in MIREX ’06 with quite similar results.
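To illustrate the mismatch (a sketch of the idea only, not what the plugin or the evaluation actually does): given a single stream of beat times, one crude way to produce the pair of related tempo values the task expects is to take the median inter-beat interval and report the corresponding tempo alongside its metrical “octave”:

    import numpy as np

    def two_tempo_estimates(beat_times):
        """Derive a (slower, faster) pair of tempo estimates in BPM from beat times."""
        intervals = np.diff(np.asarray(beat_times, dtype=float))
        tempo = 60.0 / np.median(intervals)
        # pair it with its beat-period "octave"; the 100 BPM pivot is an arbitrary choice
        other = tempo * 2 if tempo < 100 else tempo / 2
        return tuple(sorted((tempo, other)))

    # two_tempo_estimates([0.0, 0.5, 1.0, 1.5, 2.0]) -> (60.0, 120.0)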

Audio Key Detection

(Estimating the overall key of the piece, insofar as that makes sense.)

We entered the QM Key Detector plugin for this task. This plugin, from Katy Noland back in 2007, is straightforward and fast, and is intended to detect key changes rather than the overall key.

To everyone’s surprise (including Katy’s) it scored better than any other entry, and indeed better than any entry from the past four years! The test dataset is pretty simplistic, but this is a nice result anyway.

Audio Melody Extraction

(Writing down the notes for the main melody from a recording which may have more than one instrument.)

Here we submitted my own cepstral pitch tracker plugin. This is not actually a melody extractor at all, but a monophonic pitch tracker with note estimation intended for solo singing. And it was developed as an exercise in test-driven development, rather than as a research outcome. It was not expected to do well. It actually did come out well in one dataset (solo vocal?), but it got weak results in the other three. I’m quite excited about having submitted something all-my-own-work to MIREX though.

Audio Chord Estimation

(Annotating the chord changes in a piece based on the recording.)

For this task we entered the Chordino plugin from Matthias Mauch. This plugin is much the same as the “Simple Chord Estimate” method that Matthias entered for MIREX in 2010; it got the same excellent results then and now for the dataset that was used in both years, and it also got the highest scores in the other dataset.

Structural Segmentation

(Dividing a song up into parts based on musical structure. The parts might correspond to verse, chorus, bridge, etc—though the segmenter is not required to label them, only to identify which ones have similar structural purpose.)

Two entries here. The Segmentino plugin from Matthias Mauch is fairly new, and is the only submission we made for which plugin code has not yet been released—we hope to remedy that soon. And we also entered Mark Levy’s QM Segmenter plugin, an older and more lightweight method.

The results for different test datasets are here, here, here and here. The evaluation metrics are slightly baffling (for me anyway). I have been advised to concentrate on

  • Frame pair clustering F-measure: how consistently frames that belong to the same segment type in the ground truth are also grouped together by the segmenter; in other words, getting matching segment types right (my rough sketch of this measure follows the list). Segmentino does very well here, except in one dataset for some reason. The QM Segmenter is not as good, but actually not so bad either.
  • Segment boundary recovery evaluation measures: how accurately the segmenters report the precise locations of segment boundaries. Neither of our submissions does this very well, although Segmentino does well on precision at 3 seconds, meaning the segment boundaries it does report are usually fairly close to the real ones.
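My rough understanding of the frame pair clustering measure (a sketch of the idea, not the evaluation’s own code): sample both the reference and the estimated segmentations on a common frame grid, consider every pair of frames, and ask how often the two annotations agree that a pair belongs to the same segment type.

    from itertools import combinations

    def pairwise_f_measure(ref_labels, est_labels):
        """Frame-pair clustering F-measure over two equal-length label sequences.

        Each element is the segment-type label of one analysis frame, e.g.
        sampled every 100 ms from the reference and estimated segmentations.
        """
        assert len(ref_labels) == len(est_labels)
        pairs = list(combinations(range(len(ref_labels)), 2))
        ref_pairs = {p for p in pairs if ref_labels[p[0]] == ref_labels[p[1]]}
        est_pairs = {p for p in pairs if est_labels[p[0]] == est_labels[p[1]]}
        both = len(ref_pairs & est_pairs)
        precision = both / len(est_pairs) if est_pairs else 0.0
        recall = both / len(ref_pairs) if ref_pairs else 0.0
        return (2 * precision * recall / (precision + recall)
                if precision + recall else 0.0)

    # pairwise_f_measure(list("AABBA"), list("AABBB")) -> 0.5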

This is a pretty good result—I think!