# MIREX 2018 submissions

The 2018 edition of MIREX, the Music Information Retrieval Evaluation eXchange, was the sixth in succession for which we at the Centre for Digital Music submitted a set of Vamp audio analysis plugins for evaluation. For the third year in a row, the set of plugins we submitted was entirely unchanged — these are increasingly antique methods, but we have continued to submit them on the basis that they can at least provide a useful year-on-year baseline. It also gives me a good reason to look at the MIREX results and write this little summary post, although I’m a bit late with it this year, having missed the end of 2018 entirely!

For reference, the past five years’ posts can be found at: 2017, 2016, 2015, 2014, and 2013.

### Structural Segmentation

No results appear to have been published for this task in 2018; I don’t know why. Last time around, ours was the only entry. Maybe it was the only entry again, and since it was unchanged, there was no point in running the task.

### Multiple Fundamental Frequency Estimation and Tracking

After 2017’s feast with 14 entries, 2018 is a famine with only 3, two of which were ours; the third (which I can’t link to, because its abstract is missing) was restricted to a single subtask, in which it got reasonable results. Results pages are here and here.

### Audio Onset Detection

Almost as many entries as last time, and a new convolutional network from Axel Röbel et al disrupts the tidy sweep of Sebastian Böck’s group at the top of the results table. Our simpler methods are squarely at the bottom this time around. Röbel’s submission has a nice informative abstract which casts more light on the detailed result sets and is well worth a read. Results here.

### Audio Beat Tracking

Pure consolidation: all the 2018 entries are repeats from 2017, and each performs just as it did then (with the methods from Böck et al doing better than our plugins). Every year I say that this doesn’t feel like a solved problem, and it still doesn’t — the results we’re seeing here still don’t seem all that close to human performance, but perhaps there are misleading properties to the evaluation. Results here, here, here.

### Audio Tempo Estimation

This is a busier category, with a new dataset and a few new submissions. The new dataset is the intriguing part: every submission performs better on it than on the older one, except for our QM Tempo Tracker plugin, which performs much, much worse on the new dataset than the old!

I believe the new dataset is of electronic dance music, so it’s likely that much of it is high tempo, perhaps tripping our plugin into half-tempo octave errors. We could probe this next time by tweaking the submission protocol a little. Submissions are asked to output two tempo estimates, and the results report whether either of them was correct. Because our plugin only produces one estimate, we lazily submit half of that estimate as our second estimate (with a much lower salience score). But if our single estimate was actually half of the “true” value, as is plausible for fast music, we would see better scores from submitting double instead of half as the second estimate.
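In code terms the tweak would be tiny. Here’s a hypothetical sketch of the glue logic just described; the names and salience values are illustrative, not the actual submission code:

struct TempoSubmission {
    double estimate1, salience1;
    double estimate2, salience2;
};

TempoSubmission makeSubmission(double pluginEstimate)
{
    // What we submit at the moment: the plugin's single estimate, plus
    // half of it as a low-salience second guess
    return { pluginEstimate, 0.9, pluginEstimate / 2.0, 0.1 };

    // If the plugin is prone to half-tempo octave errors on fast music,
    // doubling should recover from them instead:
    // return { pluginEstimate, 0.9, pluginEstimate * 2.0, 0.1 };
}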

Results are here and here.

### Audio Key Detection

Some novelty here from a pair of template-based methods from the Universitat Autonoma de Barcelona, one attributed to Galin and Castells-Rufas and the other to Castells-Rufas and Galin. Their performance is not a million miles away from our own template-based key estimation plugin.

The strongest results appear to be from a neural network method from Korzeniowski et al at JKU, an updated version of one of last year’s better-performing submissions, an implementation of which can be found in the madmom library.

Results are here.

### Audio Chord Estimation

A lively (or daunting) category. A team from Fudan University in Shanghai, whence came two of the previous year’s strongest submissions, is back with another new method, an even stronger set of results, and once again a very readable abstract; and the JKU team have an updated model, just as in the key detection category, which also performs extremely impressively. Meanwhile a separate submission from JKU, due to Stefan Gasser and Franz Strasser, would have been at the very top had it been submitted a year earlier, but is now a little way behind. Convolutional neural networks are involved in all of these.

Our Chordino submission can still be described as creditable. Results can be found here.

# EasyMercurial v1.4

Today’s second post about a software release will be a bit less detailed than the first.

I’ve just coordinated a new release of EasyMercurial, a cross-platform user interface for the Mercurial version-control system. The previous release was in February 2013. It looks a bit like this.

EasyMercurial was written with a bit of academic funding from the SoundSoftware project, which ran from 2010 to 2014. The idea was to make something as simple as possible to teach and understand, and we believed Mercurial to be the simplest and safest version-control system to learn, so we based it on that. The concurrent rise of GitHub, and the resulting dominance of Git as the version-control software that everyone must learn, took the wind out of its sails. We eventually tacitly accepted that the v1.3 release made in 2013 was “finished”, and abandoned the proposed feature roadmap. (It’s open source, so if someone else wanted to maintain it, they could.)

EasyMercurial has continued to be a nice piece of software to use, and I use it myself on many projects, so when a recent change in the protocol support at the world’s biggest public Mercurial hosting site, Bitbucket, broke the Windows version of EasyMercurial 1.3, I didn’t mind having an excuse to update it. So now we have version 1.4.

This release doesn’t change a great deal. It updates the code to use the Qt5 toolkit and improves support for hi-dpi displays. I’ve dragged the packaging process up to date and re-packaged using current Qt, Mercurial (where bundled), and KDiff3 diff-merge code.

Mercurial usage itself has moved on in most quarters since EasyMercurial was conceived. EasyMercurial assumes that you’ll be using named branches for branching development, but these days using bookmarks for lightweight branching (more akin to Git branching) is more popular — EasyMercurial shows bookmarks but can’t do anything useful with them. Other features of modern Mercurial that could have been very helpful in a simple application like this, such as phases, are not supported at all.

Anyway: EasyMercurial v1.4. Free for Windows, Linux, and macOS. Get it here.

# Sonic Visualiser v3.2

Another release of Sonic Visualiser is out. This one, version 3.2, has some significant visible changes, in contrast to version 3.1 which was more behind-the-scenes.

The theme of this release could be said to be “oversampling” or “interpolation”.

### Waveform interpolation

Ever since the Early Days, the waveform layer in Sonic Visualiser has had one major limitation: you can’t zoom in any closer (horizontally) than one pixel per sample. Here’s what that looks like — this is the closest zoom available in v3.1 or earlier:

This isn’t such a big deal with a lower-resolution display, since you don’t usually want to interact with individual samples anyway (you can’t edit waveforms in Sonic Visualiser). It’s a bigger problem with hi-dpi and “retina” displays, on which individual pixels can’t always be made out.

Why this limitation? It allowed an integer ratio between samples and pixels to be used internally, which made it a bit easier to avoid rounding errors. It also sidestepped any awkward decisions about how, or whether, to show a signal in between the sample points.

(In a waveform editor like Audacity it is necessary to be able to interact with individual samples, so some decision has to be made about what to show between the sample points when zoomed in closely. Older versions of Audacity connected the sample points with straight lines, a decision which attracted criticism as misrepresenting how sampling works. More recent versions show sample points on separate stems without connecting lines.)

In Sonic Visualiser v3.2 it’s now possible to zoom closer than one pixel per sample, and we show the signal oversampled between the sample points using sinc interpolation. Here’s an example from the documentation, showing the case where the sample values are all zero but for a single sample with value 1:

The sample points are the little square dots, and the wiggly line passing through them is the interpolated signal. (The horizontal line is just the x axis.) The principle here is that, although there are infinitely many ways to join the dots, there is only one that is “smooth” enough to be expressible as a sum of sinusoids of no higher frequency than half the sampling rate — which is the prerequisite for reconstructing a signal sampled without aliasing. That’s what is shown here.

The above artificial example has a nice shape, but in most cases with real music the interpolated signal will not be very different from what you would get by just joining the dots. It’s mostly relevant in extreme cases. Let’s replace the single sample of value 1 above with a pair of consecutive samples of value 0.5:

Now we see that the interpolated signal has a peak between the two samples with a greater level than either sample. The peak sample value is not a safe indication of the peak level of the analogue signal.
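For the curious, the reconstruction can be sketched in a few lines of C++. This is the textbook unwindowed sinc sum, not the windowed and more careful interpolation a real implementation would use, but it reproduces the effect:

#include <cmath>
#include <cstdio>
#include <vector>

// Evaluate the band-limited reconstruction of a sampled signal at a
// fractional sample position t, by summing sinc functions centred on
// the sample points
double sincInterpolate(const std::vector<double> &samples, double t)
{
    const double pi = 3.14159265358979323846;
    double sum = 0.0;
    for (size_t n = 0; n < samples.size(); ++n) {
        double x = pi * (t - double(n));
        sum += samples[n] * (x == 0.0 ? 1.0 : std::sin(x) / x);
    }
    return sum;
}

int main()
{
    // Two consecutive samples of value 0.5 in a run of zeros, as above
    std::vector<double> samples(32, 0.0);
    samples[15] = samples[16] = 0.5;

    // Prints roughly 0.64: the reconstructed level midway between the
    // two samples is higher than either sample value
    printf("%f\n", sincInterpolate(samples, 15.5));
}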

Incidentally, another new feature in v3.2 is the ability to import audio data from a CSV or similar data file rather than only from standard audio formats. That made it much easier to set up the examples above.

### Spectrogram and spectrum oversampling

The other oversampling-related feature added in v3.2 appears in the spectrogram and spectrum layers. These layers now have an option to set an oversampling level, from the default “1x” up to “8x”.

This option increases the length of the short-time Fourier transform used to generate the spectrum, by padding the time-domain signal window with additional zero-valued samples before calculating the transform. This results in an oversampled frequency-domain output, with a higher visual resolution than would have been obtained from the original, un-zero-padded sample window. The result is a smoother spectrum in which the locations of peaks can be seen with a little more accuracy, somewhat like the waveform example above.
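In outline the calculation looks something like the following sketch, which uses a naive DFT for clarity where a real implementation would use an FFT, and assumes a Hann window as in the examples below:

#include <cmath>
#include <complex>
#include <vector>

// Window a block of time-domain samples, zero-pad it to "oversample"
// times its length, and return the transform of the longer block
std::vector<std::complex<double>>
oversampledSpectrum(const std::vector<double> &frame, size_t oversample)
{
    const double pi = 3.14159265358979323846;
    size_t n = frame.size();
    size_t padded = n * oversample;

    std::vector<double> block(padded, 0.0);
    for (size_t i = 0; i < n; ++i) {
        // Hann window, then copy; everything beyond n stays zero
        double hann = 0.5 - 0.5 * std::cos((2.0 * pi * i) / n);
        block[i] = frame[i] * hann;
    }

    std::vector<std::complex<double>> spectrum(padded / 2 + 1);
    for (size_t k = 0; k < spectrum.size(); ++k) {
        std::complex<double> sum(0.0, 0.0);
        for (size_t i = 0; i < padded; ++i) {
            double phase = (-2.0 * pi * k * i) / padded;
            sum += block[i] * std::complex<double>(std::cos(phase),
                                                   std::sin(phase));
        }
        spectrum[k] = sum;
    }
    return spectrum;
}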

This is nice in principle, but it can be deceiving.

In the case of waveform oversampling, there can be only one “matching” signal, given the sample points we have and the constraints of the sampling theorem. So we can oversample as much as we like, and all that happens is that we approximate the analogue signal more closely.

But in a short-time spectrum or spectrogram, we only use a small window of the original signal for each spectrum or spectrogram-column calculation. There is a tradeoff in the choice of window size (a longer window gives better frequency discrimination at the expense of time discrimination) but the window always exposes only a small part of the original signal, unless that signal is extremely short. Zero-padding and using a longer transform oversamples the output to make it smoother, but it obviously uses no extra information to do it — it still has no access to samples that were not in the original window. A higher-resolution output without any more information at the input can appear more effective at discriminating between frequencies than it really is.

Here’s an example. The signal consists of a mixture of two sine waves one tone apart (440 and 493.9 Hz). A log-log spectrum (i.e. log frequency on x axis, log magnitude on y) with an 8192-point short-time Fourier transform looks like this:

A log-log spectrum with a 1024-point STFT looks like this¹:

The 1024-sample input isn’t long enough to discriminate between the two frequencies — they’re close enough that it’s necessary to “hear” a longer fragment than this in order to determine that there are two frequencies at all².

Add 8x oversampling to that last example, and it looks like this:

This is very smooth and looks super detailed, and indeed we can use it to read the peak value with more accuracy — but the peak is deceptive, because it is still merging the two frequency components. In fact most of the detail here consists of the frequency response of the 1024-point windowing function used to shape the time-domain window (it’s a Hann window in this case).

Also, in the case of peak frequencies, Sonic Visualiser might already provide a way to get the same information more accurately — its peak-frequency identification in both spectrum and spectrogram views uses phase unwrapping instead of spectrum interpolation to estimate the frequencies of stable harmonics, which gives very good results if the sound is indeed harmonic and stable.
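The idea behind that technique, in outline: within a bin that contains a stable sinusoid, the phase advances at a predictable rate from one analysis frame to the next, so the deviation from the predicted phase tells us how far the true frequency sits from the bin centre. Here is a sketch of the general phase-vocoder method, not Sonic Visualiser’s actual code:

#include <cmath>

// Estimate the frequency of a stable component in FFT bin "bin", given
// the bin's phase (in radians) in two analysis frames "hop" samples apart
double estimateFrequency(int bin, int fftSize, int hop,
                         double phase1, double phase2, double sampleRate)
{
    const double pi = 3.14159265358979323846;
    double binOmega = (2.0 * pi * bin) / fftSize;  // bin centre, rad/sample
    double expected = phase1 + binOmega * hop;     // predicted phase in frame 2
    double error = phase2 - expected;
    error -= 2.0 * pi * std::floor((error + pi) / (2.0 * pi)); // wrap to [-pi, pi)
    double omega = binOmega + error / hop;         // corrected, rad/sample
    return (omega * sampleRate) / (2.0 * pi);      // Hz
}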

Finally, there’s a limitation in Sonic Visualiser’s implementation of this oversampling feature that eliminates one potential use for it, which is to choose the length of the Fourier transform in order to align bin frequencies with known or expected frequency components of the signal. We can’t generally do that here, since Sonic Visualiser still only supports a few fixed multiples of a power-of-two window size.

In conclusion: interesting if you know what you’re looking at, but use with caution.

¹ Notice that we are connecting sample points in the spectrum with straight lines here — the same thing I characterised as a bad idea in the discussion of waveforms above. I think this is more forgivable here because the short-time transform output is not a sampled version of an original signal spectrum, but it’s still a bit icky.

² This is not exactly true, but it works for this example.

# Rubber Band Library v1.8.2

I have finally managed to get together all the bits that go into a release of the Rubber Band library, and so have just released version 1.8.2.

The Rubber Band library is a software library for time-stretching and pitch-shifting of audio, particularly music audio. That means that it takes a recording of music and adjusts it so that it plays at a different speed or at a different pitch, and if desired, it can do that by changing the speed and pitch “live” as the music plays. This is impossible to do perfectly: essentially you are asking software to recreate what the music would have sounded like if the same musicians had played it faster, slower, or in a different key, and there just isn’t enough information in a recording to do that. It changes the sound and is absolutely not a reversible transformation. But Rubber Band does a pretty nice job. For anyone interested, I wrote a page (here) with a technical summary of how it does it.
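For a flavour of how it is used from C++, here is a minimal offline-mode sketch, written from memory of the v1.8 API; consult the header documentation for the real contract, which among other things allows the audio to be fed in smaller blocks:

#include <rubberband/RubberBandStretcher.h>

#include <vector>

using RubberBand::RubberBandStretcher;
using std::vector;

// Stretch a mono recording to 4/3 of its length, shifted up a semitone
vector<float> stretch(const vector<float> &in, size_t sampleRate)
{
    RubberBandStretcher s(sampleRate, 1,
                          RubberBandStretcher::OptionProcessOffline,
                          4.0 / 3.0,    // time ratio (output over input duration)
                          1.0594631);   // pitch scale (2^(1/12): up one semitone)

    const float *inPtrs[] = { in.data() };
    s.study(inPtrs, in.size(), true);    // offline mode: inspect the audio first,
    s.process(inPtrs, in.size(), true);  // then feed it again for processing

    int n = s.available();
    vector<float> out(n > 0 ? size_t(n) : 0, 0.f);
    float *outPtrs[] = { out.data() };
    s.retrieve(outPtrs, out.size());
    return out;
}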

I originally wrote this library between 2005 and 2007, with a v1.0 release at the end of 2007. My aim was to provide a useful tool for open source GPL-licensed audio applications on Linux, like Ardour or Rosegarden, with a commercial license as an afterthought. As so often happens, I seriously underestimated the work involved in getting the library from “working” (a few weeks of evening and weekend coding) to ready to use in production applications (two years).

It has now been almost six years since the last Rubber Band release, and since this one is just a bugfix release, we can say the library is pretty much finished. I would love to have the time and mental capacity for a version 2: there are many, many things I would now do differently. (Sadly, the first thing is that I wouldn’t rely on my own ears for basic testing any more — in the intervening decade my hearing has deteriorated a lot, and it amazes me to think that I used to accept it as somehow authoritative.)

In spite of all the things I would change, I think this latest release of version 1 is pretty good. It’s not the state of the art, but it is very effective, and is in use right now in professional audio applications across the globe. I hope it can be useful to you somehow.

# Repoint: A manager for checkouts of third-party source code dependencies

I’ve just tagged v1.0 of Repoint, a tool for managing library source code in a development project. Conceptually it sits somewhere between Mercurial/Git submodules and a package manager like npm. It is intended for use with languages or environments that don’t have a favoured package manager, or in situations where the dependent libraries themselves aren’t aware that they are being package-managed. Essentially, situations where you want, or need, to be a bit hands-off from any actual package manager. I use it for projects in C++ and SML among other things.

Like npm, Bundler, Composer etc., Repoint refers to a project spec file that you provide that lists the libraries you want to bring in to your project directory (and which are brought in to the project directory, not installed to a central location). Like them, it creates a lock file to record the versions that were actually installed, which you can commit for repeatable builds. But unlike npm et al, all Repoint actually does is clone from the libraries’ upstream repository URLs into a subdirectory of the project directory, just as happens with submodules, and then report accurately on their status compared with their upstream repositories later.

The expected deployment of Repoint consists of copying the Repoint files into the project directory, committing them along with everything else, and running Repoint from there, in the manner of a configure script — so that developers generally don’t have to install it either. It’s portable and it works the same on Linux, macOS, or Windows. Things are not always quite that simple, but most of the time they’re close.

At its simplest, Repoint just checks stuff out from Git or whatever for you, which doesn’t look very exciting. An example on Windows:

Simple though Repoint’s basic usage is, it can run things pretty rigorously across its three supported version-control systems (git, hg, svn), it gets a lot of annoying corner cases right, and it is solid, reliable, and well-tested across platforms. The README has more documentation, including of some more advanced features.

### Is this of any use to me?

Repoint might be relevant to your project if all of the following apply:

• You are developing with a programming language or environment that has no obvious single answer to the “what package manager should I use?” question; and
• Your code project depends on one or more external libraries that are published in source form through public version-control URLs; and
• You can’t assume that a person compiling your code has those libraries installed already; and
• You don’t want to copy the libraries into your own version-control repo to form a Giant Monorepo; and
• Most of your dependent libraries do not similarly depend on other libraries (Repoint doesn’t support recursive dependencies at all).

Beyond mere relevance, Repoint might be actively useful to your project if any of the following also apply:

• The libraries you’re using are published through a mixture of version-control systems, e.g. some use Git but others Mercurial or Subversion; or
• The libraries you’re using and, possibly, your own project might change from one version-control system to another at some point in the future.

See the README for more caveats and general documentation.

### Example

The biggest current example of a project using Repoint is Sonic Visualiser. If you check out its code from Github or from the SoundSoftware code site and run its configure script, it will call out to repoint install to get the necessary dependencies. (On platforms that don’t use the configure script, you have to run Repoint yourself.)

Note that if you download a Sonic Visualiser source code tarball, there is no reference to Repoint in it and the Repoint script is never run — Repoint is very much an active-developer tool, and it includes an archive function that bundles up all the dependent libraries into a tarball so that people building or deploying the end result aren’t burdened with any additional utilities to use.

I also use Repoint in various smaller projects. If you’re browsing around looking at them, note that it wasn’t originally called Repoint — its working title in earlier versions was vext and I haven’t quite finished switching the repos over. Those earlier versions work fine, of course; they just use different words.

# What does a convolutional neural net actually do when you run it?

Convolutional neural networks (or convnets or CNNs) are a staple of “deep learning”. There are many tutorials available that describe what they do, either mathematically or via quasi-mystical appeals to intuition, and introduce how to train and use them, often with image classification examples.

This post has a narrower focus. As a programmer, I am happy processing floating-point data in languages like C++, but I’m more likely to be building applications with pre-trained models than training new ones. Say I have a pre-trained convnet image classifier, and I use it to carry out a single classification of one image (“tell me whether this shows a pig, cow, or sheep”) — what does that actually mean, in terms of code?

I’ll go through an example in the form of a small hand-written C++ convolutional net that contains only the “execution” code, no training logic. It will be very much an illustrative implementation, not production code. It will use pre-trained data adapted from a similar model in Keras, on which more later.

The specific example I am using is a typical image classification one: identifying which of five types of flower is visible in a small colour image (inspired by this tutorial). All the code I’m describing can be found in this Github repository.

### Networks, layers, weights, training

A network in this context is a pipeline of functions through which data is passed, with the output of each function going directly to the input of the next one. Each of these functions is known as a layer.

Our example will begin by supplying the image data as input to the first layer, and end with the final layer returning a set of five numbers, one for each kind of flower the network knows about, indicating how likely the network thinks it is that the image shows that kind of flower.

So to run one classification, we just have to get the image data into the right format and pass it through each layer function in turn, and out pops the network’s best guess.

The layers themselves perform some mathematical operation on the data. Some of them do so by making use of a set of other values, provided separately, which they might multiply with the inputs in some way (in which case they are known as weights) or add to the return values (known as biases).

Coming up with suitable weight and bias values for these layers is what training is for. To train a network, the weights and biases for each such layer are randomly initialised, then the network is repeatedly run to provide classifications of known inputs; each of the network’s guesses is compared with the known class of the input, and the weights and biases of the network are updated depending on how accurate the classification was. Eventually, with luck, they will converge on some reliable values. Training is both difficult and tedious, so here we will happily leave it to machine-learning researchers.

These trainable layers are then typically sandwiched by fixed layers that provide non-linearities (also known as activation functions) and various data-manipulation bits and bobs like max-pooling. More on these later.

The layers are pure functions: a layer with a given set of weights and biases will always produce the same output for a given input. This is not always true within recurrent networks, which model time-varying data by maintaining state in the layers, but it is true of classic convolutional nets.

### Our example network

This small network consists of four rounds of convolution + max-pooling layers, then two dense layers.

I constructed and trained this using Keras (in Python) based on a “flower pictures” dataset downloaded from the Tensorflow site, with 5 labelled classes of flowers. My Github repository contains a script obtain-data.sh which downloads and prepares the images, and a Python program with-keras/train.py which trains this network on them and exports the trained weights into a C++ file. The network isn’t a terribly good classifier, but that doesn’t bother me here.

I then re-implemented the model in C++ without using a machine-learning library. Here’s what the pipeline looks like. This can be found in the file flower.cpp in the repository. All of the functions called here are layer functions defined elsewhere in the same file, which I’ll go through in a moment. The variables with names beginning weights_ or biases_ are the trained values exported from Keras. Their definitions can be found in weights.cpp.

vector<float>
classify(const vector<vector<vector<float>>> &image)
{
    vector<vector<vector<float>>> t;

    // Zero-pad by half the kernel extent before each convolution
    // (1 each side here, for 3x3 kernels); see the zero-padding
    // layer below
    t = zeroPad(image, 1, 1);
    t = convolve(t, weights_firstConv, biases_firstConv);
    t = activation(t, "relu");
    t = maxPool(t, 2, 2);

    t = zeroPad(t, 1, 1);
    t = convolve(t, weights_secondConv, biases_secondConv);
    t = activation(t, "relu");
    t = maxPool(t, 2, 2);

    t = zeroPad(t, 1, 1);
    t = convolve(t, weights_thirdConv, biases_thirdConv);
    t = activation(t, "relu");
    t = maxPool(t, 2, 2);

    t = zeroPad(t, 1, 1);
    t = convolve(t, weights_fourthConv, biases_fourthConv);
    t = activation(t, "relu");
    t = maxPool(t, 2, 2);

    vector<float> flat = flatten(t);

    flat = dense(flat, weights_firstDense, biases_firstDense);
    flat = activation(flat, "relu");

    flat = dense(flat, weights_labeller, biases_labeller);
    flat = activation(flat, "softmax");

    return flat;
}


Many functions are used for more than one layer — for example we have a single convolve function used for four different layers, with different weights, bias values, and input shapes each time. This reuse is possible because layer functions don’t retain any state from one call to the next.

You can see that we pass the original image as input to the first layer, but subsequently we just take the return value from each function and pass it to the next one. These values have types that we are expressing as nested vectors of floats. They are all varieties of tensor, on which let me digress for a moment:

### Tensors

For our purposes a tensor is a multidimensional array of numbers, in our case floats, of which vectors and matrices are special cases. A tensor has a shape, which we can write as a list of sizes, and the length of that list is the rank. (The word dimension can be ambiguous here and I’m going to try to avoid it… apart from that one time I used it a moment ago…)

Examples:

• A matrix is a rank-2 tensor whose shape has two values, height and width. The matrix $\begin{bmatrix}1.0&3.0&5.0\\2.0&4.0&6.0\end{bmatrix}$ has shape [2, 3], if we are giving the height first.
• A C++ vector of floats is a rank-1 tensor, whose shape is a list of one value, the number of elements in the vector. The vector { 1.0, 2.0, 3.0 } has the shape [3].
• A single number can also be considered a tensor, of rank 0, with shape [].

All of the values passed to and returned from our network layer functions are tensors of various shapes. For example, a colour image will be represented as a tensor of rank 3, as it has height, width, and a number of colour channels.

In this code, we are storing tensors using C++ vectors (for rank 1), vectors of vectors (for rank 2), and so on. This is transparent, makes indexing easy, and allows us to see the rank of each tensor directly from its type.
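For instance, a 128×128 RGB image of the kind this model takes as input could be declared and zero-initialised like this (an illustration, not a line from the repository):

#include <vector>

using std::vector;

// Rank-3 tensor of shape [height, width, channels]
vector<vector<vector<float>>> image(
    128, vector<vector<float>>(
        128, vector<float>(3, 0.f)));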

Production C++ frameworks don’t do it this way — they typically store tensors as values interleaved into a single big array, wrapped up in a tensor class that knows how to index into it. With such a design, unlike our code, all tensors will have the same C++ class type regardless of their rank.

There is a practical issue with the ordering of tensor indices. Knowing the shape of a tensor is not enough: we also have to know which index is which. For example, an RGB image that is 128 pixels square has shape [128, 128, 3] if we index the height, width, and colour channels in that order, or [3, 128, 128] if we separate out the individual colour channels and index them first. Unfortunately, both layouts are in common use. Keras exposes the choice through a flag and uses the names channels_last for the former layout, historically the default in Tensorflow as it often runs faster on CPUs, and channels_first for the latter, used by many systems as it parallelises better with GPUs. The code in this example uses channels_last ordering.
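With an interleaved single-array representation, the layout choice comes down to how the flat index is computed, as in this hypothetical pair of helpers:

#include <cstddef>

// Flat index of pixel (y, x), channel c, in a [height, width, channels]
// (channels_last) layout
size_t indexChannelsLast(size_t y, size_t x, size_t c,
                         size_t width, size_t channels)
{
    return (y * width + x) * channels + c;
}

// The same pixel in a [channels, height, width] (channels_first) layout
size_t indexChannelsFirst(size_t y, size_t x, size_t c,
                          size_t height, size_t width)
{
    return (c * height + y) * width + x;
}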

This definition of a tensor is good enough for us, but in mathematics a tensor is not just an array of numbers but an object that sits in an algebraic space and can be manipulated in ways intrinsic to that space. Our data structures make no attempt to capture this, just as a C++ vector makes no effort to capture the properties of a mathematical vector space.

### Layers used in our model

Each layer function takes a tensor as input, which I refer to within the function as in. Trained layers also take further tensors containing the weights and biases for the layer.

#### Convolution layer

Here’s the nub of this one, implemented in the convolve function.

for (size_t k = 0; k < nkernels; ++k) {
    for (size_t y = 0; y < out_height; ++y) {
        for (size_t x = 0; x < out_width; ++x) {
            for (size_t c = 0; c < depth; ++c) {
                for (size_t ky = 0; ky < kernel_height; ++ky) {
                    for (size_t kx = 0; kx < kernel_width; ++kx) {
                        out[y][x][k] +=
                            weights[ky][kx][c][k] *
                            in[y + ky][x + kx][c];
                    }
                }
            }
            out[y][x][k] += biases[k];
        }
    }
}


Six nested for-loops! That’s a lot of brute force.

The weights for this layer, supplied in a rank-4 tensor, define a set of convolution kernels (or filters). Each kernel is a rank-3 tensor, width-height-depth. For each pixel in the input, and for each channel in the colour depth, the kernel values for that channel are multiplied by the surrounding pixel values, depending on the width and height of the kernel, and summed into a single value which appears at the same location in the output. We therefore get an output tensor containing a matrix of (almost) the same size as the input image, for each of the kernels.

(Note for audio programmers: FFT-based fast convolution is not generally used here, as it works out slower for small kernels.)

Vaguely speaking, this has the effect of transforming the input into a space determined by how much each pixel’s surroundings have in common with each of the learned kernel patterns, which presumably capture things like whether pixels have a common colour or whether there is common activity between a pixel and neighbouring pixels in a particular direction due to edges or lines in the image.

Usually the early convolution layers in an image classifier will explode the amount of data being passed through the network. In our small model, the first convolution layer takes in a 128×128 image with 3 colour channels, and returns a 128×128 output for each of 32 learned kernels: nearly ten times as much data, which will then be reduced gradually through later layers. But fewer weight values are needed to describe a convolution layer compared with the dense layers we’ll see later in our network, so the size of the trained layer (what you might call the “code size”) is relatively smaller.

I said above that the output matrices were “almost” the same size as the input. They have to be a bit smaller, as otherwise the kernel would overrun the edges. To allow for this, we precede the convolution layer with a…

#### Zero-padding layer

This can be found in the zeroPad function. It creates a new tensor filled with zeros and copies the input into it:

for (size_t y = 0; y < in_height; ++y) {
    for (size_t x = 0; x < in_width; ++x) {
        for (size_t c = 0; c < depth; ++c) {
            // copy each input value into the zero-initialised output,
            // offset by the border size
            out[y + pad_y][x + pad_x][c] = in[y][x][c];
        }
    }
}


All this does is add a zero-valued border around all four edges of the image, in each channel, in order to make the image sufficiently bigger to allow for the extent of the convolution kernel.

Most real-world implementations of convolution layers have an option to pad the input when they do the convolution, avoiding the need for an explicit zero-padding step. In Keras for example the convolution layers have a padding parameter that can be either valid (no extra padding) or same (provide padding so the output and input sizes match). But I’m trying to keep the individual layer functions minimal, so I’m happy to separate this out.

#### Activation layer

Again, this is something often carried out in the trained layers themselves, but I’ve kept it separate to simplify those.

This introduces a non-linearity by applying some simple mathematical function to each value (independently) in the tensor passed to it.

Historically, for networks without convolution layers, the activation function has usually been some kind of sigmoid function mapping a linear input onto an S-shaped curve, retaining small differences in a critical area of the input range and flattening them out elsewhere.
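The classic example is the logistic function $\sigma(x) = \frac{1}{1 + e^{-x}}$, which squashes the whole real line into the range (0, 1).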

The activation function following a convolution layer is more likely to be a simple rectifier. This gives us the most disappointing acronym in all of machine learning: ReLU, which stands for “rectified linear unit” and means nothing more exciting than

if (x < 0.0) {
    x = 0.0;
}


applied to each value in the tensor.

We have a different activation function right at the end of the network: softmax. This is a normalising function used to produce final classification estimates that resemble probabilities summing to 1:

float sum = 0.f;
for (size_t i = 0; i < sz; ++i) {
    out[i] = exp(out[i]);
    sum += out[i];
}
if (sum != 0.f) {
    for (size_t i = 0; i < sz; ++i) {
        out[i] /= sum;
    }
}


#### Max pooling layer

The initial convolution layer increases the amount of data in the network, and subsequent layers then reduce this into a smaller number of hopefully more meaningful higher-level values. Max pooling is one way this reduction happens. It reduces the resolution of its input in the height and width axes, by taking only the maximum of each NxM block of pixels, for some N and M which in our network are both always 2.

From the function maxPool:

for (size_t y = 0; y < out_height; ++y) {
    for (size_t x = 0; x < out_width; ++x) {
        for (size_t i = 0; i < pool_y; ++i) {
            for (size_t j = 0; j < pool_x; ++j) {
                for (size_t c = 0; c < depth; ++c) {
                    float value = in[y * pool_y + i][x * pool_x + j][c];
                    out[y][x][c] = max(out[y][x][c], value);
                }
            }
        }
    }
}


#### Flatten layer

Interleaves a rank-3 tensor into a rank-1 tensor, otherwise known as a single vector.

size_t i = 0;
for (size_t y = 0; y < height; ++y) {
    for (size_t x = 0; x < width; ++x) {
        for (size_t c = 0; c < depth; ++c) {
            out[i++] = in[y][x][c];
        }
    }
}


Why? Because that’s what the following dense layer expects as its input. By this point we are saying that we have used as much of the structure in the input as we’re going to use, to produce a set of values that we now treat as individual features rather than as a structure.

With a library that represents tensors using an interleaved array, this will be a no-op, apart from changing the nominal rank of the tensor.

#### Dense layer

A dense or fully-connected layer is an old-school neural network layer. It consists of a single matrix multiplication, multiplying the input vector by the matrix of learned weights, then a vector addition of bias values. From our dense function:

vector<float> out(out_size, 0.f);

for (size_t i = 0; i < in_size; ++i) {
    for (size_t j = 0; j < out_size; ++j) {
        out[j] += weights[i][j] * in[i];
    }
}

for (size_t j = 0; j < out_size; ++j) {
    out[j] += biases[j];
}


Our network has two of these layers, the second of which produces the final prediction values.

That’s pretty much it for the code. There is a bit of boilerplate around each function, and some extra code to load in an image file using libpng, but that’s about all.

There is one other kind of layer that appears in the original Keras model: a dropout layer. This doesn’t appear in our code because it is only used while training. It discards a certain proportion of the values passed to it as input, something that can help to make the following layer more robust to error when trained.

## Further notes

### Precision and repeatability

Training a network is a process that involves randomness. A given model architecture can produce quite different results on separate training runs.

This is not the case when running a trained network to make a classification or prediction. Two implementations of a network that both use the same basic arithmetic datatypes (e.g. 32-bit floats) should produce the same results when given the same set of trained weights and biases. If you are trying to reproduce a network’s behaviour using a different library or framework, and you have the same trained weights to use with both implementations, and your results look only “sort of” similar, then they’re probably wrong.

The differences between implementations in terms of channel ordering and indexing can be problematic in this respect. Consider the convolution function with its seven nested for-loops. Load any of those weights with the wrong index or in the wrong order and of course it won’t work, and there are a lot of possible permutations there. There seems to be more than one opinion about how kernel weights should be indexed, for example, and it’s easy to get them upside-down if you’re converting from one framework to another.

### Should I use code like this in production?

Probably not!

First and most trivially, this isn’t a very efficient arrangement of layers. Separating out the zero-padding and activation functions means more memory allocation and copying.

Second, there are many ways to speed up the expensive brute-force layers (that is, the convolution and dense layers) dramatically even on the CPU, using vectorisation, careful cache and memory management, and faster matrix and convolution algorithms. If you don’t take advantage of these, you’re wasting time for no purpose; if you do, you’re repeating work other people are doing in libraries already.

I adapted the example network to use the tiny-dnn header-only C++ library, and it ran roughly twice as fast as the code above. (I’m a sucker for libraries named tiny-something, even if, as in this case, they are no longer all that tiny.) That version of the example code can be found in the with-tiny-dnn directory in the repo. It’s ugly, because I had to load and convert the weights from a system that uses channels-last format into one that expects channels-first. But I only had to write that code once, and it wouldn’t have been necessary if I’d trained the model in a channels-first layout. I might also have overlooked some simpler way to do it.

Third, this approach is fragile in the face of changes to the model architecture, which have to be duplicated exactly across the model implementation and the separately-loaded weights. It would be preferable to load the model architecture into your program, rather than load the weights into a hand-written architecture. To this end it seems to make sense to use a library that supports importing and exporting models automatically.

All this said, I find it highly liberating to realise that one could just sketch out these few lines of code and have a working result.