Performance Practice as Unanticipated Pit

Last Tuesday afternoon I went to the weekly meeting of my research group.

(But this isn’t a post about work.)

The weekly meetings have a rotating series of themes: this week’s was “Music Performance and Expression”. Accordingly, the first part of this meeting was a bit of a concert. To open the subject, Elaine Chew (piano) and Kat Agres (cello) played part of Brahms’ first cello sonata and talked about how players coordinate with one another.

As a lapsed cellist, though never of this standard, I found it surprisingly difficult to listen to, and I was surprised by just how difficult I found it. I thought about leaving, and then I decided to put on a nice plain face.

The music itself is troubling, but I don’t think that’s all of it. I listen to a fair bit of cello music and I know this sonata moderately well. I don’t think I’d have any problem with something that was only a cello performance, whatever the music.

It’s the analytical part I have trouble with. The closer the subject gets to analysis of how it is done, the more it raises difficult questions in me. I still work in a music-related field all day. Why did I stop playing any instruments? I used to enjoy ensemble performance. Should I be turning back toward it, or is this kind of sentimental response in me a hint that it was better to let it drift away?

Is music recommendation difficult?

My research department works on programming computers to analyse music.

In this field, researchers like to have some idea of whether a problem is naturally easy or difficult for humans.

For example, tapping along with the beat of a musical recording is usually easy, and it’s fairly instinctive—you don’t need much training to do it.

Identifying the instrument that is playing a solo section takes some context. (You need to learn what the instruments sound like.) But we seem well-equipped to do it once we’ve heard the possible instruments a few times.

Naming the key of a piece while listening to it is hard, or impossible, without training, but some listeners can do it easily when practised.

Tasks that a computer scientist might think of as “search problems”, such as identifying performances that are actually the same while disregarding background noise and other interference, tend to be difficult for humans no matter how much experience they have.

Ground truth

It matters to a researcher whether the problem they’re studying is easy or difficult for humans. They need to be able to judge how successful their methods are, and to do that they need something to compare them with. If a problem is straightforward for humans, evaluation is straightforward too: they can just see how closely their results match those from ordinary listeners.

But if it’s a problem that humans find difficult too, that won’t work. Being as good as a human isn’t such a great result if you’re trying to do something humans are no good at.

Researchers use the term “ground truth” to refer to something they can evaluate their work against. The idea, of course, is that the ground truth is known to be true, and computer methods are supposed to approach it more or less closely depending on how good they are. (The term comes from satellite image sensing, where the ground truth is literally the set of objects on the ground that the satellite is trying to detect.)
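To make the idea concrete, here is a minimal sketch of what evaluating against a ground truth looks like in practice. The task (key identification), the labels, and the “predictions” are all invented for illustration; a real evaluation would use an annotated dataset and a more forgiving scoring scheme.

```python
# Hypothetical ground-truth evaluation: compare a method's output
# against human annotations and report the fraction that agree.

def accuracy(predictions, ground_truth):
    """Fraction of items where the method matches the ground truth."""
    assert len(predictions) == len(ground_truth)
    matches = sum(p == t for p, t in zip(predictions, ground_truth))
    return matches / len(ground_truth)

# Human annotators labelled the key of each recording (the ground truth);
# an imaginary key-detection algorithm produced its own guesses.
ground_truth = ["C major", "A minor", "G major", "E minor"]
predictions  = ["C major", "A minor", "D major", "E minor"]

print(accuracy(predictions, ground_truth))  # 0.75
```

The catch, as above, is that this whole scheme presumes the human labels really are “true”. For tasks like key identification that is a reasonable assumption; for tasks humans disagree on, the ground truth itself becomes the hard part.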

Music recommendation

Can there be a human “ground truth” for music recommendation?

When it comes to suggesting music that a listener might like, based on the music they’ve apparently enjoyed in the past, should computers be trying to approach “human” reliability? And if not, how else should we decide whether a recommendation method is successful?

What do you think?

How good are you at recommending music to the people you know best?

Can a human recommend music to another human better than a computer ever could? Under what circumstances? What does “better” mean anyway?

Or should a computer be able to do better than a human? Why?

(I’m not looking for academically rigorous replies—I’m just trying to get more of an idea about the fuzzy human and emotional factors that research methods would have to contend with in practice.)