The Apple textbook

Apple have announced a free application for making textbooks, along with a push to provide commercial textbooks from existing publishers through their iBooks delivery medium.

It looks as if commercial books produced in this way will remain entirely restricted to reading on Apple hardware.  (There’s more flexibility for free books.)

Apple have had an increasing reach in educational settings for a while now, particularly in the US but also in many UK schools. But of their various pieces of educational wheeling and dealing, this has had by far the most publicity, and so far it hasn’t all been good.

This move has a slightly sinister feel. I can hope that consumers might prove less sanguine about locking their children into proprietary systems than they are about locking themselves in. Perhaps, in hindsight, we will see this as the point where the wider perception of Apple started to creep across the line from ubiquitous but helpful to insinuatingly controlling.

I doubt it, though. Textbooks are a mess, expensive and cumbersome for the reader and impenetrable as a market. This is a move to make textbook production simpler, make textbooks cheaper, and bring more of the visual and interactive mechanisms children are becoming used to into the formal school environment—while working with technology that many schools are already using and more are considering. What could seem better?

This combination of an immediately appealing proposition with an unprecedentedly strict control regime is an Apple hallmark. I regret the failure of more open systems to make more inroads into education—a failure I feel like I’ve played my own part in. Even so, we can try to resist a little by at least encouraging variety: encouraging our children to use every kind of system, to explore, to share, to build, and to understand that a computer is a complex human and social construction in itself rather than just an enabling object.

Why the proposed US copyright regulations should worry UK citizens

Referring to today’s 24-hour Wikipedia blackout in protest against proposed US copyright regulations, a colleague at work asks:

Could someone explain to me why wikipedia et al wouldn’t just move hosting to a different country if they have issue with US regulations… this blackout kind of implies that US law regulates the whole internet

A site like Wikipedia is unlikely to be in any position to relocate, given that it’s run in the US by a US-based foundation and has many US editors, but for those of us in the UK with more modest sites this is a legitimate question. Why worry?

You may in fact fall under US regulation

The proposed regulations divide the Internet into “domestic” sites, which are considered to be US-based and so to fall under US regulation, and “foreign” sites, which are all the others.

The definition of a “domestic” site is brief, but not without ambiguity: it’s a site with a domain name registered or assigned by a US registrar, or (if it has no domain name) a site hosted in the US.

I can’t tell whether that means names whose top-level domains have US-based sponsoring registries, which would include all .org, .com and .net domains, or only those whose registration was carried out by a US-based registrar. Either way it will cover quite a high proportion of sites currently being run outside the US. I’m also unsure whether names under non-US domains such as .co.uk might be considered domestic if they were registered through a US registrar.

Even if you don’t, these laws are intended to affect you

One of the “selling points” of this legislation is that it imposes effective controls on foreign sites as well as domestic ones.

Provisions are included to require infrastructure sites within the US, such as search engines, payment processors, or ad networks, to remove access to or stop working with any foreign sites deemed infringing. The US still operates much of the Internet’s infrastructure and is the biggest market for many of its services. This could be a big problem for many sites even in places that don’t formally consider the US to be the centre of the world.

There’s no effective comeback

The question of whether a foreign site is “infringing” or not would be determined in US courts, and the only way to contest that determination would be to argue it there. That might not be something anyone outside the US would wish to do.

The US has a record of targeting small-scale infringers

It’s tempting to think that none of this would apply to any of us unless we start running sites that intentionally host pirated material. Unfortunately, the US has a track record of aggressively pursuing action against individuals for relatively minor infringements (see 1, 2, 3, etc). It’s not unreasonable to fear that a general blog-hosting site in the UK, or any site that permits comments, or a research site that refers to audio or video media, could end up being harshly punished for something it never intended.

Afraid, or just concerned?

It’s possible that none of this would affect any of us, in practice.

But it’s also possible that these regulations might be more of a headache for people outside the US than for anyone within it, given their explicit provisions to deal with foreign sites, the lack of recourse for foreign site operators, and the concentration of Internet resources and facilities inside the US. If Americans are worried, we should at the very least be keeping a wary eye open as well.

See also

Update: I had missed this article on The Verge which answers and clarifies several of the things I had wondered about, and also makes the situation look even worse from a UK perspective.

Marmalade!

Seville oranges are in season, so it’s time to make marmalade. I love making marmalade (and fortunately I also like eating it, though I’m the only person in my household who does).

This is a straightforward, light, tangy sort. Have seven or eight jam jars washed and ready, and keep them hot in the oven while you prepare the marmalade.

Squeeze and shred a kilo of Seville oranges, keeping the pips out of the shredded peel.

Drop the peel and the squeezed juice into a big pan, with a couple of litres of water and the juice of a lemon. Bag up the pips in a muslin sheet and suspend it in the liquid.

(Safety-conscious man says: soak the muslin in water before you do this, so it doesn’t catch fire if you accidentally dangle it over a gas burner.)

Simmer for at least an hour, uncovered, until the peel is soft.

Pull out the hot muslin bag and squeeze its juices into the pan with tongs or something.

Then add a staggering amount of sugar—about two kilos—and stir it in.

Turn up the heat quite fiercely and boil until it’s “ready to set”. It could take ten minutes, or an hour or more.

This is where it often goes all wrong. You’re aiming to capture a state in which the mixture is fairly solid when cool but melts when heated, as on toast. The marmalade will always reach this point sooner or later, but it won’t stay there long before entering the subsequent “bouncy state”. You don’t want that.

To get the right set, regularly stick a spoon in the mixture, stir it and pull out some of the liquor, then let it cool a bit. If the cooled mixture wrinkles up when you push it with a finger, then it’s ready.

What usually happens to me is that, when I come to test it, I notice that the utensil I stuck in the mixture last time has gone wrinkly and gelatinous. That means the moment is passing, but it’s not too late if you test fairly often and turn off the heat as soon as you notice it.

Let it cool for a few minutes, then ladle into the jars you prepared, and seal.

Finally! A breakfast-related post on the Breakfast Post.

Is music recommendation difficult?

My research department works on programming computers to analyse music.

In this field, researchers like to have some idea of whether a problem is naturally easy or difficult for humans.

For example, tapping along with the beat of a musical recording is usually easy, and it’s fairly instinctive—you don’t need much training to do it.

Identifying the instrument that is playing a solo section takes some context. (You need to learn what the instruments sound like.) But we seem well-equipped to do it once we’ve heard the possible instruments a few times.

Naming the key of a piece while listening to it is hard, or impossible, without training, but some listeners can do it easily when practised.

Tasks that a computer scientist might think of as “search problems”, such as identifying performances that are actually the same while disregarding background noise and other interference, tend to be difficult for humans no matter how much experience they have.
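
Out of interest, here is roughly what the computer’s version of that search can look like. This is a toy sketch in the spirit of spectral-peak audio fingerprinting, not anything my group actually uses: the function names and every parameter in it are made up for illustration.

```python
# Toy "same performance?" test: reduce each recording to a set of
# coarse spectral-peak hashes, then measure how much the sets overlap.
import numpy as np
from scipy.signal import stft

def peak_hashes(samples, rate):
    """Reduce one recording (a 1-D array of samples) to a set of hashes."""
    _, _, spec = stft(samples, fs=rate, nperseg=1024)
    mag = np.abs(spec)
    # Keep only the loudest frequency bin in each time frame: strong
    # peaks tend to survive noise, compression and re-recording.
    peaks = mag.argmax(axis=0)
    # Hash pairs of successive peaks, coarsened so that small pitch
    # and tuning differences matter less.
    return {(int(peaks[t]) // 4, int(peaks[t + 1]) // 4)
            for t in range(len(peaks) - 1)}

def match_score(a, b):
    """Overlap of two hash sets: values near 1.0 suggest the same recording."""
    return len(a & b) / max(1, len(a | b))
```

Notice that nothing here “listens” for sameness at all: both recordings are boiled down to landmark sets and the rest is set intersection, which is why a computer scientist files this under search.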

Ground truth

It matters to a researcher whether the problem they’re studying is easy or difficult for humans. They need to be able to judge how successful their methods are, and to do that they need to have something to compare them with. If a problem is straightforward for humans, then there’s no problem—they can just see how closely their results match those from normal people.

But if it’s a problem that humans find difficult too, that won’t work. Being as good as a human isn’t such a great result if you’re trying to do something humans are no good at.

Researchers use the term “ground truth” to refer to something they can evaluate their work against. The idea, of course, is that the ground truth is known to be true, and computer methods are supposed to approach it more or less closely depending on how good they are. (The term comes from satellite image sensing, where the ground truth is literally the set of objects on the ground that the satellite is trying to detect.)
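
To make that concrete, here is a minimal sketch of a ground truth in use, returning to the beat-tapping example above: take human tap times as the truth, and give an algorithm credit for each estimated beat that lands within a small tolerance of a tap. The function and the 70 ms window are my own illustrative choices, not a fixed standard.

```python
def beat_f_measure(estimated, ground_truth, tolerance=0.07):
    """F-measure of estimated beat times (in seconds) against human taps."""
    matched = 0
    unused = list(ground_truth)
    for beat in estimated:
        # Each human tap may be claimed by at most one estimated beat.
        hit = next((tap for tap in unused if abs(tap - beat) <= tolerance), None)
        if hit is not None:
            matched += 1
            unused.remove(hit)
    precision = matched / len(estimated) if estimated else 0.0
    recall = matched / len(ground_truth) if ground_truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Taps every half second; the algorithm is slightly early on one beat
# and misses the last one entirely:
print(beat_f_measure([0.49, 1.0, 1.5], [0.5, 1.0, 1.5, 2.0]))  # ~0.86
```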

Music recommendation

Can there be a human “ground truth” for music recommendation?

When it comes to suggesting music that a listener might like, based on the music they’ve apparently enjoyed in the past, should computers be trying to approach “human” reliability? How else should we decide whether a recommendation method is successful or not?
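
One common workaround, and this sketch reflects my own assumptions rather than any particular published protocol, is to dodge the question: hide some of the tracks a listener is known to have liked, recommend from everything else, and count how many of the hidden favourites come back near the top of the list.

```python
def precision_at_k(recommended, held_out_likes, k=10):
    """Fraction of the top-k recommendations the listener is known to like."""
    hits = sum(1 for track in recommended[:k] if track in held_out_likes)
    return hits / k

# The listener's hidden likes are "b" and "x"; one of them resurfaces
# in the top three recommendations:
print(precision_at_k(["a", "b", "c"], {"b", "x"}, k=3))  # 0.33...
```

That only measures how well a system rediscovers what the listener already liked, of course, which is part of why the questions below still seem worth asking.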

What do you think?

How good are you at recommending music to the people you know best?

Can a human recommend music to another human better than a computer ever could? Under what circumstances? What does “better” mean anyway?

Or should a computer be able to do better than a human? Why?

(I’m not looking for academically rigorous replies—I’m just trying to get more of an idea about the fuzzy human and emotional factors that research methods would have to contend with in practice.)


End of the laptop line

I realised not long ago that, for my purposes, laptop PCs have stopped improving.

It didn’t happen recently: it just took me a long time to notice.  In fact I reckon it happened about five years ago.

My decline and theirs

[Image: Sony Vaio PCG-R600MX (2002)]

The first laptop I bought with my own money was a Vaio R600MX in 2002. It must have cost about £1400. It was a lovely piece of hardware design, with a beautiful case and keyboard and a clear 12.1″ screen, but it was noisy, slow even for its time, and short on battery life, and the screen was only 1024×768.

Still, it was easy to carry, and that’s the first thing I looked for in a laptop because I only used it when on the move. I stuck with similar criteria for years after that, up to a Vaio Z in 2010.

But the way I work has changed during the last five years or so: I lost the desk and desktop PC I had at home when the space was upgraded to a chest of drawers; I do less number-crunching than I used to, and rely less on the power of a desktop machine. I can “get away with” using a laptop more.

I now have most of my data online, so I no longer have any need to carry the same computer between work and home. And, having a family, I travel less. I haven’t left Europe since 2002, meaning that first Vaio is still the only computer I’ve ever tried to use on a plane.

So I now work on a laptop far more than I used to, but it doesn’t actually have to move about as much.

During the same five years, something bad has happened to laptops.

Screens have got shorter and shorter and gone all shiny. Keyboards have turned flat and featureless. The hardware has got faster, but quite a bit of that is down to solid-state drives—which you can retrofit in any machine. For the former me, an 11″ MacBook Air would have seemed like the ideal machine: to the current me, it starts to look a bit fiddly.

When all this eventually dawned on me, I made a couple of trips to the Queensway computer market and to eBay and discovered that a Thinkpad T60, made in 2007, now costs about £150.

Quadratisch, praktisch, gut

There are machines that do individual things better than the T60, but nothing else I’ve found yet is so consistently nice to use.

[Image: Thinkpad T60]

The 14″ non-widescreen high-resolution display! All those lines of text! Funny to think this was once commonplace.

A proper bumpy keyboard!  And a good one, if not quite your Sun Type-5.

Of course it’s not fast as such, but it was certainly fast “only” five years ago, and it’s good enough, especially with another 70 quid spent on an SSD, to feel broadly contemporary rather than totally antique.

(Software no longer seems to bloat as rapidly as it used to, either because I’ve been stuck in the same tasks and development environments for too long, or because the proliferation of lower-powered general-purpose hardware and the limits of Moore’s law have moderated other developers’ ambitions.)

Very solidly built; easy to find spare parts and replacement batteries; battery life isn’t bad. The styling is a bit divisive, but it appeals to me.

Finally, the T60 was the last Thinkpad that actually said IBM on it. I’m a sucker for that.

And a hundred and fifty quid!  Just writing it makes me want to go and buy another… although even at that price, I can’t currently afford to. Even so, it puts dramatically into perspective the amount I’ve spent on new hardware over the years.

Is this just because I’m becoming obsolete along with the computers I use? Is it an affectation that I’ll forget all about next time something really shiny turns up? Or is it a symptom of the PC age running out of appealing novelties?