Month: April 2026

Dein gutes Herz wird manchen Schmerz in diesen Grüften leiden ("Your good heart will suffer many a pain in these vaults")

As usual, I’m late in highlighting this, but, last weekend, I was in T, the New York Times style magazine, attempting a vastly over-generalized, three-hour-tour précis of a thousand years of classical music. (Many thanks to reporter Mitchell Kuga and fact-checker Justin Simon for hammering my ramblings into concise coherence. Mitchell had to make sense of an interview done while I was doped up on DayQuil! That’s hazard pay.)

Part of the interview was the fool’s errand of picking pieces to stand in for entire musical epochs. (Luckily, I am a fool.) I think I came up with a decent enough list, but there is nothing like that exercise to rub one’s nose in the overwhelming dead-white-guy nature of classical-music history. So that nature, and efforts to change it, have been on my mind as of late, along with some recent news about one of those efforts that went rather sideways. And it reminded me of an idea that originated in the world of economics.

(Anyone who’s followed this blog since its early days might wonder why I’m so interested in economics. I think it’s because, more than almost any other field of inquiry, economics has a tension between theoretical elegance and stubborn real-world messiness that’s both profound and intractable, to the point that you will often see even genius-level practitioners tripping over it. It’s good practice for thinking about other subjects where the hazards of that tension might not be so obvious. Music, for instance.)

Anyway, this particular idea has its origins in monetary policy: the ways that government and quasi-governmental institutions—the Federal Reserve, the Bank of England, the European Central Bank, and so on—try to exert some control over market cycles and economic conditions. (You can skip this history and jump ahead three paragraphs to the payoff, but I think the history is fun.) In the early 1970s, the Bank of England suddenly found itself at sea. Since 1944, the value of the British pound had been dependent on that of the US dollar, but, in 1971, the Nixon administration announced that dollars would no longer be convertible to gold, making the dollar a floating currency, its value allowed to vary according to the free market. The UK followed suit a year later. With the pound now subject to the vagaries of the money markets, the Bank of England needed a new framework for managing the British economy.

What the Bank turned to was the supply of money. Analysis of data from the previous decade had shown that, when interest rates were higher, the amount of money in the system grew more slowly; and when the amount of money grew slowly, inflation (a pressing problem in 1970s Britain) was kept in check. So the Bank decided to focus on the money supply, adjusting interest rates to keep the money supply from growing too rapidly.

Except it didn’t work. While Edward Heath’s Conservative government pursued aggressive economic growth policies throughout 1971-72, the Bank of England kept its eye on the money supply and, sure enough, when it started expanding in 1973, hiked up interest rates to forestall a spike in inflation. But the broad money supply kept getting bigger and bigger. In a 1975 paper (reprinted here), Charles Goodhart, then an economist at the Bank, tried to make sense of it all. Money-supply calculations that had been stable and predictable under the previous, dollar-backed monetary scheme turned out to be not so stable when currencies were allowed to float. And, Goodhart admitted, focusing solely on the Bank-set interest rate rather than the differences between that rate and the yields financial institutions could obtain with other investments or in other markets missed an ongoing speculative fever that had kept the economy hot.

But the most enduring takeaway from Goodhart’s analysis was one he only hinted at: that, in turning the state of the money supply from something that the Bank measured into something the Bank pursued as a quantified goal, the economic wires somehow got crossed. He somewhat jestingly dubbed it “Goodhart’s Law”:

any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes

Goodhart’s Law has now spread far and wide, into a variety of disciplines. The form in which I’ve most often seen it was, as far as I can tell, first formulated by accountant and academic Keith Hoskin:

every measure which becomes a target becomes a bad measure

I don’t know that I agree with the blanket value judgement in that version—measures-turned-targets are still telling you something. But in slightly modified form—

when a measure becomes a target, it is no longer measuring quite the same thing that it measured before

—I think that Goodhart’s Law is absolutely and universally true.

#

Which brings us to Florence Price. If you’ve been in and around the classical-music world over the past fifteen years or so, you know Price and her story: a Black American late-Romantic composer of the first half of the 20th century who enjoyed some modest early success, then, in large part because of her race and gender, fell into comparative obscurity in mainstream classical-music circles, and whose legacy was posthumously jump-started by the discovery of a cache of unpublished manuscripts in 2009.

If I ran the world, the main applied lesson from the Florence Price “re-discovery” would be this: there are a thousand more Florence Prices out there right now—potential composers with exceptional talent and imagination—who, like Price, have not had opportunities to utilize and develop that talent because of who they are, or what they look like, or where they were born, or their socio-economic status; and legacy classical-music institutions—ensembles, publishers, schools—ought to be establishing programs and protocols and methods to better find that talent and provide those opportunities. Instead, by and large, my sense is that the main reaction among organizations has been a retroactive canonization of Price and her music, adding her to the pool of composers likely to appear on classical-music programs.

This is not an unwelcome development, but it is a remedy that, in comparison, minimally disrupts the classical-music status quo. (I wrote about this vis-à-vis superhero narratives a while ago.) It also, it turns out, brings Goodhart’s Law into play. Because how do you measure whether an ensemble or a presenting organization is meeting the goal of diversifying its programming? The most obvious way is to keep a tally of whose music you’re programming, and how often. But if you start to conflate the tally and the goal, the tally is no longer measuring exactly what it was measuring before you started trying to change the tally. This might seem like such a subtle shift that it wouldn’t ever matter in the real world. And I think that, for the most part, it doesn’t. Except when it does.

This year’s installment of that wonderfully frothy tradition, the waltz-soused Neujahrskonzert of the Vienna Philharmonic, was guest-conducted by Yannick Nézet-Séguin, music director of the Orchestre Métropolitain in Montreal, the Philadelphia Orchestra, and the Metropolitan Opera. Nézet-Séguin has been a prominent champion of Price’s music, including conducting recordings of her three surviving symphonies, so it’s not surprising he was eager to have Price on his Viennese program as well. She had written a “Rainbow Waltz” in 1939, a gentle, charmingly wayward wisp for piano solo.

Here’s the “Rainbow Waltz” that was played in Vienna, as arranged by Wolfgang Dörner:

I will not buy this record; it is scratched. I mean, Price’s music is in there somewhere, but, then again: wie bitte?

And that’s before it came out that, originally, the Vienna Philharmonic, at Nézet-Séguin’s behest, had commissioned Valerie Coleman, another Black female composer, to arrange Price’s waltz, only to reject Coleman’s version and turn to Dörner.

Now, I don’t know what the powers-that-be at the Vienna Philharmonic were thinking in vetoing Coleman’s arrangement, or what Dörner was thinking in making his arrangement (though Hannah Edgar has some valuable clues). I don’t know what Nézet-Séguin was thinking in going along with it. There are a number of things going on in this story, many of them not great. But among them, I think, is an example of Goodhart’s Law biting, and biting hard. To introduce new audiences to Price’s music is fine, but to do so in the novel and not-immediately-relevant context of a Vienna New Year’s Concert is, on some level, ticking off a box, and if you’re not careful, the box itself starts to crowd out its own reason for being ticked, in ways small and big—in this case, to the point of overriding common sense.

The story is an extreme example. Extreme examples are instructive, though. A story like this doesn’t mean that you stop measuring. But it should be a hint that, as the Bank of England discovered, turning a measurement into a target not only does not necessarily change the system, it often invites gaming the system. If a first-order measurement of classical programming diversity is seemingly improving, but the classical-music industry is still ill-serving under-represented composers, as in the Price example, it’s not a sign that the effort is futile, but that other measurements—and changes—need to be made. Systems are holistic in ways that measurements aren’t, and a lot of mischief can happen in the gap.

A few years ago, flutist and data scientist (not an unprecedented combination, incidentally) Mansi Shah surveyed that gap, and some tactics for not falling into it, concluding:

Maybe instead of asking, “Are we there yet?” and solely focusing on the outcomes (as overdue as they might be), we need to be asking, “What are we working toward, and how are we getting there?”

That’s completely sensible, right? It’s also harder. It means engaging with root causes, not just seemingly easy fixes; it means putting in time, work, and money, without the short-term satisfaction or even the guarantee of a nice, clean, positive data point to show off to critics or donors or board members or the like. That’s the challenge inherent to Goodhart’s Law: you have to measure to maintain accountability, but you can’t conflate accountability with just a measurement.

It is, it turns out, easy to miss the forest of change for the trees of data. That connoisseur of forests, Henry David Thoreau, had some sense of the hazard. “Of course it is of no use to direct our steps to the woods, if they do not carry us thither,” he wrote. “I am alarmed when it happens that I have walked a mile into the woods bodily, without getting there in spirit.”