Yes, Soho the Dog HQ is moving again, this time back to my original, once-familiar, probably-not-all-that-familiar-anymore hometown. Come mid-October, at least I won’t have to mail-order poppy-seed hot dog buns. (The move is occasioned by my wife—still and always the brains of this operation—landing a new job at Chicago Public Media. I myself will be doing… something or other.)
This, of course, will also mark a fond, albeit oddly liminal, farewell to Washington, D.C. It’s strange to move somewhere new, just start to find your footing, and then watch as a pandemic shuts everything down and your sense of place shrinks to something not much larger than your house. But even if my experience of public, monumental D.C. was curtailed, I will still miss the merits of quotidian, lived-in D.C.—the subtleties of neighborhood, the piecemeal assembly of a roster of favorite takeout, the surprising amount of primordial nature still tucked in the odd corners of the district, the serendipitous counterpoint of friends and acquaintances, both tied-down and transient in a peculiarly-D.C. way.
(On the other hand: the weather here in August? I, um, appreciate it, but I won’t miss it.)
The philosopher Jean-Luc Nancy died last month. I came to Nancy’s work late, only after meeting filmmaker and fellow Radcliffe Fellow Phillip Warnell, who collaborated with Nancy on three wonderful-in-every-sense films: The Flying Proletarian, Ming of Harlem, and Outlandish: Strange Foreign Bodies (the latter leaving a small but crucial mark on my Horror of Fang Rock analysis). In short order, I read everything of Nancy’s in English I could find. The effect was rather like meeting someone who had thought about a lot of the same things I had always thought about, but had gotten closer than I ever would to their essence while seeming to expend far less effort. But what I found most compelling about Nancy’s writing and thinking was how he, tacitly or otherwise, acknowledged that language could only get you so far in expressing an idea, that there was always going to be a gap—and how he actually leveraged that gap in both analytical and expressive ways.
A compact but consistently effervescent collection of Nancy’s writing on music has been translated into English by Charlotte Mandell under the title Listening. I think about this passage a lot.
It is not a hearer, then, who listens, and it matters little whether or not he is musical. Listening is musical when it is music that listens to itself. It returns to itself, it reminds itself of itself, and it feels itself as resonance itself: a relationship to self deprived, stripped of all egoism and all ipseity. Not “itself,” or the other, or identity, or difference, but alteration and variation, the modulation of the present that changes it in expectation of its own eternity, always imminent and always deferred, since it is not in any time. Music is the art of making the outside of time return to every time, making return to every moment the beginning that listens to itself beginning and beginning again. In resonance the inexhaustible return of eternity is played—and listened to.
Miura Tamaki was 52 years old when, in 1936, she finally sang Madama Butterfly in her native Japan. The soprano had already performed the role of Cio-Cio-San, she would claim, some two thousand times in Europe and the Americas—surely a record for typecasting—but Japanese musical audiences, understandably, had been slow to accept Illica and Puccini’s orientalist romance, preferring to keep it at a figurative or literal distance. A 1930 performance by the Japan Opera Association went so far as to thoroughly alter and even cut many of the exotic, quasi-Japanese touches in the score and libretto, downplaying the opera’s attempted evocation of Japanese atmosphere in favor of an unassuming realism. A reviewer in the Hōchi Shinbun approved: “Now we can watch Madam Butterfly in peace.”
Miura’s approach was different. At the outset of her career, she had written that Madama Butterfly’s version of Japanese culture was “not merely extremely strange but rather infuriating.” Nevertheless, her Western celebrity had been predicated on the legitimacy she lent to Madama Butterfly, her nationality providing a veneer of authenticity, her authority bolstered by the approval of Puccini himself, who, upon meeting Miura, had supposedly pronounced her the embodiment of the Cio-Cio-San in his imagination. (Mari Yoshihara has explored the irony of Western audiences judging Miura, a career-minded divorcée who defied Japanese expectations for women, to be a quintessence of Japanese femininity.) Miura’s first Butterfly in Japan, performed in her own Japanese translation, promised “a new standard for the opera”: Cio-Cio-San’s faithfulness was not born of naïveté, but of honor and commitment, her tragedy not a consequence of cultural misunderstanding, but of American treachery. It was an interpretation, as Arthur Groos has written, “noteworthy for its reflection of the conservative, and ultimately xenophobic turn of Japanese society in the 1930s”. Once the Second World War broke out, Miura observed Japanese government proscriptions against Western music, putting her Butterfly on the shelf.
After the war, however, Miura returned. Despite her rapidly failing health, she gave a farewell recital in Tokyo in 1946, full of European repertoire—Schubert’s Die schöne Müllerin (again, in her own Japanese translation), Home, Sweet Home. She then made a series of recordings for radio broadcast, a sort of apostrophized review of her life and career. She performed the Butterfly arias one last time, and once again reminisced about her meeting with Puccini, re-emphasizing her role as a bridge between East and West. When she died, only a few weeks after those final recording sessions, press coverage was extensive. Japanese newspapers that once chastised Miura for her rejection of traditional female duties had followed her struggle and decline in sympathetic detail, tragic parallels with her most famous role, both personal and national, seeming to play out in real life. The story had shifted once more.
In a fascinating study, Kunio Hara has analyzed this complicated re-negotiation of Miura’s significance and reputation in Japan:
[T]he equation of Japan with the dying Miura, and by extension with Cio-Cio-San, also put the Japanese people squarely in the position of the victim. Such a stance prevented Miura and Japanese commentators from dwelling much on the atrocities their countrymen had perpetrated outside of and within Japan toward non-Japanese populations and each other during the war. The overlapping tales of Miura, Cio-Cio-San, and Japan, therefore, captured the inwardly-focused sense of shame, regret, and victimization shared by many Japanese during the early stages of the Occupation. At the same time, Miura’s renewed engagement with Madama Butterfly symbolized a renewed embrace toward cosmopolitanism and a commitment toward the cultivation of peaceful cultural life.
Stories that get told and re-told are, in essence, about power. Butterfly, with its overtones of colonialism and paternalism, its overly neat division of the world into irreconcilable East and West, has always been a particularly fraught example. But such stories are, always, shorthands—for the assumptions and desires and dreams of whoever tells them. They’re assertions of narrative control over a lived experience. And when, in discussions of the cultural and political connections and contradictions among Japan and the United States and Europe, Butterfly pops up (and it almost always pops up), it’s worth asking over what, exactly, the storyteller is trying to assert control. This is especially true when the story pops up in a particularly and inappropriately formidable context.
As a curious (in more ways than one) 12-year-old, one of my favorite books was a hefty piece of speculative fiction called The Third World War: August 1985, by General Sir John Hackett, and “other top-ranking NATO generals & advisors,” as the cover read. First published in 1978, The Third World War imagined a Soviet invasion of western Europe that escalated into a global conflict between the Sino-Soviet bloc and the American-European NATO alliance. My dad had a copy. A lot of people in the late 70s and early 80s had a copy: it was a bestseller. (An expanded and revised quasi-sequel soon followed.) Hackett, an Australian-born, high-ranking British Army officer, intended the book to be a cautionary tale; he hoped to build public opinion and political support for strengthening NATO’s conventional forces, as opposed to what he saw as a European over-reliance on nuclear deterrence. So it was propaganda. But it was well-researched, well-informed propaganda—Hackett’s co-authors included four other retired British military officers, the British diplomat Sir Bernard Burrows, and Norman Macrae, a journalist, futurist, and long-time fixture at The Economist. The result was authoritatively-detailed, acronym-laden catnip to a technologically-minded and slightly obsessive 12-year-old boy.
I hadn’t thought about The Third World War in years, but after I started reading a new volume that covers a lot of the same ground on 21st-century terms, I revisited it, along with this article by Jeffrey Michaels of King’s College London, which provides more depth on Hackett’s book, his thinking around it, and the process by which it came to be. For one thing, the fictional war’s denouement changed. Hackett’s original conception avoided any nuclear attacks on cities, thinking that such weapons would only be used tactically in sea or space battles. But, in the published version, the Soviets, facing defeat at the hands of American and European conventional forces, detonate a nuclear warhead over the British city of Birmingham. NATO responds with its own nuclear attack, destroying the then-Soviet city of Minsk, leading to a political crisis in the USSR that hastens the end of the war. This plot twist—a little surprising, given Hackett’s goal of boosting NATO’s conventional fighting capacity—did make for a dramatic climax, with the attack on Birmingham being perhaps the book’s most famous set-piece. (The chapter, actually written by Royal Engineers officer David W. Williams, drew heavily on a then still-secret Ministry of Defence study of the effects of such an attack.) But the imagined nuclear exchange also attracted some of the book’s most consistent criticisms, with reviewers unconvinced that either the Soviets or NATO would show such restraint in their choice of targets—smaller industrial cities rather than political or cultural centers—or in deliberately avoiding further nuclear escalation.
In an early draft of the chapter describing the NATO retaliation (quoted in Michaels’ paper), Sir Bernard Burrows offered more explanation. For NATO to bomb Moscow or Leningrad would have been, in Burrows’ opinion, too provocative a response. “An important provincial city was required,” Burrows wrote, “far enough from the capital so that no direct physical effects would be felt there, but near enough for immediate political repercussions on the seat of government.” Minsk ticked off those boxes, but Burrows also explained the city’s possible “deeper significance, of which those in the allied targeting section may or may not have been aware”: Minsk, Burrows noted, was where Lee Harvey Oswald (“or his look-a-like”) had lived after defecting to the Soviet Union, before he returned to the US and, eventually, assassinated John F. Kennedy. The idea prompted Burrows to some psychological speculation:
A study might be of interest, hopefully rather ephemeral, on the secret motivation of those who select nuclear targets. Was the choice of Nagasaki for the second American bomb in World War II influenced by atavistic memories that this was the home of Cho-Cho-San in Madame Butterfly, and by the unconscious desire to wipe from the map a monument to the shameful behavior of Lieutenant Pinkerton, USN?
If a random person raised this question, I might find it of passing, albeit rather tasteless, interest. If a former UK permanent representative to NATO and inaugural member of NATO’s Nuclear Planning Group raises the question, I’m intrigued. Did Burrows suspect (or know) something that he couldn’t or wouldn’t divulge?
Here’s the thing: nobody really knows how and why Nagasaki became a target for the first atomic bombs at the end of World War II. It’s well-known that Nagasaki wasn’t even the primary target on August 9, 1945; the B-29 Bockscar, piloted by Major Charles Sweeney, had planned to bomb the city of Kokura, but, finding it unexpectedly obscured by either clouds or a deliberate smokescreen (or both), flew to Nagasaki instead. But, even as a secondary target, Nagasaki only appeared on the target list late, and under somewhat mysterious circumstances.
Officials didn’t even start thinking about where to drop the bombs until April of 1945, when a Target Committee first met at the Pentagon. (The committee was nominally chaired by the Manhattan Project’s military head, General Leslie Groves, but this first meeting was largely run by Groves’ right-hand man, Brigadier General Thomas Farrell. Among the civilian scientists on the committee were William Penney, who ran the British atomic bomb program, and the legendary polymath John von Neumann, who would later formulate the massive-proliferation, ICBM-based deterrence framework known as Mutual Assured Destruction.) The Committee’s criteria were particular: a large enough city, ideally untouched by conventional bombs (the better to both demonstrate and gauge the atomic bomb’s impact and effects), but also with enough military value to (in theory) justify what was anticipated to be a substantial loss of civilian lives. Those requirements left the committee in something of a race against Army Air Force General Curtis LeMay, whose firebombing strategy had been leveling Japanese cities, one horrific firestorm after another, “with the prime purpose in mind,” as the minutes of the Committee’s first meeting noted, “of not leaving one stone lying on another”. At the end of the meeting, the Committee had decided that seventeen Japanese targets were worthy of further study, with Nagasaki well down the list.
Two weeks later, in May of 1945, the Target Committee reconvened in J. Robert Oppenheimer’s office at Los Alamos, and settled on five targets “which the Air Forces would be willing to reserve for our use”: Kyoto, Hiroshima, Yokohama, Kokura, and Niigata. The minutes from this meeting make clear that the Committee was eager to maximize all of the bomb’s effects, emphasizing those targets that would yield the largest physical and psychological impact:
Kyoto has the advantage of the people being more highly intelligent and hence better able to appreciate the significance of the weapon. Hiroshima has the advantage of being such a size and with possible focusing from nearby mountains that a large fraction of the city may be destroyed.
The Target Committee’s preferences stayed consistent: Kyoto first, Hiroshima second, and Nagasaki unmentioned, left for LeMay to (presumably) raze to the ground. But those preferences were overruled by the Secretary of War, Henry Stimson, who demanded that Kyoto be struck from the list. Stimson, born in 1867, epitomized the old guard of American politics, having previously served as Secretary of War in the administration of William Howard Taft. A long-time fixture in New York and national Republican politics, Stimson had been appointed by FDR largely to forestall opposition to the US entering the war; despite his age (and FDR’s sometime wish to replace him), Stimson had prosecuted the war with vigor, but, by the spring and summer of 1945, was beginning to wear down. What drove Stimson’s adamancy regarding Kyoto remains unclear. In making his case to other officials and to the new president, Harry Truman, Stimson’s emphasis was, ironically, the same as the Target Committee’s: morale. Destroy Kyoto and its shrines and treasures, Stimson argued, and the Japanese people would be turned against the US even before any American occupation of the country.
But historians have long wondered if there were other, psychological dimensions to Stimson’s decision. The internet is full of mentions of Stimson honeymooning in Kyoto, which doesn’t seem to be true, but he did spend several days in the city in 1926, when he was serving as Governor-General of the Philippines. Perhaps more important was Stimson’s own morally and politically compromised position—he was uneasy about the increased reliance on massive aerial bombardment, apprehensive at the prospect of the atomic bomb, and desperate to exert some measure of authority over a wave of destruction that he now saw, with good reason, as proceeding beyond his control. (General Groves recalled Stimson objecting not only to Kyoto, but to all of the Target Committee’s recommendations, a sign of Stimson’s discomfort with the thin military justification for the largely-civilian targets.) Once the bomb was ready, it was going to be used, to end the war, to justify the cost of its development, to forestall a Soviet invasion and occupation of Japan, to assert American power as the world moved into profoundly unsettled postwar geopolitics—or all of the above, or none of the above.
(Another story: after the war, a narrative spread that the true savior of Kyoto was Langdon Warner, a Harvard University art historian and expert in Chinese and Japanese art, with the rumor eventually finding publication in the Asahi Shimbun. Despite Warner’s insistence that he had nothing to do with the Kyoto decision, the Japanese accepted the story as true, to the point of erecting memorials to Warner in Kyoto, Nara, and Kamakura. They interpreted Warner’s denials as admirable modesty.)
For much of June and July of 1945, Stimson and military officials jockeyed for administrative advantage, the paper trail still keeping Kyoto as a target, Stimson strenuously objecting at every turn. (Groves even attempted a little guerrilla office warfare, slyly keeping Kyoto on the list of cities off-limits to conventional bombing—while pointedly refraining from adding a substitute target.) But it was only at the end of July, at the pivotal US-UK-Soviet conference at Potsdam, that Stimson finally was able to corner Truman and convince him to preserve Kyoto. General George Marshall, the Army Chief of Staff, had his deputy, General Thomas Handy, circulate a draft order for the use of the atomic bomb. (This and the following documents can be found in the declassified Correspondence (“Top Secret”) of the Manhattan Engineer District, 1942-1946.)
And Nagasaki. That’s Groves’ handwriting, though whether he was making the correction on his own or based on consensus is not clear. (When the new information was collated into a memo by Col. John Stone, a war planning staffer on the Joint Chiefs of Staff who was part of the Potsdam entourage, Stone indicated that the plan was “drafted by Groves,” but kept the verbs in the actual plan scrupulously passive.) There’s another interesting wrinkle here, preserved in cable traffic between Handy and General Carl Spaatz, the commander of Army Air Forces in the Pacific. Spaatz to Handy:
(Spaatz was in error about Hiroshima, incidentally; a dozen American POWs died in the bombing and its aftermath, including at least two who survived the blast but were beaten to death by angry residents.) Handy replied that the plan and target priority remained unchanged—a reply edited and marked up by at least two other hands, including Groves’ distinctive looping script:
None of this made it into the final directive, but the reappearance of Osaka as a possible target—along with Amagasaki and Omuta, neither of which were on even the Target Committee’s initial list—is striking evidence of how fluid and ad hoc the process had been and had become. That such targets were being specified as “much less suitable” is also interesting; I wonder if the notoriously stubborn Groves substituted Nagasaki for Kyoto as a last-ditch effort to get Stimson to budge, by showing how poorly other available targets fulfilled the Target Committee’s considered requirements.
Then again, I can’t help thinking that all these competing and conflicting justifications—the scientists’ wish for a controlled and successful experiment, Groves’ wish for bureaucratic advantage, Truman’s wish to end the war on American terms, Stimson’s wish for postwar goodwill (or an unsullied memory)—were just substitutes for the more primal instinct I sense in Stimson’s mindset: the wish for some illusion that they were still steering the machine of death they had built. In comparison with all this history, Burrows’ passing notion—attributing the target selection to some sort of opera-induced subconscious shame—seems even more frivolous. But human beings made the decision to develop a weapon that managed to surpass even World War II’s level of prolific, mechanized carnage, a weapon with the capacity to wipe out the species, then made the decision to use it. Even the most plausible reasoning can feel inadequate to the resulting horror.
I did some due diligence and searched for evidence that one or more of the players in this drama had some particular predilection for opera. Groves’ wife was an amateur singer and a music lover, but it was an interest Groves himself very much did not share. Stimson was a Long Island neighbor of Otto Kahn, the financier and philanthropist who was president and chairman of the Metropolitan Opera, but their interactions seem to have been limited to various non-musical board and committee meetings. I did, however, learn that, early in his career, Burrows had been posted to Egypt and, while there, became close friends with the novelist Lawrence Durrell, then working as the British Embassy’s press attaché. Durrell later made Egypt the backdrop for his most celebrated work, The Alexandria Quartet, a series of four novels telling the same story of doomed love from a variety of viewpoints. When released, the books were notorious for their frank eroticism and emotionally heightened, even overblown language. But the novels are really all about narrative: how human beings tell stories in order to plumb, shape, and justify the past. In the first novel, Justine, the narrator, a struggling writer, joins a gnostic sect run by an Egyptian friend, Balthazar. But when the narrator compliments him on his mystical discourse, Balthazar skeptically demurs: “We are all hunting for rational reasons for believing in the absurd.”
In the audience at Tamaki Miura’s last recital was another writer, Yukio Mishima, his considerable fame still in its early development, his militaristic turn to nationalist politics and his shocking ritual suicide still many years in the future. Mishima transmuted the experience of the concert into a short story that further enveloped Puccini’s opera in shifting and competing narratives. In “Butterfly,” it is Kiyohara, a Japanese army veteran and widower, who attends Miura’s performance. “It was almost a magical feeling to hear a chirping bird coming out of this body weakened by disease and adorned like a table at a wedding banquet,” Mishima writes. “The clarity of her tone was almost a case of possession; she seemed to have opened her mouth in spite of herself for it to come out.” Kiyohara is so caught up in Miura’s legend that he starts to hear, in his mind, her Butterfly intrude on her performance of Die schöne Müllerin. He reinterprets the opera on the terms of his own solitude.
When she sang Un bel dì, vedremo, you could see the color of the sea appear in her eyes. From a crude cardboard ocean, authentic ocean spirits disembarked. Madame Butterfly’s eyes were no longer black like those of the Japanese. By dint of watching the sea, day after day, they had ended up taking on its color. But, as if from a premonition, just before the tragedy of the last act, where even her face, too, could have a sea-complexion, she glanced ecstatically at the blinding glare of the sea in broad daylight. At a ship that brings tragedy. It was Madame Butterfly’s transparent azure eyes that attracted it. What she was waiting for was not Pinkerton. In reality, it was tragedy. It was death.
Kiyohara is inspired to write to a former love, Hanako, a woman he wished to marry, but who married someone else while Kiyohara was away on wartime duty. In his letter, Kiyohara recalls their first meeting, some twenty years earlier, when they found themselves alone in a box at La Scala, watching a younger Miura as Cio-Cio-San. But the letter is a fiction. When Kiyohara runs into Hanako at a party some months later, she reminds him that she would have to be several decades older in order to have been at the opera with him. “You are full of imagination,” she teases him. “That’s wonderful. You could be a novelist.” At the end of the party, the host invites Kiyohara and Hanako to admire the panorama from his terrace; bombs have flattened the city and cleared the view all the way to the ocean. Kiyohara looks, and sees that an American ship has docked in the harbor.
(Fun fact (and a bit of foreshadowing): that’s the same amount of money the DuPont Company took for their fee for building and operating the nuclear reactor at Hanford, Washington that produced plutonium for the first atomic bombs.)
I’ve been thinking a lot as of late about classical music, as a category and an industry, and how it’s like comic books—specifically, how it traffics in a conception and manipulation of historical time that comic books also display. An esoteric connection, maybe, but one which might hold some implications for the reckoning that classical music always seems to need, and that always seems never to come.
In 1962, the Italian semiotician Umberto Eco published an article later translated into English under the title “The Myth of Superman.” The prompt for Eco’s analysis was the tension in superhero comics between the mythical stature of the superhero and the need for a supply of ever-new adventures. Myths, Eco notes, are usually closed narratives: the twelve labors of Hercules, for instance, are the same in every telling of the story. On the other hand, comic books require novelty on a weekly or monthly basis. Every issue, Superman is presented with a new problem to solve, which he does. “Consequently,” Eco writes, “the character has made a gesture which is inscribed in his past and which weighs on his future. He has taken a step toward death”.
To act, then, for Superman, as for any other character (or for each of us), means to “consume” himself. Now, Superman cannot “consume” himself, since a myth is “inconsumable”….
The hero of the classical myth became “inconsumable” precisely because he was already “consumed” in some exemplary action. Superman, then, must remain “inconsumable” and at the same time be “consumed” according to the ways of everyday life. He possesses the characteristics of timeless myth, but is accepted only because his activities take place in our human everyday world of time. The narrative paradox that Superman’s scriptwriters resolve somehow, even without being aware of it, demands a paradoxical solution with regard to time.
Hence the article’s original Italian title: “Il mito di Superman e la dissoluzione del tempo.” In order for Superman—or any other superhero—to retain a mythical aura but still remain the central character in a series of novel narratives, comic books dissolve boundaries that might delineate a strict chronology. Eco again:
The stories develop in a kind of oneiric climate—of which the reader is not aware at all—where what has happened before and what has happened after appear extremely hazy. The narrator picks up the strand of the event again and again, as if he had forgotten to say something and wanted to add details to what had already been said.
The word for this that we use nowadays is “continuity,” and, if you have any familiarity with comic-book fandom, you know that continuity is a big deal. And the peculiarity of that continuity—immutable and mutable at the same time—is recapitulated in classical music. Classical music has a similar demand for both myth and novelty, after all. And the “timelessness” of the canon rather depends on a certain temporal haziness not unlike that which Eco cites. If you reframe classical music’s supposed devotion to nostalgia as a cultivation of a particular continuity, and realize that said continuity is, in fact, selectively and purposefully discontinuous, a lot about the nature of classical music in history and the 21st century makes more sense.
The question of audition repertoire, for example, swirled about Twitter not too long ago.
It is a sign of classical music’s fraught relationship with the current moment that accusing an audition panel of gatekeeping can be something besides redundant. But what might look like laziness—you should learn this piece because your teacher did, and their teacher did, and their teacher did—is, in fact, an unusually pure statement of classical-music continuity. That connection across generations is, in large part, what classical music is. Classical musicians learn a Mozart concerto, or other standard repertoire, for the same reasons a comic-book fan learns that Superman is from Krypton, or that Bruce Wayne’s parents were killed, or that Peter Parker was bitten by a radioactive spider. It’s considered baseline knowledge for making sense of the rest of the corpus.
Which is not to say that it’s good! One of the problems with classical music is that, whether at point of origin or over time, a lot of misogyny and white supremacy has been baked into the continuity. Even if your relationship to the continuity is in definite spite of that inequality, it’s all too easy to inadvertently uphold the status quo. And here’s where I think the example of comic-book continuity can at least provide an entry into change. Because comic-book continuity changes all the time. That deliberately vague sense of chronology that Eco highlights—and that classical music, for all its arrow-of-progress narrative tropes, eminently shares—allows for all kinds of adjustments to the continuity.
What would that look like in a classical-music context? Exhibit A, I think, is Florence Price. The canonization of Price’s music is one of the most fascinating developments in recent classical-music history with regard to what it tells us about classical-music continuity. The story has become almost as standardized as any superhero origin: Price, the first Black woman to have her work performed by a major orchestra, was stymied over the remainder of her career, and fell into semi-obscurity, until the chance discovery of a cache of her musical manuscripts revived interest in her and her music. That trope—a forgotten composer, re-discovered—is, as many commentators have pointed out, a convenient fiction, eliding the particulars of who exactly forgot Florence Price, and why. But the story is crucial with respect to Price’s entry into some version of the classical canon. It provides a way of situating Price in the existing classical-music continuity. Her music was neither old nor new, stylistically and formally in the Romantic/nationalist vein that so much of the classical canon epitomizes, and yet somehow (within the trope, at any rate) disconnected from the time and circumstances of its creation—oneiric, in Eco’s sense—in a way that makes it easier, not harder, to edit her back into the timeline, reinterpreting the continuity in a more optimistic and generous way.
Listening to her, I have the uncanny sense of hearing the symphonies and operas that women and African-Americans were all but barred from writing during the Romantic heyday, when the busts on the piano were being carved. She seems to speak from an imaginary past, from an alternative history of an America that lived up to its stated ideals.
At a certain point Supergirl appears on the scene. She is Superman’s cousin, and she, too, escaped from the destruction of Krypton. All of the events concerning Superman are retold in one way or another in order to account for the presence of this new character (… the narrator goes back in time to tell in how many and in which cases she, of whom nothing was said, participated during those many adventures where we saw Superman alone involved).
Classical music might be defined by its relationship to the past, but the parameters of the relationship—and the characters in the narrative—are at least somewhat malleable.
This is, to be sure, incremental change. Price’s music has gained a foothold because, as much as it disrupts classical-music continuity, it reinforces it, too. The necessity of being able to elide with that continuity is its own form of exclusion. But it hints, I think, at ways to start to topple the discriminatory pillars holding up much of the classical-music apparatus without sacrificing that sense of continuity.
Eventually, though, some continuities need to come to a conclusion. It’s important to note that Eco’s ultimate rationale for unpacking the Superman mythos was political—to explain why, for all his power, Superman never dismantled the underlying structure of the crime-ridden society he patrolled so diligently:
It is strange that Superman, devoting himself to good deeds, spends enormous amounts of energy organizing benefit performances in order to collect money for orphans and indigents. The paradoxical waste of means (the same energy could be employed to produce directly riches or to modify radically larger situations) never ceases to astound the reader who sees Superman forever employed in parochial performances. As evil assumes only the form of an offense to private property, good is represented only as charity. This simple equivalent is sufficient to characterize Superman’s moral world. In fact, we realize that Superman is obliged to continue his activities in the sphere of small and infinitesimal modifications of the immediately visible for the same motives noted in regard to the static nature of his plots: each general modification would draw the world, and Superman with it, toward final consumption.
Fair warning: continuities are hardy beasts. In the mid-1980s, DC, the publishers of Superman, tried to do a wholesale clean-up of their continuity with a famous limited series called Crisis on Infinite Earths, an attempt to collapse the multiple universes that had sprung up in DC comics as a result of proliferating continuity. It didn’t stick. A pair of early 2000s sequels partially undid that narrative, and then, in 2015, a limited series called Convergence finished the job:
I have been doing my part to uphold the classical-music hegemony by practicing Johann Sebastian Bach under lockdown—practically a cliché at this point, but when has that ever stopped me before? My repertoire of choice has been the A minor English Suite (BWV 807), a set that, if not a white whale for me, was at least a pale fish, my fingers and brain never reconciling to it despite my fondness for the music. It took the experience of the pandemic—the disruption, the isolation, the forced change—in order for me to realize why. The Bach works that have always come easiest to me are the ones that establish a contrapuntal or physical pattern and then spin it out across time. I suppose they’re the sort of works that feed the popular image of Bach as some sort of divine clockmaker, every note falling into its place with effortless inexorability.
The A minor Suite is not one of those. Playing through it, you can feel Bach having to work the material every step of the way, trimming it, pinning it, patching it in order to get the tailoring right. The patterns stutter or abruptly change direction; a fingering that works for one iteration fails in the next.
I’ve become especially engrossed by the opening to the Allemande, which seems to stumble around for several bars before the music actually settles on what it’s going to be—a moment of rhetorical vertigo promptly repeated, as the form demands.
BWV 807 is Bach finding himself doing the musical equivalent of what we’re all doing right now: navigating while off balance, adjusting on the fly, muddling through. That he still manages to land every phrase, every paragraph with sureness and even grace is a measure of his expertise.
I’ve seen a lot of pablum lately about how musicians turn to Bach in times of difficulty because his music is orderly (it is not) or rational (it is not) or that it channels some sort of divine reassurance (your mileage may vary, but, for me, no). The worth I find in Bach at the moment is in diametric opposition to the cavalcade of incompetence that forestalled any chance of an effective response to the virus: the uninformed disparagement of science, the cowardly capitulation to corporate greed, the hot-potato passing of moral and administrative responsibility among elected officials, the pointless and mendacious politicization of even the most basic preventative measures. In the face of that, Bach’s expertise is a real comfort. As escapism goes, there is something to be said for stealing away from the news every now and then and spending an hour or two in the company of someone who was very, very good at his job.
Like many piano players, I’ve enjoyed fooling around with Kapustin’s music over the years, although my fingers couldn’t always keep pace with his. Ethan Iverson found this footage of Kapustin performing his op. 8 Toccata with Oleg Lundstrem and his orchestra:
https://www.youtube-nocookie.com/embed/UyepVRxrk9M?rel=0&autoplay=0&showinfo=0
(The clip comes from the 1964 film Когда песня не кончается—Kogda pesnya ne konchayetsya, “When the Song Doesn’t End”—a wild, encyclopedic survey of Soviet variety entertainment at the tail end of the Khrushchev era.) It’s a good illustration of Kapustin’s pianism—and also his largely unchanging style. A hundred opus numbers later, Kapustin was still exploring this particular vein: intricately virtuosic jazzy showpieces.
Jazzy, but not jazz. Kapustin always insisted he was a composer, not a jazz musician, on the grounds that he preferred writing down and refining his ideas to the improvisation that he considered to be the quintessence of jazz. His rhythm, too, borrowed the vocabulary if not the syntax; much of the fun of Kapustin’s music is the way it doesn’t settle into a groove, but rather chases grooves over hill and dale like a Warner Brothers cartoon. His music constantly deconstructed and rearranged the subdivisions of a classically-notated rhythmic grid.
There’s a quirk of notation that’s always fascinated me. When a phrase of triplet swing ends on an accented off-beat, you will sometimes see a composer switch from dotted rhythms or triplets to a pair of straight eighth notes. Kapustin used this notation a lot. Here’s an example (which also tries to massage the difference between a triple and quadruple subdivision) from his op. 41 Variations:
Someday (which is to say, when I can nose around a library again) I’ll trace the origin of this notation. I first encountered it in scores by Leonard Bernstein (it’s all over Trouble in Tahiti, for instance). It’s characteristic, I think, of composers who try to translate the feel and subtleties of jazz rhythms for players who aren’t necessarily familiar with them. The notation is only partially correct: in that situation, the rhythm is not going to be another triplet quarter-note-eighth-note combination. But it’s not quite straight eighths, either. It’s a considered approximation, like a phonetic transcription in a traveler’s phrasebook. That was Kapustin’s single and enduring specialty: translating jazz into an argot that would work within the stick-to-the-score, on-top-of-the-beat practice of classical music.
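To make that approximation concrete, here is a toy sketch (entirely my own illustration, not drawn from any Kapustin or Bernstein score) comparing where the off-beat lands under each notation. The point is simply that a swung off-beat (2/3 of the way through the beat) sits between the straight-eighth (1/2) and dotted (3/4) notations, so any of the written forms is a compromise:

```python
# A toy illustration of swing-notation approximation (my own example).
# Each function returns the onset of the off-beat note as a fraction of
# one beat, for three common ways of notating the same swung figure.

def offbeat_position(subdivision: str) -> float:
    """Where the off-beat attack falls within one beat (0.0-1.0)."""
    if subdivision == "triplet_swing":
        # Quarter-plus-eighth of a triplet: off-beat at 2/3 of the beat.
        return 2 / 3
    if subdivision == "straight":
        # A plain pair of eighth notes: off-beat at 1/2 of the beat.
        return 1 / 2
    if subdivision == "dotted":
        # Dotted-eighth plus sixteenth: off-beat at 3/4 of the beat.
        return 3 / 4
    raise ValueError(f"unknown subdivision: {subdivision}")

# The swung off-beat sits strictly between the two written approximations:
assert offbeat_position("straight") < offbeat_position("triplet_swing") < offbeat_position("dotted")
```

Neither written form is "right"; the straight-eighth spelling just errs on the early side, which is often closer to how an accented swing off-beat is actually played.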
Ennio Morricone, on the other hand, was a polyglot. How do you sum up someone so kaleidoscopic? In 1971 alone, he was credited with the scores to twenty-one different films, including Sergio Leone’s final western, Duck, You Sucker!, Pasolini’s bawdy Decameron, Elio Petri’s agitprop drama The Working Class Goes to Heaven, the featherlight cool of Henri Verneuil’s heist film The Burglars, a pair of Dario Argento shockers (The Cat o’ Nine Tails and Four Flies on Grey Velvet), and Giuliano Montaldo’s docudrama Sacco and Vanzetti—the latter a collaboration with Joan Baez. I mention these films because I managed to see them at various points in my life, and I neither remembered nor would have guessed that Morricone scored them all. In the Washington Post, Ann Hornaday nominated Morricone’s score to Cinema Paradiso as an unintended but apt crystallization of the current moment: yearning for the affirming connection of a communal experience. The thing is, given the breadth of Morricone’s output, one could hear any number of his scores as topical, depending on one’s relative optimism or pessimism. (As I tend toward the latter, I instead might opt for Morricone’s work on, say, Leone’s Once Upon a Time in America or Gillo Pontecorvo’s one-two punch of The Battle of Algiers and Burn!)
The one common thread in all of it, I think, was Morricone’s emphasis on timbre. So many of his scores orbit a particular sound, a trait one can hear especially clearly when that sound is unusual or unusually prominent—the oboe in The Mission, the panpipes in Once Upon a Time in America and Casualties of War, the whistling and guitars that underpinned A Fistful of Dollars and, to Morricone’s chagrin, became a shorthand for the entire spaghetti-western genre (and, in less incisive obituaries, Morricone’s entire career). He would find a single sound that could emotionally unlock or sum up a film, and orient the score around that sound.
The best insight into Morricone’s habits and style (or lack thereof) might be the music he made as part of the Gruppo di Improvvisazione Nuova Consonanza, the very far-out free-improvisation ensemble started by avant-garde composer Franco Evangelisti in the 1960s. The freewheeling style of Il Gruppo is a long way from, for instance, the nostalgic polish of Cinema Paradiso, but all of Morricone’s traits are there: the exploration of sound, the purposeful eclecticism, the cut-to-the-chase immediacy. Morricone’s work with Il Gruppo was an ideally recursive exercise for a composer of his versatility and prolificacy. It was an outlet for his omnivorous, fertile musical imagination; at the same time, it renewed and reinforced perhaps his greatest strengths: the confidence to trust his initial instinct, and the toolbox to exploit it.
So this is the first blog post in [checks date] eight-and-a-half weeks, huh. I’m still more timely than Perspectives of New Music! I kid, I kid.
But, no, I haven’t been doing much music writing while under lockdown. Still, the death of Little Richard last week was enough to make me carve out some time from my home-schooling duties, put together a reflection, and pitch it around a bit. There were no takers, not surprisingly. (The market for a post-Boomer white classical music critic’s thoughts on Little Richard is not exactly robust.) But I’ve spent a lot of time with Little Richard over the years, with his music and his biography.
Little Richard was a key focus of the early research for my forever-in-progress, forever-unfinished history of music in the 1950s, in large part for what his music revealed about concepts of space and territory. That was Little Richard’s power: collapsing the space separating you, the listener, from rock-and-roll’s exhilarating and scandalous energy. One of the startling things about his 1955 thesis statement, “Tutti Frutti,” is how close-up it feels. The original recording, made in the tiny, back-of-the-store J&M Recording Studio in New Orleans, was exceptionally and noticeably dry, with hardly any ambience at all. All that was there was the performance, incandescent and direct.
That dry, right-next-to-you sound was not necessarily the norm, even in rock-and-roll. Others tried to make sure you kept your distance. “Tutti Frutti” was, infamously, covered by Pat Boone, an early entry in the catalog of whitewashed R&B covers that, for a brief time, clogged and distorted the airwaves and the charts. In the cover version, Boone’s voice was draped in a halo of reverb. It created a concert-hall-like aura, an expansion of the space into which the sound seems to be projected, a decorous gap between the audience and the stage—not to mention the song’s racial and sexual charge. When you listen to Boone’s version, you’re forced to enter into that simulated space. When you listen to the original, Little Richard comes into the space in which the recording is being played—your car, your living room, your bedroom. He and the song are right there, close enough to touch.
The dividing line between rhythm-and-blues and rock-and-roll might well be Little Richard’s undefinable but unmistakeable immediacy: the animation, the abandon, the scream. His influence was, paradoxically, both instantaneous and deferred. The year after “Tutti Frutti,” Little Richard was a featured player in director Frank Tashlin’s 1956 romp The Girl Can’t Help It. It’s still one of the best rock-and-roll movies ever made, and one of the most incisive: both the film and its most flamboyant performer amplify rock’s sensual, silly, and cynical qualities in ways most practitioners and fans would not come to terms with for years. Little Richard is in on it from the start. Take the opening scene: Jayne Mansfield, shoehorned into a low-cut, deep-blue dress, sashaying down a New York sidewalk, oblivious to her effect on men and matter—melting ice, erupting milk bottles. It’s a Tex Avery cartoon come to life, an over-the-top release of post-World War II psychosexual repression. The soundtrack is the title song, performed by—who else?—Little Richard.
One of my foundational texts for deciphering rock-and-roll was Nik Cohn’s Awopbopaloobop Alopbamboom, written in 1968 and revised in 1972. In stealing Little Richard’s indelible interjection for his title, Cohn neatly epitomized the oft-mythologized sense of rock-and-roll’s detonative impact. Generations of rock-and-roll adherents, from the British Invasion to punk rock and its offspring, venerated Little Richard’s stripped-down efficiency, while wave after wave of rock-and-roll heretics—glam rock, disco, hair metal—embraced his heightened flamboyance. But Cohn’s book has an elegiac streak, a realization, perhaps, that Little Richard’s transcendent combination of virtuosity, defiance, confrontation, glamour, camp, and utter individual freedom was an unrepeatable phenomenon.
Some artists embody their era. Little Richard excavated and revealed his. Every tendency that white, respectable American society would have rather neutralized, Little Richard brought to the fore and epitomized. As much of the country was reinstating and reasserting reactionary gender roles, Little Richard flaunted a queer identity. While religious leaders refashioned churches into suburban temples of middle-class propriety and prosperity, Little Richard hollered with Pentecostal zeal, in tongue-twisting lyrics that verged on glossolalia. As the country’s history of racial conflict once again came to an inflection point, Little Richard stood black, loud, and proud, joyously dangerous and dangerously joyous, preaching integration through rhythm, a precise and pervasive piano backbeat embracing and containing multitudes.
Even when the power seemed to frighten Little Richard himself, and he would turn back to the church, you could still feel the pull of both forces. His 1962 album, titled (with characteristically cocksure piety) “The King of the Gospel Singers,” showed him at his most restrained and accomplished, but the old exuberance still emerged, most recognizably in the spiritual “Ride On, King Jesus,” soaring again and again into the stratosphere on the line “no man can hinder me.”
And, sure enough, Little Richard would always come roaring back, contradictory, unapologetic, uninhibited. He put it best on his 1970 single “Freedom Blues,” expressing, in the face of social and political upheaval and inequality, his creed and salvation: “I got my duty,” he sang, “rock-and-roll.” You couldn’t keep Little Richard from his calling, any more than you could keep lightning from flashing.
As noted on Twitter, our daughter has decided that our lockdown soundtrack should be Broadway musicals. For the first five weeks, the musical was She Loves Me, with its lovely, operetta-ish, Budapest-by-way-of-Times-Square score by Jerry Bock and Sheldon Harnick. Listening to the various soundtracks of the show—the 1963 original, this 1978 BBC version, the 1993 Broadway revival, the 1994 London cast, and the 2016 revival that is our daughter’s favorite—you can appreciate both the evolution and the importance of tempo for some shows. The 1963 production of She Loves Me was not successful; the 2016 show, in which the songs are, across the board, performed considerably faster than in 1963, was a hit.
More recently, the musical-of-choice has become Frank Loesser’s Guys and Dolls, which our daughter prefers to watch in the 1955 film version directed by Joseph L. Mankiewicz, with Frank Sinatra as Nathan Detroit and Marlon Brando as Sky Masterson. This confirms that she is my offspring, having apparently inherited my undying affection for fascinatingly flawed works of art. Because, holy cow, is Guys and Dolls a weird movie. (Even the trailer is weird.) The artifice is in-your-face: six years after Gene Kelly and Stanley Donen shot parts of On the Town on location in New York City, Mankiewicz encases everybody in soundstage sets that are very obviously soundstage sets. The acting styles are all over the map, from the highly-stylized work done by the holdovers from the stage show to Sinatra’s less-is-theoretically-more studio shrugging to Brando and Jean Simmons’s subtly but intensely physical performances. The contrast between Sinatra and Brando is particularly sharp. Sinatra does some of his best-ever singing while never really doing anything but playing himself. (Loesser couldn’t stand him as Nathan Detroit.) Brando can’t sing, but, otherwise, this is some of the most elegant acting he ever did—every gesture, every posture considered, not from the standpoint of an actor, but from the standpoint of a character who prizes and depends upon a particular attitude.
Brando had a talent for sly parody, playing off a concept or a character in ways that could be deeply funny. The most famous instance of this was The Freshman, in which the original Vito Corleone offered an extended, surreal lampoon of Brando-as-Godfather that put all other impressions in the shade. I’ve started to think of Guys and Dolls in the same way. Brando had previously worked with Mankiewicz on Julius Caesar, and, the more I watch it (and, the way my daughter’s obsessions work, I’ve been watching it a lot), the more I’m convinced that Brando’s Sky Masterson is a long and impish riff on his own performance as Mark Antony. Long story short: there are worse ways to spend two-and-a-half hours avoiding the mundane parade of horror and idiocy outside your door than to watch a young Marlon Brando dance his way through a role that, among other things, triangulates the craft, celebrity, and foolishness of acting itself.
When the plague hit Milan in 1576, the aristocrats and the governor fled. The bishop, Charles Borromeo, stayed. He made his will, picked out a place in the cathedral for his tomb, and went out to minister to the sick. As both Milan’s most important remaining civic leader and its spiritual guide, Borromeo’s efforts were sometimes at odds. As a public health official, he did much that was prudent, instituting strict protocols for the lay clergy who distributed communion to the afflicted, holding audiences from behind a screen, and, according to his 17th-century hagiographer Giovanni Pietro Giussano, “had, when he left the house, a wand carried in front of him, to keep those in the contagion’s snare away from himself and his assistants.”
But, in 1576, everyone knew that plagues were punishments from God for collective sin, and the atonement was public: large processions of people from all across the city, joining in a parade of penitence. The model was the procession organized and led by St. Gregory the Great in response to the plague that infected Rome in 590; eighty people collapsed along the way, but, at the end, the archangel Michael appeared on top of Hadrian’s Mausoleum and was seen to put his flaming sword back in its scabbard—God’s wrath had been appeased. Borromeo dutifully followed suit, though he did his best to limit the impact, having believers march only with members of their own parish. Eventually, though, Borromeo bowed to the necessity of social distancing. He set up altars in the street so the people could hear mass without leaving their houses. And, having distributed pamphlets of songs and devotions for the processions, he now advocated home use of those as well. Seven times a day, the cathedral bell rang, and the residents of Milan would come to their doors and windows and sing the litanies: “this great city, numbering three hundred thousand souls,” Giussano recorded, “praising God at the same time from all sides… infinite voices resounding and echoing, calling all heaven to help in that court of misery.”
Giussano did not, however, record what happened to the church choir section leaders.
Lists of resources for suddenly-bereft freelance musicians and performers in the wake of COVID-19 are starting to show up. This one, by Hannah Fenlon, Ann Marie Lonsdale, and Abigail Vega, is the most comprehensive (and ecumenical) one I’ve seen, but even that’s incomplete. The American Composers’ Forum has a list more targeted to new-music people, for example. A quick look around GoFundMe finds dozens of recently-started (so, caveat emptor, obviously), localized funds—here’s one for DC and Maryland arts freelancers. After throwing some money in a few directions, I’m tempted to save and screenshot the online evidence of this scattershot, ad hoc collection as damning evidence of musical life under late capitalism.
My first recession as a freelancer was 1990 and its dawdling recovery, which gave me an erroneous sense of the relationship between a musical career and the market: work was precarious but just viable during the recession, and precarious but just viable after it, and I thought that musical employment was, in a weird way, shielded from the cyclicality of capitalism by virtue of its own marginality. I was young and dumb! (Believe me, old and dumb is so much more fun.) But even as that early fantasy gradually dissolved, I still figured that capitalism would plod on, girded by its own instinct toward self-preservation. What the pandemic has made clear is how readily the system will feed on itself for even the most fleeting profit. Suicide is an arbitrage opportunity, apparently.
(Anyone who wants to appropriate “Suicide is an Arbitrage Opportunity” as the title of your next anarcho-punk song/album, be my guest.)
A preview for a production that was, along with so many others, canceled. It’s currently scheduled to be part of Mostly Mozart at the end of July; go see it if you can. It won’t be rescheduled in DC until 2021-22 at the earliest, I’m guessing, but I hope it’s staged here sooner than later.
These sorts of interview-heavy features can sometimes be a slog to pull together, but this one was a dream: I don’t think I’ve ever left so much marvelous dialogue on the cutting room floor. (I even got to eavesdrop on a rehearsal!) A lot of dedicated people are taking a financial hit because of the cancellation. Forwarding some coffee money to AGMA would be a nice idea.
This newsletter is also a little bit of a preview: the next Score column for the Boston Globe will consider Marc-Antoine Charpentier’s Pestis Mediolanensis, a motet-slash-oratorio about Charles Borromeo and the 1576 Milan plague.
I’m not going to stumble through that much 17th-century Italian source material and not get as much mileage out of it as I can. Stay well, everybody. See you (virtually) at the colloquium.
For a soundtrack to this post, I was going to link to my favorite British glam-rock-spawned song with the title “Fever of Love,” only to discover that there’s also another British glam-rock-spawned song with the title “Fever of Love.” The research, it never stops.
I was supposed to be in New York City this Friday, giving a pre-concert talk ahead of the opening concert of the Philadelphia Orchestra’s Beethoven cycle at Carnegie Hall. But, instead, thanks to COVID-19, I will be at home. (We’re fine! But my wife’s workplace has instituted travel restrictions such that, if I make the trip, she would have to self-quarantine for 14 days, which is neither fair to her nor, frankly, practical for any of us.) I had been looking forward to this for months. It’s been a depressingly long time since I’ve been able to have a parley with a New York audience.
Apologies, NYC! Be smart, take care of each other, and, hopefully, I can come back at some point, and you can hear how Beethoven’s 5th and 6th symphonies are like Roxy Music’s first two albums.
(Given that the severity of the outbreak in the US is in large part due to the fact that the federal response is currently being dictated by the whims of a spoiled tub of Miracle Whip, please vote Democratic this November? Even if the Democrats nominate a broken 8-track cartridge of Donny and Marie songs? Which, to be fair, they might.
I should note that one of the earliest and enduring treasures of my record collection is an original 1976 copy of Donny & Marie: Featuring Songs from Their Television Show, which is an amazing piece of vinyl in every way. Take, as but a single example, the final track, their usual sign-off, “May Tomorrow Be a Perfect Day.” While this studio version, sadly, lacks the prominent wah-wah disco guitar that sometimes turned up on the broadcast, it does feature, in the Vegas-style big band playout, an uncredited sax player dropping in a sassy little four-bar solo. And then, on the next go-round of the chorus, at the exact same spot, we hear the exact. Same. Solo.
Take that, Steve Reich! When I was a DePaul undergrad, my roommate and I had a transcription of this solo taped to our refrigerator.)
This caught my eye. It popped up on the blog of Elizabeth de Brito’s online radio show The Daffodil Perspective, in an interview with a couple of people who don’t listen to classical music, about what they felt to be barriers to new listeners:
What do you think about the language used to describe classical music?
Anton: It feels like being back in school exams (in school asking about tempos, key signatures). It’s not simply the language itself but the fact that description is an intrinsic part of the music. It’s like there’s a prerequisite of knowledge. It’s not like it’s not possible to learn about the basics but having to do so is like putting a restriction on it.
Nina: It’s so different, seems like it’s splitting itself from other genres, using a different vocabulary, often to describe the same things, like songs are arias, lyrics are libretto. Needing to look something up all the time just puts a barrier to understanding the music.
I very much am of two minds about this passage! I don’t want people to feel like they’re being policed out of a fandom by language. But language is part of what genre is. As a practitioner, critic, and fan, I’ve made my way through dozens of musical genres and sub-genres, and every one has its own way of talking about music: a distinct terminology, a distinct vocabulary signaling what’s considered good and what’s considered bad, a distinct corpus of common-ground artists and repertoire. I mean, compare this paragraph from Matthew Ismael Ruiz’s Pitchfork review of the new Bad Bunny album:
The highlights are plentiful; early singles “Vete” and “Ignorantes” occupy the suave sadboi lane he’s best known for, but “Yo Perreo Sola” and “Bichiyal” rock raw, stripped-down reggaetón beats evocative of the genre’s “Gasolina” era. And he doesn’t completely abandon the sounds of the trap, either: The Anuel AA collab “Está Cabrón Ser Yo” could just have easily found itself on the Migos’ Culture III.
Do I know what all that means? No. Do I want to know? Yes. Maybe it’s because I spend a good portion of every day dealing with words, but I don’t feel like this review is throwing up a barrier; I feel like it’s giving me the tools to start finding my own way around the music and the scene, if I’m curious enough to want to do that. Everybody has their own favorite examples of impenetrable and/or purple writing about classical music. (Heck, I have my own favorite examples of my impenetrable and/or purple writing about classical music.) But that’s not a vocabulary problem, that’s a bad writing problem. Concert presentation and format still fail newcomers too much of the time, and how and where jargon and terminology are used is a part of that. But being welcoming is not the same thing as preemptively providing answers to every last mystery. Nobody holds on to frictionless art.
What Frank [Lowe]’s music reminded me a little bit was Ornette Coleman. Thanks to Billy Higgins, I spent hours and hours listening to the early rehearsals of Ornette’s music with Don Cherry and Charlie Haden. I was right there when Ornette was changing his style…. They rehearsed at Billy’s house. I was there a lot.
God, I love watching rehearsals. One of the very best things about being at Tanglewood was taking a long lunch hour and eavesdropping on ensembles of all shapes and sizes rehearsing. I wish more performers and groups would do open rehearsals—and I wish more open rehearsals were actual rehearsals, rather than just polished run-throughs. I learned a ton from watching other people practice. I also found it totally captivating.
The cult of being note-perfect all the time is probably why more musicians don’t let spectators in on at least some of their rehearsals. A public presence means pressure to do a good job. But what about a virtual audience? I would watch a live-streamed rehearsal. There’s an opportunity there! Honestly, the way things are going, we might not be going out much for a while.
N.B.: the Baltimore review marks my first published correction in the Post. It was a good run while it lasted.
The Lewis/Osborne recital had me thinking about technology—specifically, the weird development of piano technology. The piano evolved pretty continuously from its early-18th-century origins: wood frames became iron frames, straight-stringing gave way to (mostly) cross-stringing, the key-hammer action went through a bunch of adjustments and improvements, pedals moved from knees to feet, &c., &c. And then, sometime toward the end of the 1800s, everybody decided that the piano had reached more or less its final form. Which is not to say that the piano is a perfect instrument; a large part of piano training, in fact, is mastering techniques for overcoming its quirks. Which maybe was part of the point. This, the entire late-Romantic piano-playing culture seemed to say, this is the appropriate level of difficulty that every future pianist should be required to overcome. Since then, it’s training and performance that has been the locus of development. Piano writing and piano playing in the 21st century are far beyond what they were 150 years ago! But I wonder how much more innovation can be squeezed out of the piano on the technique end.
One of the things I’ve been fascinated by in 21st-century pop music is the wholesale and unapologetic embrace of technological development. If a sound or a passage is unsuitable, or difficult, or impossible for a standard instrument, the sound or the passage is realized through technological augmentation or substitute, with nobody—performers, producers, listeners—blinking an eye. Here’s a theory: when people talk about musical traditions or styles having or not having “relevance,” maybe a lot of the time what they really mean is that the favored technology of a musical tradition or style is obsolete. “Relevance” isn’t cultural, but an expression of a style’s penchant for embracing (or, if you like, fetishizing) a technological cutting edge. That applies to distribution, too—when I hear someone citing a supposed golden age of classical-music currency and cultural status in the mid-20th-century, I wonder how much of that is simply acknowledging the heyday of the LP as a shiny new format.
(A few years ago now, I started to unpack some ideas related to this for NewMusicBox. Then life got in the way! I should pick up that thread again someday.)
I spent part of today at the Library of Congress, doing some due diligence in the archives of my old teacher, Lukas Foss, for an imminent column. There wasn’t much related to the column, but I did find a couple of birthday cards. There was this one, a 40th-birthday greeting to Foss from Witold Lutosławski:
(The footnote for the non-existent Ondes Martenot part: “The author does not love this instrument”.)
And then there was a draft for a card from Foss himself to (I’m guessing) Michael Tilson Thomas:
Empires rise and fall, but we’ll always have composers goofing off.