Tuesday, November 17, 2009

The Conductor Reflects


By DAVID MERMELSTEIN
from wsj.com


Few conductors can effortlessly invoke the Scriptures, Shakespeare, Goethe and Joseph Campbell in a preperformance lecture. None can do it as unpretentiously as David Robertson. Now in his fifth season as music director of the Saint Louis Symphony Orchestra, the American-born Mr. Robertson has raised to new heights the standing of this venerable, though not always lauded, ensemble. The relationship has also brought him belated renown in his homeland, well after he found fame in Europe, most notably with the Ensemble Intercontemporain in Paris.

Wednesday night, Mr. Robertson and his orchestra return to New York's Carnegie Hall, continuing an annual tradition, this time as part of the hall's "Ancient Paths, Modern Voices" festival, which juxtaposes music by Chinese composers with Western scores inspired by the Middle Kingdom. The program pairs recent works by the Chinese-born composers Tan Dun and Bright Sheng with Stravinsky's "Song of the Nightingale" and Bartók's "Miraculous Mandarin" Suite.

Mr. Robertson, age 51, was widely reported to have been a leading candidate for the music directorships of the New York Philharmonic and the Chicago Symphony Orchestra recently, and many have wondered how long he will remain in St. Louis. His annually renewed contract, currently valid through the 2011-12 season, is more typical of music directors nearing the end of their tenures. But he maintains that such details lend no insights into his commitment.

"I've never looked at anything I do as a stepping stone to someplace else," Mr. Robertson said while comfortably settled on a sofa in his bright, homey office at Powell Hall, a converted movie palace that has been the orchestra's home since 1968. "When you commit to a relationship, it's because the relationship makes both of you better."

Though the SLSO is the second-oldest such ensemble in America (the New York Philharmonic is the oldest), it has had more than its share of troubles recently—including a near bankruptcy in 2001, the sudden incapacitation of its previous music director in 2002, a strike by musicians in 2005 and, most recently, a sharp decline in the value of an endowment established to help stabilize the orchestra.

Mr. Robertson, though, decided that other things mattered more, noting that he found the organization unusually committed to civic concerns—a spirit that echoed the priorities of the European groups with which he had forged his reputation. "They give close to 300 community concerts versus 75 in Powell Hall," he said of his players. "And they didn't cut those when they had problems, which said to me this orchestra has its heart in the right place."

The conductor consistently underplays his artistic achievements here, preferring to credit his musicians with their mutual accomplishments. "I do like a challenge," he said. "But I like challenges that I know are attainable. The greatness in this orchestra is right there, so I didn't need to build it. They don't bring their egos onto the stage, just their creativity. And that's a huge, huge plus."

He likens his own duties to that of a "complex mirror" and suggests that musical inspiration comes from three sources: the composer, the musicians and the audience. "My job," he said, "is to reflect and refract these different beams of inspiration to the various parties in the right proportions."

And thanks to a new recording on Nonesuch of music by John Adams—the orchestra's first CD for a major label since the mid-1990s—Mr. Robertson's achievements in St. Louis can now be heard beyond the concert hall. The album features two works, "Guide to Strange Places" and "Doctor Atomic Symphony," the latter a reworking of themes from Mr. Adams's most recent opera. Both were recorded live at Powell Hall last year, giving listeners an especially good sense of how this ensemble sounds under its current music director.

"You're hearing the way this orchestra plays on the concert platform," the conductor said of the recording. "It's just like I imagine the score in my head, with the same passion and excitement, but also with individual personalities expressed. It's like recognizing someone's voice on the telephone. That was how it used to be when the orchestra was making lots of recordings, and it's nice to have that quality again with so many new players."

In April, Mr. Robertson takes the SLSO on a four-city California tour, with stops in Los Angeles and San Francisco. The experience will be a homecoming of sorts for the conductor, who was born and schooled in Santa Monica. "You have something nice, you want to share it," he said, before mentioning St. Louis's most famous symbol. "I can't take the arch on tour, so I'll take the Saint Louis Symphony Orchestra, which is equally wonderful."

The pairing prompts another comment, one that cuts to the heart of Midwestern self-deprecation and, perhaps, an inferiority complex. "You know," Mr. Robertson continued, "it takes somebody from outside St. Louis to come and say, 'This arch is one of the most inspiring objects in the world.' Only then do people from here say, 'Yeah, so it is.' And it's the same with the symphony. Only after outsiders praise it do people here go, 'Oh, yeah, so it is.' And that's part of our challenge—to get people here to realize just how amazing this thing is that's right in their midst."

—Mr. Mermelstein writes for the Journal on classical music and film.

Sunday, May 10, 2009

Snark Attack: The New Yorker's David Denby campaigns against "low, teasing, snide, condescending" criticism


by Michael C. Moynihan | April 6, 2009

Snark: It’s Mean, It’s Personal, and It’s Ruining Our Conversation, by David Denby, New York: Simon and Schuster, 144 pages, $15.95

Not long ago, New Yorker film critic David Denby had an epiphany: American culture was being debased by “snark,” that “low, teasing, snide, condescending, knowing” style of criticism, a “bad kind of invective” that’s “spreading like pinkeye through the national conversation” and proliferating on the Internet. Denby received this revelation while enjoying a “pan-Pacific dinner” with the political journalist Michael Kinsley. “Somewhere between the Singing Fish Satay and the Pow Wok Lamb,” he writes, “Mike and I...said more or less the same thing—that snark was becoming the characteristic discourse of our time.”

The byproduct of this conversation is a pungent and angry little book called Snark: It’s Mean, It’s Personal, and It’s Ruining Our Conversation. In just over 100 pages, alongside the first-name references to his famous friends and descriptions of his high-class meals, Denby attacks the online booboisie who, he argues, have altered the tone of debate by supplanting thoughtful conversation with snide and indiscriminate denunciations of the “douchebags” with whom they disagree. “In a media society,” he writes, “snark is an easy way of seeming smart.” If the bloggers at Gawker and Wonkette, two websites dedicated to all things snarky, delight in puncturing the pretensions of the old-guard bourgeois intelligentsia, Denby has provided them with a slow-moving target.

In a condensed history of snark, Denby relies on odd examples from the distant past—a pointless diversion, for instance, into Lewis Carroll’s nonsense poem “The Hunting of the Snark,” which has nothing to do with snark in its current meaning—and a fevered denunciation of various celebrity gossip websites and presidential campaign ads. While he rightfully credits the British satire magazine Private Eye and its American progeny Spy as snark trailblazers, he omits mention of Grand Royale, Suck, and Vice, all far more influential in establishing the tone of modern Internet snark.

It’s likely that those publications are unfamiliar to Denby, and his brief backgrounder on snark’s roots seems perfunctory—little more than a way to pad an essay into a small book that meanders towards the targets that really outrage him. For an idea of just what motivated Denby to attack an ephemeral style like snark, search for his name at Gawker, a media gossip site. Read the stories there about Denby’s “pornography addiction,” which he chronicled in his book American Sucker, and the declaration that “we [have] come to hate David Denby.” For the great majority of Denby’s years as a professional writer, he was effectively firewalled from his critics. In the Age of the Internet, hipster bloggers are baying for the fusty critic’s blood.

Denby wants things as they once were, when American culture was effectively a dictatorship of the bourgeoisie; when the Ivy League guardians of “our conversation” ruthlessly protected it from contamination by the jealous and uncouth. “Whatever its miseries, the country in the thirties and forties was at peace with itself spiritually: We were all in the same boat,” he argues. Today we have “income inequalities and Rovian tactics that exacerbate ethnic and class differences”; then we merely had Nazism and the Depression.

It seems unnecessary to observe that in the 1930s, when unemployment was in double digits and Father Coughlin commanded a rather large radio audience, both poverty and dirty politics were not entirely uncommon. And long before the Internet existed, such lurid and sleazy magazines as Police Gazette, Confidential, and Broadway Brevities sold millions of copies a week.

It’s just that the readers couldn’t get at you.

There is nothing new in the use of brutal sarcasm and ad hominems to attack your enemies. What Denby laments is the way technology has empowered the snarky critic to take shots at the powerful and influential, allowing the democratization of published cruelty. As Denby writes, snark is “the weapon of outsiders who want to displace the insiders.” True enough, but the reader can only wonder why a film critic at The New Yorker is troubled by nugatory attempts of “snarky pipsqueaks,” as he calls them, to challenge the professional critics.

Anyone who has been exposed to the subliterate animosities and grudges of the cruder anonymous commenters or bloggers, or has bristled at the lowered bar of what passes as clever satire on snark-heavy websites, will have some sympathy for Denby's campaign against the “everyone-sucks-but-me” culture. But his bizarre choice of targets and imprecise definition of “snarky” derail his argument from the beginning. At its core, Snark is a deeply political book, and Denby therefore offers special dispensations for a Right On! variety of ideological snark. “Snark is irresistible,” he writes, when discussing our previous president (and who could disagree with that?), but it apparently becomes gauche when directed at Democrats peddling hope and change. A large chunk of his argument is given over to score-settling and a post-election outpouring of anger against those who said impolite things about Hillary Clinton and Barack Obama. (Denby may be the only writer alive who would characterize Sarah Palin's charge that Barack Obama was “palling around with terrorists” as snark.)

Denby tags the Fox News screamer Bill O’Reilly as a boorish knuckle-dragger, but his liberal counterpart Keith Olbermann is something else entirely: “One can’t help but noticing...that Olbermann’s tirades are voluminously factual, astoundingly syntactical...and always logically organized.” The leftist writer Gore Vidal is a “master of high snark,” while his conservative counterpart Tom Wolfe is an overrated racist. If you agree with the snark, it probably isn’t snark.

Denby identifies Wolfe’s “Radical Chic” as a progenitor of today’s snarky style, but it fails, he says, because the writer’s teasing of haute-liberal infatuation with the Black Panthers “now seems more fatuous than the assembled partygoers.” How so? Because according to Denby, “In the end, [Wolfe’s trademark] white suit may have been less an ironic joke than the heraldic uniform of a man born in Richmond, Virginia, who entertained fancies of a distinguished Old South in which blacks kept their mouths shut, a conservative who had never accustomed himself to the new money in the Northeast.” While denouncing bloggers for rumor-mongering and for besmirching reputations with nothing but conjecture, Denby nevertheless finds it appropriate to imply that Wolfe’s writing is steeped in white supremacy.

Denby accuses many of his targets of employing racist language in the service of snark, but he often draws the wrong conclusions from the anecdotes he provides. On the anonymous Internet, socially taboo subjects like race become fodder for humor, motivated both by racist belief and by an urge to find the subversive in the forbidden. To the captains of snark, like those who produce Vice and The Onion—whose readers, incidentally, skew heavily into the Obama-voter demographic—racially tinged jokes poke fun simultaneously at the idiocies of racism and at the restrictions of the P.C. culture in which they were raised.

On top of the boorish jokes, Denby argues, it is also problematic that those reckless bloggers and snarky columnists don’t act like real journalists, don’t make phone calls to verify details about those they attack, and “ignore the routine responsibilities of journalism.” (Incidentally, in the paperback edition, Denby might note that Wonkette is not “written by young women” and is not owned by Gawker Media.) To Denby there is no separation between humorous commentary and journalism.

In a short, bitter denunciation of New York Times columnist Maureen Dowd—whose politics are generally agreeable to him—Denby bemoans her lack of seriousness. Her articles ripped the Bush administration, but they were too jokey and didn’t “come close to an adequate critique of power.” Her attacks on Hillary Clinton “seemed eager to punish Hillary for her ambitions, as if deep down she were alarmed by the idea of a woman making so great a claim for herself.” At this point, Denby seems priggish and humorless, and the reader comes close to simply telling him to lighten up, rather than explaining that Dowd is a satirist, not a sexist political scientist.

And while Denby exclaims that he “would love to take the good-guy, libertarian position” and allow the market of low-brow ideas to weed out the cruel and profane, the opposite seems to be happening. So is British-style libel legislation what is needed? Denby says he can’t be a libertarian on the issue, yet elsewhere in the book he admits that prosecuting bloggers and commenters under a hate speech–type law would offend his values as a defender of free speech. He knows what he doesn’t like, and he can identify the problem, but other than publishing a book, he offers no suggestions as to how an army of Denbys might roll back the culture of snark.

The best he can offer is the hope that Obama's election will tone down the shrill and excitable corners of the Internet: “Whatever else the rise of Barack Obama means, it certainly suggests that...the college-educated...have become eager to reject shallow cynicism and to embrace hope in the public sphere—and...to take power and change the tone of public discourse.”

But snark predated George Bush and it will surely exist after George Bush. As the author Colson Whitehead recently put it, “Something bad happens, like 9/11, it’s the death of irony. Something good happens, like Obama’s win, it’s the death of irony.” Or the death of snark. But snark, like irony, isn't going anywhere, and will likely continue to fuel many more Michael Kinsley-hosted Singing Fish Satay dinners for years to come.

Michael Moynihan is a senior editor of Reason.

Saturday, April 25, 2009

The Revenge of Geography


People and ideas influence events, but geography largely determines them. To understand the coming struggles, it's time to dust off the Victorian thinkers who knew the physical world best. A journalist who has covered the ends of the Earth offers a guide to the relief map—and a primer on the next phase of conflict. (Article from foreignpolicy.com)

The Revenge of Geography by Robert D. Kaplan
Illustration by Aaron Goodman for FP

When rapturous Germans tore down the Berlin Wall 20 years ago, it symbolized far more than the overcoming of an arbitrary boundary. It began an intellectual cycle that saw all divisions, geographic and otherwise, as surmountable; that referred to “realism” and “pragmatism” only as pejoratives; and that invoked the humanism of Isaiah Berlin or the appeasement of Hitler at Munich to launch one international intervention after the next. In this way, the armed liberalism and the democracy-promoting neoconservatism of the 1990s shared the same universalist aspirations. But alas, when a fear of Munich leads to overreach, the result is Vietnam—or, in the current case, Iraq.

And thus began the rehabilitation of realism, and with it another intellectual cycle. “Realist” is now a mark of respect, “neocon” a term of derision. The Vietnam analogy has vanquished that of Munich. Thomas Hobbes, who extolled the moral benefits of fear and saw anarchy as the chief threat to society, has elbowed out Isaiah Berlin as the philosopher of the present cycle. The focus now is less on universal ideals than particular distinctions, from ethnicity to culture to religion. Those who pointed this out a decade ago were sneered at for being “fatalists” or “determinists.” Now they are applauded as “pragmatists.” And this is the key insight of the past two decades—that there are worse things in the world than extreme tyranny, and in Iraq we brought them about ourselves. I say this having supported the war.

So now, chastened, we have all become realists. Or so we believe. But realism is about more than merely opposing a war in Iraq that we know from hindsight turned out badly. Realism means recognizing that international relations are ruled by a sadder, more limited reality than the one governing domestic affairs. It means valuing order above freedom, for the latter becomes important only after the former has been established. It means focusing on what divides humanity rather than on what unites it, as the high priests of globalization would have it. In short, realism is about recognizing and embracing those forces beyond our control that constrain human action—culture, tradition, history, the bleaker tides of passion that lie just beneath the veneer of civilization. This poses what, for realists, is the central question in foreign affairs: Who can do what to whom? And of all the unsavory truths in which realism is rooted, the bluntest, most uncomfortable, and most deterministic of all is geography.

Indeed, what is at work in the recent return of realism is the revenge of geography in the most old-fashioned sense. In the 18th and 19th centuries, before the arrival of political science as an academic specialty, geography was an honored, if not always formalized, discipline in which politics, culture, and economics were often conceived of in reference to the relief map. Thus, in the Victorian and Edwardian eras, mountains and the men who grow out of them were the first order of reality; ideas, however uplifting, were only the second.

And yet, to embrace geography is not to accept it as an implacable force against which humankind is powerless. Rather, it serves to qualify human freedom and choice with a modest acceptance of fate. This is all the more important today, because rather than eliminating the relevance of geography, globalization is reinforcing it. Mass communications and economic integration are weakening many states, exposing a Hobbesian world of small, fractious regions. Within them, local, ethnic, and religious sources of identity are reasserting themselves, and because they are anchored to specific terrains, they are best explained by reference to geography. Like the faults that determine earthquakes, the political future will be defined by conflict and instability with a similar geographic logic. The upheaval spawned by the ongoing economic crisis is increasing the relevance of geography even further, by weakening social orders and other creations of humankind, leaving the natural frontiers of the globe as the only restraint.

So we, too, need to return to the map, and particularly to what I call the “shatter zones” of Eurasia. We need to reclaim those thinkers who knew the landscape best. And we need to update their theories for the revenge of geography in our time.

If you want to understand the insights of geography, you need to seek out those thinkers who make liberal humanists profoundly uneasy—those authors who thought the map determined nearly everything, leaving little room for human agency.

One such person is the French historian Fernand Braudel, who in 1949 published The Mediterranean and the Mediterranean World in the Age of Philip II. By bringing demography and nature itself into history, Braudel helped restore geography to its proper place. In his narrative, permanent environmental forces lead to enduring historical trends that preordain political events and regional wars. To Braudel, for example, the poor, precarious soils along the Mediterranean, combined with an uncertain, drought-afflicted climate, spurred ancient Greek and Roman conquest. In other words, we delude ourselves by thinking that we control our own destinies. To understand the present challenges of climate change, warming Arctic seas, and the scarcity of resources such as oil and water, we must reclaim Braudel’s environmental interpretation of events.

So, too, must we reexamine the blue-water strategizing of Alfred Thayer Mahan, a U.S. naval captain and author of The Influence of Sea Power Upon History, 1660-1783. Viewing the sea as the great “commons” of civilization, Mahan thought that naval power had always been the decisive factor in global political struggles. It was Mahan who, in 1902, coined the term “Middle East” to denote the area between Arabia and India that held particular importance for naval strategy. Indeed, Mahan saw the Indian and Pacific oceans as the hinges of geopolitical destiny, for they would allow a maritime nation to project power all around the Eurasian rim and thereby affect political developments deep into Central Asia. Mahan’s thinking helps to explain why the Indian Ocean will be the heart of geopolitical competition in the 21st century—and why his books are now all the rage among Chinese and Indian strategists.

Similarly, the Dutch-American strategist Nicholas Spykman saw the seaboards of the Indian and Pacific oceans as the keys to dominance in Eurasia and the natural means to check the land power of Russia. Before he died in 1943, while the United States was fighting Japan, Spykman predicted the rise of China and the consequent need for the United States to defend Japan. And even as the United States was fighting to liberate Europe, Spykman warned that the postwar emergence of an integrated European power would eventually become inconvenient for the United States. Such is the foresight of geographical determinism.

But perhaps the most significant guide to the revenge of geography is the father of modern geopolitics himself—Sir Halford J. Mackinder—who is famous not for a book but a single article, “The Geographical Pivot of History,” which began as a 1904 lecture to the Royal Geographical Society in London. Mackinder’s work is the archetype of the geographical discipline, and he summarizes its theme nicely: “Man and not nature initiates, but nature in large measure controls.”

His thesis is that Russia, Eastern Europe, and Central Asia are the “pivot” around which the fate of world empire revolves. He would refer to this area of Eurasia as the “heartland” in a later book. Surrounding it are four “marginal” regions of the Eurasian landmass that correspond, not coincidentally, to the four great religions, because faith, too, is merely a function of geography for Mackinder. There are two “monsoon lands”: one in the east generally facing the Pacific Ocean, the home of Buddhism; the other in the south facing the Indian Ocean, the home of Hinduism. The third marginal region is Europe, watered by the Atlantic to the west and the home of Christianity. But the most fragile of the four marginal regions is the Middle East, home of Islam, “deprived of moisture by the proximity of Africa” and for the most part “thinly peopled” (in 1904, that is).

This Eurasian relief map, and the events playing out on it at the dawn of the 20th century, are Mackinder’s subject, and the opening sentence presages its grand sweep:

When historians in the remote future come to look back on the group of centuries through which we are now passing, and see them fore-shortened, as we to-day see the Egyptian dynasties, it may well be that they will describe the last 400 years as the Columbian epoch, and will say that it ended soon after the year 1900.

Mackinder explains that, while medieval Christendom was “pent into a narrow region and threatened by external barbarism,” the Columbian age—the Age of Discovery—saw Europe expand across the oceans to new lands. Thus at the turn of the 20th century, “we shall again have to deal with a closed political system,” and this time one of “world-wide scope.”

Every explosion of social forces, instead of being dissipated in a surrounding circuit of unknown space and barbaric chaos, will [henceforth] be sharply re-echoed from the far side of the globe, and weak elements in the political and economic organism of the world will be shattered in consequence.

By perceiving that European empires had no more room to expand, thereby making their conflicts global, Mackinder foresaw, however vaguely, the scope of both world wars.

Mackinder looked at European history as “subordinate” to that of Asia, for he saw European civilization as merely the outcome of the struggle against Asiatic invasion. Europe, he writes, became the cultural phenomenon it is only because of its geography: an intricate array of mountains, valleys, and peninsulas; bounded by northern ice and a western ocean; blocked by seas and the Sahara to the south; and set against the immense, threatening flatland of Russia to the east. Into this confined landscape poured a succession of nomadic, Asian invaders from the naked steppe. The union of Franks, Goths, and Roman provincials against these invaders produced the basis for modern France. Likewise, other European powers originated, or at least matured, through their encounters with Asian nomads. Indeed, it was the Seljuk Turks’ supposed ill treatment of Christian pilgrims in Jerusalem that ostensibly led to the Crusades, which Mackinder considers the beginning of Europe’s collective modern history.

Russia, meanwhile, though protected by forest glades against many a rampaging host, nevertheless fell prey in the 13th century to the Golden Horde of the Mongols. These invaders decimated and subsequently changed Russia. But because most of Europe knew no such level of destruction, it was able to emerge as the world’s political cockpit, while Russia was largely denied access to the European Renaissance. The ultimate land-based empire, with few natural barriers against invasion, Russia would know forevermore what it was like to be brutally conquered. As a result, it would become perennially obsessed with expanding and holding territory.

Key discoveries of the Columbian epoch, Mackinder writes, only reinforced the cruel facts of geography. In the Middle Ages, the peoples of Europe were largely confined to the land. But when the sea route to India was found around the Cape of Good Hope, Europeans suddenly had access to the entire rimland of southern Asia, to say nothing of strategic discoveries in the New World. While Western Europeans “covered the ocean with their fleets,” Mackinder tells us, Russia was expanding equally impressively on land, “emerging from her northern forests” to police the steppe with her Cossacks, sweeping into Siberia, and sending peasants to sow the southwestern steppe with wheat. It was an old story: Europe versus Russia, a liberal sea power (like Athens and Venice) against a reactionary land power (like Sparta and Prussia). For the sea, beyond the cosmopolitan influences it bestows by virtue of access to distant harbors, provides the inviolate border security that democracy needs to take root.

In the 19th century, Mackinder notes, the advent of steam engines and the creation of the Suez Canal increased the mobility of European sea power around the southern rim of Eurasia, just as railways were beginning to do the same for land power in the Eurasian heartland. So the struggle was set for the mastery of Eurasia, bringing Mackinder to his thesis:

As we consider this rapid review of the broader currents of history, does not a certain persistence of geographical relationship become evident? Is not the pivot region of the world’s politics that vast area of Euro-Asia which is inaccessible to ships, but in antiquity lay open to the horse-riding nomads, and is to-day about to be covered with a network of railways?

Just as the Mongols banged at, and often broke down, the gates to the marginal regions surrounding Eurasia, Russia would now play the same conquering role, for as Mackinder writes, “the geographical quantities in the calculation are more measurable and more nearly constant than the human.” Forget the czars and the commissars-yet-to-be in 1904; they are but trivia compared with the deeper tectonic forces of geography.

Mackinder’s determinism prepared us for the rise of the Soviet Union and its vast zone of influence in the second half of the 20th century, as well as for the two world wars preceding it. After all, as historian Paul Kennedy notes, these conflicts were struggles over Mackinder’s “marginal” regions, running from Eastern Europe to the Himalayas and beyond. Cold War containment strategy, moreover, depended heavily on rimland bases across the greater Middle East and the Indian Ocean. Indeed, the U.S. projection of power into Afghanistan and Iraq, and today’s tensions with Russia over the political fate of Central Asia and the Caucasus have only bolstered Mackinder’s thesis. In his article’s last paragraph, Mackinder even raises the specter of Chinese conquests of the “pivot” area, which would make China the dominant geopolitical power. Look at how Chinese migrants are now demographically claiming parts of Siberia as Russia’s political control of its eastern reaches is being strained. One can envision Mackinder’s being right yet again.

The wisdom of geographical determinism endures across the chasm of a century because it recognizes that the most profound struggles of humanity are not about ideas but about control over territory, specifically the heartland and rimlands of Eurasia. Of course, ideas matter, and they span geography. And yet there is a certain geographic logic to where certain ideas take hold. Communist Eastern Europe, Mongolia, China, and North Korea were all contiguous to the great land power of the Soviet Union. Classic fascism was a predominantly European affair. And liberalism nurtured its deepest roots in the United States and Great Britain, essentially island nations and sea powers both. Such determinism is easy to hate but hard to dismiss.

To discern where the battle of ideas will lead, we must revise Mackinder for our time. After all, Mackinder could not foresee how a century’s worth of change would redefine—and enhance—the importance of geography in today’s world. One author who did is Yale University professor Paul Bracken, who in 1999 published Fire in the East. Bracken draws a conceptual map of Eurasia defined by the collapse of time and distance and the filling of empty spaces. This idea leads him to declare a “crisis of room.” In the past, sparsely populated geography acted as a safety mechanism. Yet this is no longer the case, Bracken argues, for as empty space increasingly disappears, the very “finite size of the earth” becomes a force for instability. And as I learned at the U.S. Army’s Command and General Staff College, “attrition of the same adds up to big change.”

One force that is shrinking the map of Eurasia is technology, particularly the military applications of it and the rising power it confers on states. In the early Cold War, Asian militaries were mostly lumbering, heavy forces whose primary purpose was national consolidation. They focused inward. But as national wealth accumulated and the computer revolution took hold, Asian militaries from the oil-rich Middle East to the tiger economies of the Pacific developed full-fledged, military-civilian postindustrial complexes, with missiles and fiber optics and satellite phones. These states also became bureaucratically more cohesive, allowing their militaries to focus outward, toward other states. Geography in Eurasia, rather than a cushion, was becoming a prison from which there was no escape.

Now there is an “unbroken belt of countries,” in Bracken’s words, from Israel to North Korea, which are developing ballistic missiles and destructive arsenals. A map of these countries’ missile ranges shows a series of overlapping circles: Not only is no one safe, but a 1914-style chain reaction leading to wider war is easily conceivable. “The spread of missiles and weapons of mass destruction in Asia is like the spread of the six-shooter in the American Old West,” Bracken writes—a cheap, deadly equalizer of states.

The other force driving the revenge of geography is population growth, which makes the map of Eurasia more claustrophobic still. In the 1990s, many intellectuals viewed the 18th-century English philosopher Thomas Malthus as an overly deterministic thinker because he treated humankind as a species reacting to its physical environment, not a body of autonomous individuals. But as the years pass, and world food and energy prices fluctuate, Malthus is getting more respect. If you wander through the slums of Karachi or Gaza, which wall off multitudes of angry lumpen faithful—young men mostly—you can easily see the conflicts over scarce resources that Malthus predicted coming to pass. In three decades covering the Middle East, I have watched it evolve from a largely rural society to a realm of teeming megacities. In the next 20 years, the Arab world’s population will nearly double while supplies of groundwater will diminish.

A Eurasia of vast urban areas, overlapping missile ranges, and sensational media will be one of constantly enraged crowds, fed by rumors transported at the speed of light from one Third World megalopolis to another. So in addition to Malthus, we will also hear much about Elias Canetti, the 20th-century philosopher of crowd psychology: the phenomenon of a mass of people abandoning their individuality for an intoxicating collective symbol. It is in the cities of Eurasia principally where crowd psychology will have its greatest geopolitical impact. Alas, ideas do matter. And it is the very compression of geography that will provide optimum breeding grounds for dangerous ideologies and channels for them to spread.

All of this requires major revisions to Mackinder’s theories of geopolitics. For as the map of Eurasia shrinks and fills up with people, it not only obliterates the artificial regions of area studies; it also erases Mackinder’s division of Eurasia into a specific “pivot” and adjacent “marginal” zones. Military assistance from China and North Korea to Iran can cause Israel to take military actions. The U.S. Air Force can attack landlocked Afghanistan from Diego Garcia, an island in the middle of the Indian Ocean. The Chinese and Indian navies can project power from the Gulf of Aden to the South China Sea—out of their own regions and along the whole rimland. In short, contra Mackinder, Eurasia has been reconfigured into an organic whole.

The map’s new seamlessness can be seen in the Pakistani outpost of Gwadar. There, on the Indian Ocean, near the Iranian border, the Chinese have constructed a spanking new deep-water port. Land prices are booming, and people talk of this still sleepy fishing village as the next Dubai, which may one day link towns in Central Asia to the burgeoning middle-class fleshpots of India and China through pipelines, supertankers, and the Strait of Malacca. The Chinese also have plans for developing other Indian Ocean ports in order to transport oil by pipelines directly into western and central China, even as a canal and land bridge are possibly built across Thailand’s Isthmus of Kra. Afraid of being outflanked by the Chinese, the Indians are expanding their own naval ports and strengthening ties with both Iran and Burma, where the Indian-Chinese rivalry will be fiercest.

These deepening connections are transforming the Middle East, Central Asia, and the Indian and Pacific oceans into a vast continuum, in which the narrow and vulnerable Strait of Malacca will be the Fulda Gap of the 21st century. The fates of the Islamic Middle East and Islamic Indonesia are therefore becoming inextricable. But it is the geographic connections, not religious ones, that matter most.

This new map of Eurasia—tighter, more integrated, and more crowded—will be even less stable than Mackinder thought. Rather than heartlands and marginal zones that imply separateness, we will have a series of inner and outer cores that are fused together through mass politics and shared paranoia. In fact, much of Eurasia will eventually be as claustrophobic as Israel and the Palestinian territories, with geography controlling everything and no room to maneuver. Although Zionism shows the power of ideas, the battle over land between Israelis and Palestinians is a case of utter geographical determinism. This is Eurasia’s future as well.

The ability of states to control events will be diluted, in some cases destroyed. Artificial borders will crumble and become more fissiparous, leaving only rivers, deserts, mountains, and other enduring facts of geography. Indeed, the physical features of the landscape may be the only reliable guides left to understanding the shape of future conflict. Like rifts in the Earth’s crust that produce physical instability, there are areas in Eurasia that are more prone to conflict than others. These “shatter zones” threaten to implode, explode, or maintain a fragile equilibrium. And not surprisingly, they fall within that unstable inner core of Eurasia: the greater Middle East, the vast way station between the Mediterranean world and the Indian subcontinent that registers all the primary shifts in global power politics.

This inner core, for Mackinder, was the ultimate unstable region. And yet, writing in an age before oil pipelines and ballistic missiles, he saw this region as inherently volatile, geographically speaking, but also somewhat of a secondary concern. A century’s worth of technological advancement and population explosion has rendered the greater Middle East no less volatile but dramatically more relevant, and where Eurasia is most prone to fall apart now is in the greater Middle East’s several shatter zones.

The Indian subcontinent is one such shatter zone. It is defined on its landward sides by the hard geographic borders of the Himalayas to the north, the Burmese jungle to the east, and the somewhat softer border of the Indus River to the west. Indeed, the border going westward comes in three stages: the Indus; the unruly crags and canyons that push upward to the shaved wastes of Central Asia, home to the Pashtun tribes; and, finally, the granite, snow-mantled massifs of the Hindu Kush, transecting Afghanistan itself. Because these geographic impediments are not contiguous with legal borders, and because barely any of India’s neighbors are functional states, the current political organization of the subcontinent should not be taken for granted. You see this acutely as you walk up to and around any of these land borders, the weakest of which, in my experience, are the official ones—a mere collection of tables where cranky bureaucrats inspect your luggage. Especially in the west, the only border that lives up to the name is the Hindu Kush, making me think that in our own lifetimes the whole semblance of order in Pakistan and southeastern Afghanistan could unravel, and return, in effect, to vague elements of greater India.

In Nepal, the government barely controls the countryside where 85 percent of its people live. Despite the aura bequeathed by the Himalayas, nearly half of Nepal’s population lives in the dank and humid lowlands along the barely policed border with India. Driving throughout this region, it appears in many ways indistinguishable from the Ganges plain. If the Maoists now ruling Nepal cannot increase state capacity, the state itself could dissolve.

The same holds true for Bangladesh. Even more so than Nepal, it has no geographic defense to marshal as a state. The view from my window during a recent bus journey was of the same ruler-flat, aquatic landscape of paddy fields and scrub on both sides of the line with India. The border posts are disorganized, ramshackle affairs. This artificial blotch of territory on the Indian subcontinent could metamorphose yet again, amid the gale forces of regional politics, Muslim extremism, and nature itself.

As in Pakistan, no Bangladeshi government, military or civilian, has ever functioned even remotely well. Millions of Bangladeshi refugees have already crossed the border into India illegally. With 150 million people—a population larger than Russia’s—crammed together at sea level, Bangladesh is vulnerable to the slightest climatic variation, never mind the changes caused by global warming. Simply because of its geography, tens of millions of people in Bangladesh could be inundated with salt water, necessitating the mother of all humanitarian relief efforts. In the process, the state itself could collapse.

Of course, the worst nightmare on the subcontinent is Pakistan, whose dysfunction is directly the result of its utter lack of geographic logic. The Indus should be a border of sorts, but Pakistan sits astride both its banks, just as the fertile and teeming Punjab plain is bisected by the India-Pakistan border. Only the Thar Desert and the swamps to its south act as natural frontiers between Pakistan and India. And though these are formidable barriers, they are insufficient to frame a state composed of disparate, geographically based, ethnic groups—Punjabis, Sindhis, Baluchis, and Pashtuns—for whom Islam has provided insufficient glue to hold them together. All the other groups in Pakistan hate the Punjabis and the army they control, just as the groups in the former Yugoslavia hated the Serbs and the army they controlled. Pakistan’s raison d’être is that it supposedly provides a homeland for subcontinental Muslims, but 154 million of them, almost the same number as the entire population of Pakistan, live over the border in India.

To the west, the crags and canyons of Pakistan’s North-West Frontier Province, bordering Afghanistan, are utterly porous. Of all the times I crossed the Pakistan-Afghanistan border, I never did so legally. In reality, the two countries are inseparable. On both sides live the Pashtuns. The wide belt of territory between the Hindu Kush mountains and the Indus River is really Pashtunistan, an entity that threatens to emerge were Pakistan to fall apart. That would, in turn, lead to the dissolution of Afghanistan.

The Taliban constitute merely the latest incarnation of Pashtun nationalism. Indeed, much of the fighting in Afghanistan today occurs in Pashtunistan: southern and eastern Afghanistan and the tribal areas of Pakistan. The north of Afghanistan, beyond the Hindu Kush, has seen less fighting and is in the midst of reconstruction and the forging of closer links to the former Soviet republics in Central Asia, inhabited by the same ethnic groups that populate northern Afghanistan. Here is the ultimate world of Mackinder, of mountains and men, where the facts of geography are asserted daily, to the chagrin of U.S.-led forces—and of India, whose own destiny and borders are hostage to what plays out in the vicinity of the 20,000-foot wall of the Hindu Kush.

Another shatter zone is the Arabian Peninsula. The vast tract of land controlled by the Saudi royal family is synonymous with Arabia in the way that India is synonymous with the subcontinent. But while India is heavily populated throughout, Saudi Arabia constitutes a geographically nebulous network of oases separated by massive waterless tracts. Highways and domestic air links are crucial to Saudi Arabia’s cohesion. Though India is built on an idea of democracy and religious pluralism, Saudi Arabia is built on loyalty to an extended family. But while India is virtually surrounded by troubling geography and dysfunctional states, Saudi Arabia’s borders disappear into harmless desert to the north and are shielded by sturdy, well-governed, self-contained sheikhdoms to the east and southeast.

Where Saudi Arabia is truly vulnerable, and where the shatter zone of Arabia is most acute, is in highly populous Yemen to the south. Although it has only a quarter of Saudi Arabia’s land area, Yemen’s population is almost as large, so the all-important demographic core of the Arabian Peninsula is crammed into its mountainous southwest corner, where sweeping basalt plateaus, rearing up into sand-castle formations and volcanic plugs, embrace a network of oases densely inhabited since antiquity. Because the Turks and the British never really controlled Yemen, they did not leave behind the strong bureaucratic institutions that other former colonies inherited.

When I traveled the Saudi-Yemen border some years back, it was crowded with pickup trucks filled with armed young men, loyal to this sheikh or that, while the presence of the Yemeni government was negligible. Mud-brick battlements hid the encampments of these rebellious sheikhs, some with their own artillery. Estimates of the number of firearms in Yemen vary, but any Yemeni who wants a weapon can get one easily. Meanwhile, groundwater supplies will last no more than a generation or two.

I’ll never forget what a U.S. military expert told me in the capital, Sanaa: “Terrorism is an entrepreneurial activity, and in Yemen you’ve got over 20 million aggressive, commercial-minded, and well-armed people, all extremely hard-working compared with the Saudis next door. It’s the future, and it terrifies the hell out of the government in Riyadh.” The future of teeming, tribal Yemen will go a long way to determining the future of Saudi Arabia. And geography, not ideas, has everything to do with it.

The Fertile Crescent, wedged between the Mediterranean Sea and the Iranian plateau, constitutes another shatter zone. The countries of this region—Jordan, Lebanon, Syria, and Iraq—are vague geographic expressions that had little meaning before the 20th century. When the official lines on the map are removed, we find a crude finger-painting of Sunni and Shiite clusters that contradict national borders. Inside these borders, the governing authorities of Lebanon and Iraq barely exist. The one in Syria is tyrannical and fundamentally unstable; the one in Jordan is rational but under quiet siege. (Jordan’s main reason for being at all is to act as a buffer for other Arab regimes that fear having a land border with Israel.) Indeed, the Levant is characterized by tired authoritarian regimes and ineffective democracies.

Of all the geographically illogical states in the Fertile Crescent, none is more so than Iraq. Saddam Hussein’s tyranny, by far the worst in the Arab world, was itself geographically determined: Every Iraqi dictator going back to the first military coup in 1958 had to be more repressive than the previous one just to hold together a country with no natural borders that seethes with ethnic and sectarian consciousness. The mountains that separate Kurdistan from the rest of Iraq, and the division of the Mesopotamian plain between Sunnis in the center and Shiites in the south, may prove more pivotal to Iraq’s stability than the yearning after the ideal of democracy. If democracy doesn’t in fairly short order establish sturdy institutional roots, Iraq’s geography will likely lead it back to tyranny or anarchy again.

But for all the recent focus on Iraq, geography and history tell us that Syria might be at the real heart of future turbulence in the Arab world. Aleppo in northern Syria is a bazaar city with greater historical links to Mosul, Baghdad, and Anatolia than to Damascus. Whenever Damascus’s fortunes declined with the rise of Baghdad to the east, Aleppo recovered its greatness. Wandering through the souks of Aleppo, it is striking how distant and irrelevant Damascus seems: The bazaars are dominated by Kurds, Turks, Circassians, Arab Christians, Armenians, and others, unlike the Damascus souk, which is more a world of Sunni Arabs. As in Pakistan and the former Yugoslavia, each sect and religion in Syria has a specific location. Between Aleppo and Damascus is the increasingly Islamist Sunni heartland. Between Damascus and the Jordanian border are the Druse, and in the mountain stronghold contiguous with Lebanon are the Alawites—both remnants of a wave of Shiism from Persia and Mesopotamia that swept over Syria a thousand years ago.

Elections in Syria in 1947, 1949, and 1954 exacerbated these divisions by polarizing the vote along sectarian lines. The late Hafez al-Assad came to power in 1970 after 21 changes of government in 24 years. For three decades, he was the Leonid Brezhnev of the Arab world, staving off the future by failing to build a civil society at home. His son Bashar will have to open the political system eventually, if only to keep pace with a dynamically changing society armed with satellite dishes and the Internet. But no one knows how stable a post-authoritarian Syria would be. Policymakers must fear the worst. Yet a post-Assad Syria may well do better than post-Saddam Iraq, precisely because its tyranny has been much less severe. Indeed, traveling from Saddam’s Iraq to Assad’s Syria was like coming up for air.

In addition to its inability to solve the problem of political legitimacy, the Arab world is unable to secure its own environment. The plateau peoples of Turkey will dominate the Arabs in the 21st century because the Turks have water and the Arabs don’t. Indeed, to develop its own desperately poor southeast and thereby suppress Kurdish separatism, Turkey will need to divert increasingly large amounts of the Euphrates River from Syria and Iraq. As the Middle East becomes a realm of parched urban areas, water will grow in value relative to oil. The countries with it will retain the ability—and thus the power—to blackmail those without it. Water will be like nuclear energy, thereby making desalinization and dual-use power facilities primary targets of missile strikes in future wars. Not just in the West Bank, but everywhere there is less room to maneuver.

A final shatter zone is the Persian core, stretching from the Caspian Sea to Iran’s north to the Persian Gulf to its south. Virtually all of the greater Middle East’s oil and natural gas lies in this region. Just as shipping lanes radiate from the Persian Gulf, pipelines are increasingly radiating from the Caspian region to the Mediterranean, the Black Sea, China, and the Indian Ocean. The only country that straddles both energy-producing areas is Iran, as Geoffrey Kemp and Robert E. Harkavy note in Strategic Geography and the Changing Middle East. The Persian Gulf possesses 55 percent of the world’s crude-oil reserves, and Iran dominates the whole gulf, from the Shatt al-Arab on the Iraqi border to the Strait of Hormuz in the southeast—a coastline of 1,317 nautical miles, thanks to its many bays, inlets, coves, and islands that offer plenty of excellent places for hiding tanker-ramming speedboats.

It is not an accident that Iran was the ancient world’s first superpower. There was a certain geographic logic to it. Iran is the greater Middle East’s universal joint, tightly fused to all of the outer cores. Its border roughly traces and conforms to the natural contours of the landscape—plateaus to the west, mountains and seas to the north and south, and desert expanse in the east toward Afghanistan. For this reason, Iran has a far more venerable record as a nation-state and urbane civilization than most places in the Arab world and all the places in the Fertile Crescent. Unlike the geographically illogical countries of that adjacent region, there is nothing artificial about Iran. Not surprisingly, Iran is now being wooed by both India and China, whose navies will come to dominate the Eurasian sea lanes in the 21st century.

Of all the shatter zones in the greater Middle East, the Iranian core is unique: The instability Iran will cause will not come from its implosion, but from a strong, internally coherent Iranian nation that explodes outward from a natural geographic platform to shatter the region around it. The security provided to Iran by its own natural boundaries has historically been a potent force for power projection. The present is no different. Through its uncompromising ideology and nimble intelligence services, Iran runs an unconventional, postmodern empire of substate entities in the greater Middle East: Hamas in Palestine, Hezbollah in Lebanon, and the Sadrist movement in southern Iraq. If the geographic logic of Iranian expansion sounds eerily similar to that of Russian expansion in Mackinder’s original telling, it is.

The geography of Iran today, like that of Russia before, determines the most realistic strategy for securing this shatter zone: containment. As with Russia, the goal of containing Iran must be to impose pressure on the contradictions of the unpopular, theocratic regime in Tehran, such that it eventually changes from within. The battle for Eurasia has many, increasingly interlocking fronts. But the primary one is for Iranian hearts and minds, just as it was for those of Eastern Europeans during the Cold War. Iran is home to one of the Muslim world’s most sophisticated populations, and traveling there, one encounters less anti-Americanism and anti-Semitism than in Egypt. This is where the battle of ideas meets the dictates of geography.

***

In this century’s fight for Eurasia, like that of the last century, Mackinder’s axiom holds true: Man will initiate, but nature will control. Liberal universalism and the individualism of Isaiah Berlin aren’t going away, but it is becoming clear that the success of these ideas is in large measure bound and determined by geography. This was always the case, and it is harder to deny now, as the ongoing recession will likely cause the global economy to contract for the first time in six decades. Not only wealth, but political and social order, will erode in many places, leaving only nature’s frontiers and men’s passions as the main arbiters of that age-old question: Who can coerce whom? We thought globalization had gotten rid of this antiquarian world of musty maps, but now it is returning with a vengeance.

We all must learn to think like Victorians. That is what must guide and inform our newly rediscovered realism. Geographical determinists must be seated at the same honored table as liberal humanists, thereby merging the analogies of Vietnam and Munich. Embracing the dictates and limitations of geography will be especially hard for Americans, who like to think that no constraint, natural or otherwise, applies to them. But denying the facts of geography only invites disasters that, in turn, make us victims of geography.

Better, instead, to look hard at the map for ingenious ways to stretch the limits it imposes, which will make any support for liberal principles in the world far more effective. Amid the revenge of geography, that is the essence of realism and the crux of wise policymaking—working near the edge of what is possible, without slipping into the precipice.

Robert D. Kaplan is national correspondent for The Atlantic and senior fellow at the Center for a New American Security.

All the Book's a Stage by Michael Dirda


Thursday, March 12, 2009; C04

A STRANGE EVENTFUL HISTORY
The Dramatic Lives of Ellen Terry, Henry Irving, and Their Remarkable Families
By Michael Holroyd

Farrar Straus Giroux. 620 pp. $40

In the late 19th and early 20th centuries, Henry Irving (1838-1905) and Ellen Terry (1847-1928) reigned as the king and queen of the English stage.

Terry, said Irving's longtime manager, "moved through the world of the theatre like embodied sunshine." As a young woman, she was painted by such Victorian eminences as Whistler, John Singer Sargent and the once equally celebrated G.F. Watts (to whom she was briefly married, albeit without any of what that hypersensitive painter euphemistically called "violent love"). As Mrs. Watts, she visited the poet Tennyson, at whose house, Farringford, she was immortalized by Julia Margaret Cameron in what has been called one of the "most beautiful and remarkable pictures in the history of photography." Playwright George Bernard Shaw professed his undying passion for her -- but preferred to conduct their love affair entirely by letter. They didn't meet for years. So great was her fame and beauty that young men would say to their sweethearts: "As there's no chance of Ellen Terry marrying me, will you?"

As for Irving: He was absolutely electrifying on the stage, a dark, magnetic presence that drew all eyes, whether he was Hamlet or Shylock, Mephistopheles or Thomas à Becket. He even served as the partial model for the charismatic protagonist of an 1897 "shocker" written by that above-mentioned stage manager, one Bram Stoker: It was called "Dracula." Known as "the Chief" to his well-paid staff and company at the Lyceum Theatre, Irving became the first actor ever to be knighted.

In this group biography of Terry, Irving and their families, Michael Holroyd -- well known for his lives of Lytton Strachey and Shaw -- has produced the most completely delicious, the most civilized and the most wickedly entertaining work of nonfiction anyone could ask for. I have no particular interest in theatrical history, but Holroyd's verve -- his dramatic sense for the comic and the tragic -- is irresistible. The book's chapters are pleasingly short, its prose crisp and fast-moving, and every page is packed with bizarre doings, eccentric characters, surprising factoids and a stream of lively and scandalous anecdotes.

Terry came from an acting family. Her parents were roving showmen, and nearly all the children were expected to tread the boards. Ellen's older sister Kate was the first "Terry of the age" but gave up her career to marry. At her last, thunderously acclaimed performance as Juliet, the specially commissioned "Kate Terry Valse" was played at the command of the Prince of Wales. In her dressing room afterward her wealthy fiance presented her with a wide gold bracelet. "On the outside was engraved: 'To Kate Terry on her retirement from the stage, from him for whom she leaves it'; and on the inside, in tiny letters, were the titles of a hundred plays in which she had appeared." She was all of 23.

But this was nothing compared with the eventual fame of Ellen. At the Grand Jubilee for her 50 years onstage -- held at the Drury Lane Theatre on June 12, 1906 -- the guests included many of the most famous performers of the era: the immortal Eleonora Duse, W.S. Gilbert (of Gilbert and Sullivan), Réjane (the rival of Ellen's friend Sarah Bernhardt), Coquelin of the Comédie Française, the notorious Lillie Langtry, the actor-manager Herbert Beerbohm Tree, Mrs. Patrick Campbell (for whom Shaw wrote the part of Eliza Doolittle) and Enrico Caruso. That night 22 members of the Terry family appeared onstage, including Ellen's brother Fred, who gained world renown playing Sir Percy Blakeney, better known as "The Scarlet Pimpernel." Alas, Holroyd doesn't say if sister Kate's 2-year-old grandson was there, a young fellow by the name of John Gielgud.

While Ellen Terry grew up in the theater, Irving, by contrast, spent his childhood in a mining village in Cornwall. But John Brodribb was determined to become an actor, so he changed his name, then spent an arduous decade taking any part he could wangle with provincial acting companies. For years he was mocked for his accent, his occasional stammer, his shortsightedness and his odd "dragging gait." But the young man was indomitable. As Holroyd writes: "His apprenticeship, and then his career, became an unending struggle to master his faults in diction, to manipulate the mobile features that were evolving from a rather ordinary face and, in short, to gain perfection. By the time this apprenticeship was over and he established himself in London, he had played more than 700 characters."

Irving -- prey to melancholy and anxiety when not working -- lived for the limelight. One evening, when he was just starting his London career, the shrew he had impetuously married suddenly asked: "Are you going on making a fool of yourself like this all your life?" Irving immediately stopped the carriage in which they were riding, got out and walked off into the night. Though they never divorced, he never saw his wife again. Even Terry -- his leading lady and probably his lover -- once told him flat out "that if she suddenly dropped dead, his first emotion would be grief and his first question would be about the preparedness of her understudy -- and he did not disagree."

While Henry Irving worked hard to develop his skills, Terry was a natural, full of fun and flirtatious -- "an April kind of woman." After her annulment, she ran off with an aesthete by whom she had two illegitimate children; at the age of 60 she impetuously married a man half her age. She had little or no financial sense, and exhausted much of her fortune bailing out her two children, Edy and Ted.

These two, along with Irving's sons Harry and Laurence, form the focus of the second half of "A Strange Eventful History." All four managed to break free of their parents and make names for themselves in the theater. Harry created the role of the radical butler in J.M. Barrie's "The Admirable Crichton." His wife, Dolly, played the original Trilby in the drama that gave us that hypnotic villain Svengali. (In later life she took the part of Mrs. Darling in a kind of children's fairy tale that no one thought would last: "Peter Pan.") Laurence, with his wife, Mabel, toured the world playing in Shaw's "Captain Brassbound's Conversion." He also once wrote a play called "Godefroi and Yolande," which deserves immortality if only for the scene direction: "Enter a chorus of lepers."

But Irving's sons died in middle age, while Ellen's children, who adopted the last name Craig, lived into the middle of the past century. When not running errands and nursing her mother, Edith Craig designed costumes, worked hard for suffragism and was a member of a lesbian circle that included the novelists Radclyffe Hall and Vita Sackville-West. Her brother, Edward Gordon Craig, grew up to become a visionary stage designer. Having inherited his mother's attractiveness and his godfather Henry Irving's charisma, Gordon Craig used them to wangle financial support from hapless patrons, charm the great Russian director Konstantin Stanislavski and seduce the famous dancer Isadora Duncan, by whom he had a daughter. Before his death at 94, this feckless Svengali fathered at least 13 children by eight different women. Holroyd paints him, with devastating irony, as a sacred monster, undeniably talented but wholly self-centered.

As the years and pages go by in "A Strange Eventful History," this long biography starts to feel increasingly Proustian: Here is the flow of life, as one generation passes into the next, as men and women struggle for fame and achievement, then surprisingly find that they have grown old. Henry Irving, who wanted to go "like that," returned one night to his hotel after a performance, slumped down in a chair and died. Ellen lingered into her 80s: "The days are so short -- I wake in the morning -- I meet a little misery -- I meet a little happiness -- I fight with one -- I greet the other -- the day is gone." And toward his end, Gordon Craig told visitors, "I was very honoured when our Queen made me . . . whatever it was." Enough. "A Strange Eventful History" is a wonderful book, deserving applause, bouquets and a rave review in this morning's paper.

Michael Dirda -- mdirda@gmail.com

Thursday, April 16, 2009

Haha! Strunk & White's Elements of Style gets thrashed!

50 Years of Stupid Grammar Advice
by GEOFFREY K. PULLUM

April 16 is the 50th anniversary of the publication of a little book that is loved and admired throughout American academe. Celebrations, readings, and toasts are being held, and a commemorative edition has been released.

I won't be celebrating.

The Elements of Style does not deserve the enormous esteem in which it is held by American college graduates. Its advice ranges from limp platitudes to inconsistent nonsense. Its enormous influence has not improved American students' grasp of English grammar; it has significantly degraded it.

The authors won't be hurt by these critical remarks. They are long dead. William Strunk was a professor of English at Cornell about a hundred years ago, and E.B. White, later the much-admired author of Charlotte's Web, took English with him in 1919, purchasing as a required text the first edition, which Strunk had published privately. After Strunk's death, White published a New Yorker article reminiscing about him and was asked by Macmillan to revise and expand Elements for commercial publication. It took off like a rocket (in 1959) and has sold millions.

This was most unfortunate for the field of English grammar, because both authors were grammatical incompetents. Strunk had very little analytical understanding of syntax, White even less. Certainly White was a fine writer, but he was not qualified as a grammarian. Despite the post-1957 explosion of theoretical linguistics, Elements settled in as the primary vehicle through which grammar was taught to college students and presented to the general public, and the subject was stuck in the doldrums for the rest of the 20th century.

Notice what I am objecting to is not the style advice in Elements, which might best be described the way The Hitchhiker's Guide to the Galaxy describes Earth: mostly harmless. Some of the recommendations are vapid, like "Be clear" (how could one disagree?). Some are tautologous, like "Do not explain too much." (Explaining too much means explaining more than you should, so of course you shouldn't.) Many are useless, like "Omit needless words." (The students who know which words are needless don't need the instruction.) Even so, it doesn't hurt to lay such well-meant maxims before novice writers.

Even the truly silly advice, like "Do not inject opinion," doesn't really do harm. (No force on earth can prevent undergraduates from injecting opinion. And anyway, sometimes that is just what we want from them.) But despite the "Style" in the title, much in the book relates to grammar, and the advice on that topic does real damage. It is atrocious. Since today it provides just about all of the grammar instruction most Americans ever get, that is something of a tragedy. Following the platitudinous style recommendations of Elements would make your writing better if you knew how to follow them, but that is not true of the grammar stipulations.

"Use the active voice" is a typical section head. And the section in question opens with an attempt to discredit passive clauses that is either grammatically misguided or disingenuous.

We are told that the active clause "I will always remember my first visit to Boston" sounds much better than the corresponding passive "My first visit to Boston will always be remembered by me." It sure does. But that's because a passive is always a stylistic train wreck when the subject refers to something newer and less established in the discourse than the agent (the noun phrase that follows "by").

For me to report that I paid my bill by saying "The bill was paid by me," with no stress on "me," would sound inane. (I'm the utterer, and the utterer always counts as familiar and well established in the discourse.) But that is no argument against passives generally. "The bill was paid by an anonymous benefactor" sounds perfectly natural. Strunk and White are denigrating the passive by presenting an invented example of it deliberately designed to sound inept.

After this unpromising start, there is some fairly sensible style advice: The authors explicitly say they do not mean "that the writer should entirely discard the passive voice," which is "frequently convenient and sometimes necessary." They give good examples to show that the choice between active and passive may depend on the topic under discussion.

Sadly, writing tutors tend to ignore this moderation, and simply red-circle everything that looks like a passive, just as Microsoft Word's grammar checker underlines every passive in wavy green to signal that you should try to get rid of it. That overinterpretation is part of the damage that Strunk and White have unintentionally done. But it is not what I am most concerned about here.

What concerns me is that the bias against the passive is being retailed by a pair of authors so grammatically clueless that they don't know what is a passive construction and what isn't. Of the four pairs of examples offered to show readers what to avoid and how to correct it, a staggering three out of the four are mistaken diagnoses. "At dawn the crowing of a rooster could be heard" is correctly identified as a passive clause, but the other three are all errors:

* "There were a great number of dead leaves lying on the ground" has no sign of the passive in it anywhere.
* "It was not long before she was very sorry that she had said what she had" also contains nothing that is even reminiscent of the passive construction.
* "The reason that he left college was that his health became impaired" is presumably fingered as passive because of "impaired," but that's a mistake. It's an adjective here. "Become" doesn't allow a following passive clause. (Notice, for example, that "A new edition became issued by the publishers" is not grammatical.)

These examples can be found all over the Web in study guides for freshman composition classes. (Try a Google search on "great number of dead leaves lying.") I have been told several times, by both students and linguistics-faculty members, about writing instructors who think every occurrence of "be" is to be condemned for being "passive." No wonder, if Elements is their grammar bible. It is typical for college graduates today to be unable to distinguish active from passive clauses. They often equate the grammatical notion of being passive with the semantic one of not specifying the agent of an action. (They think "a bus exploded" is passive because it doesn't say whether terrorists did it.)

The treatment of the passive is not an isolated slip. It is typical of Elements. The book's toxic mix of purism, atavism, and personal eccentricity is not underpinned by a proper grounding in English grammar. It is often so misguided that the authors appear not to notice their own egregious flouting of the book's rules. They can't help it, because they don't know how to identify what they condemn.

"Put statements in positive form," they stipulate, in a section that seeks to prevent "not" from being used as "a means of evasion."

"Write with nouns and verbs, not with adjectives and adverbs," they insist. (The motivation of this mysterious decree remains unclear to me.)

And then, in the very next sentence, comes a negative passive clause containing three adjectives: "The adjective hasn't been built that can pull a weak or inaccurate noun out of a tight place."

That's actually not just three strikes, it's four, because in addition to contravening "positive form" and "active voice" and "nouns and verbs," it has a relative clause ("that can pull") removed from what it belongs with (the adjective), which violates another edict: "Keep related words together."

"Keep related words together" is further explained in these terms: "The subject of a sentence and the principal verb should not, as a rule, be separated by a phrase or clause that can be transferred to the beginning." That is a negative passive, containing an adjective, with the subject separated from the principal verb by a phrase ("as a rule") that could easily have been transferred to the beginning. Another quadruple violation.

The book's contempt for its own grammatical dictates seems almost willful, as if the authors were flaunting the fact that the rules don't apply to them. But I don't think they are. Given the evidence that they can't even tell actives from passives, my guess would be that it is sheer ignorance. They know a few terms, like "subject" and "verb" and "phrase," but they do not control them well enough to monitor and analyze the structure of what they write.

There is of course nothing wrong with writing passives and negatives and adjectives and adverbs. I'm not nitpicking the authors' writing style. White, in particular, often wrote beautifully, and his old professor would have been proud of him. What's wrong is that the grammatical advice proffered in Elements is so misplaced and inaccurate that counterexamples often show up in the authors' own prose on the very same page.

Some of the claims about syntax are plainly false despite being respected by the authors. For example, Chapter IV, in an unnecessary piece of bossiness, says that the split infinitive "should be avoided unless the writer wishes to place unusual stress on the adverb." The bossiness is unnecessary because the split infinitive has always been grammatical and does not need to be avoided. (The authors actually knew that. Strunk's original version never even mentioned split infinitives. White added both the above remark and the further reference, in Chapter V, admitting that "some infinitives seem to improve on being split.") But what interests me here is the descriptive claim about stress on the adverb. It is completely wrong.

Tucking the adverb in before the verb actually de-emphasizes the adverb, so a sentence like "The dean's statements tend to completely polarize the faculty" places the stress on polarizing the faculty. The way to stress the completeness of the polarization would be to write, "The dean's statements tend to polarize the faculty completely."

This is actually implied by an earlier section of the book headed "Place the emphatic words of a sentence at the end," yet White still gets it wrong. He feels there are circumstances where the split infinitive is not quite right, but he is simply not competent to spell out his intuition correctly in grammatical terms.

An entirely separate kind of grammatical inaccuracy in Elements is the mismatch with readily available evidence. Simple experiments (which students could perform for themselves using downloaded classic texts from sources like http://gutenberg.org) show that Strunk and White preferred to base their grammar claims on intuition and prejudice rather than established literary usage.

Consider the explicit instruction: "With none, use the singular verb when the word means 'no one' or 'not one.'" Is this a rule to be trusted? Let's investigate.

* Try searching the script of Oscar Wilde's The Importance of Being Earnest (1895) for "none of us." There is one example of it as a subject: "None of us are perfect" (spoken by the learned Dr. Chasuble). It has plural agreement.
* Download and search Bram Stoker's Dracula (1897). It contains no cases of "none of us" with singular-inflected verbs, but one that takes the plural ("I think that none of us were surprised when we were asked to see Mrs. Harker a little before the time of sunset").
* Examine the text of Lucy Maud Montgomery's popular novel Anne of Avonlea (1909). There are no singular examples, but one with the plural ("None of us ever do").
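
For readers who want to repeat this kind of check themselves, here is a minimal sketch in Python. It assumes you have already saved a plain-text copy of one of these novels from gutenberg.org under a local name such as dracula.txt; that filename, and the short verb lists, are illustrative choices of this post rather than anything prescribed by Pullum.

import re
from collections import Counter

# Illustrative verb lists; extend them as needed for the text at hand.
SINGULAR = {"is", "was", "has", "does", "seems"}
PLURAL = {"are", "were", "have", "do", "seem"}

def none_of_us_agreement(path):
    """Tally the word that immediately follows 'none of us' in a text file."""
    text = open(path, encoding="utf-8").read().lower()
    counts = Counter()
    # \s+ allows the phrase to be split across a line break in the e-text.
    for match in re.finditer(r"\bnone\s+of\s+us\s+(\w+)", text):
        word = match.group(1)
        if word in SINGULAR:
            counts["singular"] += 1
        elif word in PLURAL:
            counts["plural"] += 1
        else:
            # Intervening adverbs ("none of us ever do") land here,
            # so these hits still deserve a quick look by hand.
            counts["other"] += 1
    return counts

if __name__ == "__main__":
    print(none_of_us_agreement("dracula.txt"))

The script only looks at the word directly after the phrase, so it is a first pass rather than a parser; but even a crude count like this is enough to see whether singular agreement after "none of us" is as dominant as Elements claims.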

It seems to me that the stipulation in Elements is totally at variance not just with modern conversational English but also with literary usage back when Strunk was teaching and White was a boy.

Is the intelligent student supposed to believe that Stoker, Wilde, and Montgomery didn't know how to write? Did Strunk or White check even a single book to see what the evidence suggested? Did they have any evidence at all for the claim that the cases with plural agreement are errors? I don't think so.

There are many other cases of Strunk and White's being in conflict with readily verifiable facts about English. Consider the claim that a sentence should not begin with "however" in its connective adverb sense ("when the meaning is 'nevertheless'").

Searching for "however" at the beginnings of sentences and "however" elsewhere reveals that good authors alternate between placing the adverb first and placing it after the subject. The ratios vary. Mark Liberman, of the University of Pennsylvania, checked half a dozen of Mark Twain's books and found roughly seven instances of "however" at the beginning of a sentence for each three placed after the subject, whereas in five selected books by Henry James, the ratio was one to 15. In Dracula I found a ratio of about one to five. The evidence cannot possibly support a claim that "however" at the beginning of a sentence should be eschewed. Strunk and White are just wrong about the facts of English syntax.

The copy editor's old bugaboo about not using "which" to introduce a restrictive relative clause is also an instance of failure to look at the evidence. Elements as revised by White endorses that rule. But 19th-century authors whose prose was never forced through a 20th-century prescriptive copy-editing mill generally alternated between "which" and "that." (There seems to be a subtle distinction in meaning related to whether new information is being introduced.) There was never a period in the history of English when "which" at the beginning of a restrictive relative clause was an error.

In fact, as Jan Freeman, of The Boston Globe, noted (in her blog, The Word), Strunk himself used "which" in restrictive relative clauses. White not only added the anti-"which" rule to the book but also revised away the counterexamples that were present in his old professor's original text!

It's sad. Several generations of college students learned their grammar from the uninformed bossiness of Strunk and White, and the result is a nation of educated people who know they feel vaguely anxious and insecure whenever they write "however" or "than me" or "was" or "which," but can't tell you why. The land of the free in the grip of The Elements of Style.

So I won't be spending the month of April toasting 50 years of the overopinionated and underinformed little book that put so many people in this unhappy state of grammatical angst. I've spent too much of my scholarly life studying English grammar in a serious way. English syntax is a deep and interesting subject. It is much too important to be reduced to a bunch of trivial don't-do-this prescriptions by a pair of idiosyncratic bumblers who can't even tell when they've broken their own misbegotten rules.

Geoffrey K. Pullum is head of linguistics and English language at the University of Edinburgh and co-author (with Rodney Huddleston) of The Cambridge Grammar of the English Language (Cambridge University Press, 2002).

Wednesday, March 25, 2009

How to Procrastinate Like Leonardo da Vinci

How to Procrastinate Like Leonardo da Vinci
by W.A. Pannapacker

"Dimmi, dimmi se mai fu fatta cosa alcuna." ("Tell me, tell me if anything ever got done.")

— Attributed to Leonardo

On his deathbed, they say, Leonardo da Vinci regretted that he had left so much unfinished.

Leonardo had so many ideas; he was so ahead of his time. His notebooks were crammed with inventions: new kinds of clocks, a double-hulled ship, flying machines, military tanks, an odometer, the parachute, and a machine gun, to name just a few. If you wanted a new high-tech weapon, a gigantic bronze statue, or a method for moving a river, Leonardo could devise something that just might work.

But Leonardo rarely completed any of the great projects that he sketched in his notebooks. His groundbreaking research in human anatomy resulted in no publications — at least not in his lifetime. Not only did Leonardo fail to realize his potential as an engineer and a scientist, but he also spent his career hounded by creditors to whom he owed paintings and sculptures for which he had accepted payment but — for some reason — could not deliver, even when his deadline was extended by years. His surviving paintings amount to no more than 20, and five or six, including the "Mona Lisa," were still in his possession when he died. Apparently, he was still tinkering with them.

Nowadays, Leonardo might have been hired by a top research university, but it seems likely that he would have been denied tenure. He had lots of notes but relatively little to put in his portfolio.

Leonardo was the kind of person we have come to call a "genius." But he had trouble focusing for long periods on a single project. After he solved its conceptual problems, Leonardo lost interest until someone forced his hand. Even then, Leonardo often became a perfectionist about details that no one else could see, and the job just didn't get done.

A friar named Sabba di Castiglione said of Leonardo, "When he ought to have attended to painting in which no doubt he would have proved a new Apelles, he gave himself entirely to geometry, architecture, and anatomy." Leonardo worked on what interested him at the moment, cultivating his energies and insights, even when those activities were not directly related to his current commissions.

Leonardo, it seems, was a hopeless procrastinator. Or that's what we are supposed to believe, following the narrative started by his earliest biographer, Giorgio Vasari, and continued in the sermons of today's anti-procrastination therapists and motivational speakers. Leonardo, you see, was "afraid of success," so he never really gave his best effort. There was no chance of failure that way. Better to "self-sabotage" than to come up short.

Of course, the therapeutic interpretation of Leonardo — and, perhaps, of many of us in academe who emulate his pattern of seemingly nonproductive creativity — has a long history. Leonardo's reputation spread at exactly the right time for someone to become a symbol of this newly invented moral and psychological disorder: procrastination, a word that sounds just a little too much like what Victorian moralists used to call "self-abuse."

The unambiguously negative idea of procrastination seems unique to the Western world; that is, to Europeans and the places they have colonized in the last 500 years or so. It is a reflection of several historical processes in the years after the discovery of the New World: the Protestant Reformation, the spread of capitalist economics, the Industrial Revolution, the rise of the middle classes, and the growth of the nation-state. As any etymologist will tell you, words are battlegrounds for contending historical processes, and dictionaries are among the best chronicles of those struggles.

The magisterial Oxford English Dictionary presents a wide range of connotations for "procrastinate," ranging from the innocuous "to postpone" to the more negative "to postpone irrationally, obstinately, and out of sinful laziness." The earliest instances of procrastination do not carry the moral sting of the later usages. To procrastinate simply meant to delay for one reason or another, as one might reasonably delay eating dinner because it is only 3 in the afternoon. For example, in 1632 someone described "That benefite of the procrastinating of my Life." In other words, sometimes delay is good; it is a good idea — in this case — to delay the arrival of death.

Somehow it is not surprising that the first notable shift in the moral weight of the term is found in relation to business and the building of empires. In his 1624 account, The Generall Historie of Virginia, New-England, and the Summer Isles, Capt. John Smith — adventurer and founder of Jamestown — wrote of his gang of shiftless cavaliers, "Many such deuices [devices] they fained [feigned] to procrastinate the time." It was, no doubt, owing to this procrastination — not tyrannical leadership and impossible conditions — that Jamestown's early years were so unsuccessful. Eventually, Smith developed the policy of "He that will not worke shall not eate," since eating seems to be one of the few things about which one cannot procrastinate for long. It's a telling moment when procrastination becomes a crime against the state potentially punishable by death.

As time wore on, and the pace of life accelerated, the exhortations against procrastination in the English-speaking world rapidly became stronger. By 1893 we find someone not being accused of procrastination or warned against it, but accusing himself of the shameful vice: "I was too procrastinatingly lazy to expend even that amount of energy." The rhetoric of anti-procrastination — constructed by imperialists, religious zealots, and industrial capitalists — had become internalized. We no longer need to be told that to procrastinate is wrong. We know we are sinners and are ashamed. What can we do but work harder?

Like the English Romantic poet Samuel Taylor Coleridge, we live our lives with regret for what we have not done -- or have done imperfectly -- instead of taking satisfaction in what we have done, such as, in Coleridge's case, founding English Romanticism in his youth and producing, throughout his life, some of the best poetry and literary criticism ever composed, including his unfinished poem "Kubla Khan." But that was not enough; always, there was some magnum opus that Coleridge should have been writing, that made every smaller project seem like failure, and that led him to seek refuge from procrastinator's guilt in opium.

One thing about this dalliance with the OED is reassuring: If words emerge and evolve over time, it is possible to get behind them, to disconnect the relationship between "signifier" and "signified" so to speak. Since procrastination emerged from a specific historical context, it is not a universal and inescapable element of human experience. We can liberate ourselves from its gravitational pull of judgment, shame, and coercion. We can seize the term for ourselves and redefine it for our purposes. We can even make procrastination — like imagination — into something positive and maybe even essential for the productivity we value above all things.

In 1486, when Leonardo was still struggling with the Sforza horse, Giovanni Pico della Mirandola gave his famous "Oration on the Dignity of Man," encouraging artists to become divine creators in their own right. In this vision, God encourages Adam not to embrace human limitation but to lift himself upward into the realm of the angels.

It was this dream of human perfectibility that animated artists like Michelangelo, and, perhaps, forever rendered Leonardo unable to relinquish voluntarily any of his more serious artistic projects. As Vasari writes, "Leonardo, with his profound intelligence of art, commenced various undertakings, many of which he never completed, because it appeared to him that the hand could never give its due perfection to the object or purpose which he had in his thoughts, or beheld in his imagination." Through his many episodes of alleged procrastination, we see an artist who engages with the irresolvable conflict between unlimited aspiration and the acknowledgment of human limitation.

If Leonardo seemed endlessly distracted by his notebooks and experiments — instead of finishing the details of a painting he had already conceptualized — it was because he understood the fleeting quality of imagination: If you do not get an insight down on paper, and possibly develop it while your excitement lasts, then you are squandering the rarest and most unpredictable of your human capabilities, the very moments when one seems touched by the hand of God.

The principal evidence for that is, of course, Leonardo's notebooks. He kept those notebooks for at least 35 years, and more than 5,000 manuscript pages have survived — perhaps a third of the total — scattered in several archives and private collections. Leonardo's known writings would fill at least 20 volumes, but if one includes the lost materials, he probably wrote enough to fill a hundred.

Some of Leonardo's entries are short jottings; others are lengthy and elaborate. The notebooks give the impression of a mind always at work, even in the midst of ordinary affairs. He returned to some pages intermittently over many years, revising his thoughts and adding drawings and textual elaborations. Several compendiums have been compiled from his notebooks, but, like so many of us, Leonardo never used his voluminous private writings to produce a single published work.

For the most part, his notebooks — like the commonplace books that were kept by students in the Renaissance (Shakespeare's Hamlet had one, for example) — were a polymath's workshop: a place to try out ideas, to develop them over time, and to retain them until circumstances made them more immediately useful.

Leonardo's studies of how light strikes a sphere, for example, enabled the continuous modeling of the "Mona Lisa" and "St. John the Baptist." His work in optics might have delayed a project, but his final achievements in painting depended on the experiments -- physical and intellectual -- that he documented in the notebooks. Far from being a distraction -- as many of his contemporaries thought -- they represent a lifetime of productive brainstorming, a private working out of the ideas on which his more public work depended. To criticize this work is to believe that what we call genius somehow emerges from the mind fully formed -- like Athena from the head of Zeus -- without considerable advance preparation. Vasari's quotation of Pope Leo X has rung down through the centuries as a classic indictment of Leonardo's procrastinatory behavior: "Alas! This man will do nothing at all, since he is thinking of the end before he has made a beginning."

If creative procrastination, selectively applied, prevented Leonardo from finishing a few commissions — of minor importance when one is struggling with the inner workings of the cosmos — then only someone who is a complete captive of the modern cult of productive mediocrity that pervades the workplace, particularly in academe, could fault him for it.

Productive mediocrity requires discipline of an ordinary kind. It is safe and threatens no one. Nothing will be changed by mediocrity; mediocrity is completely predictable. It doesn't make the powerful and self-satisfied feel insecure. It doesn't require freedom, because it doesn't do anything unexpected. Mediocrity is the opposite of what we call "genius." Mediocrity gets perfectly mundane things done on time. But genius is uncontrolled and uncontrollable. You cannot produce a work of genius according to a schedule or an outline. As Leonardo knew, it happens through random insights resulting from unforeseen combinations. Genius is inherently outside the realm of known disciplines and linear career paths. Mediocrity does exactly what it's told, like the docile factory workers envisioned by Frederick Winslow Taylor.

Like so many of us in academe, Leonardo was endlessly curious; he did not rely on received wisdom but insisted on going back to the sources, most importantly nature itself. Would he have achieved more if his focus had been narrower and more rigorously professional? Perhaps he might have completed more statues and altarpieces. He might have made more money. His contemporaries, such as Michelangelo, would have had fewer grounds for mocking him as an impractical eccentric. But we might not remember him now any more than we normally recall the more punctual work of dozens of other Florentine artists of his generation.

Perhaps Leonardo's greatest discovery was not the perfectibility of man but its opposite: He found that even the most profound thought combined with the most ferocious application cannot accomplish something absolutely true and beautiful. We cannot touch the face of God. But we can come close, and his work, imperfect as it may be, is one of the major demonstrations of heroic procrastination in Western history: the acceptance of our imperfection — and the refusal to accept anything less than striving for perfection anyway.

Leonardo is just one example of an individual whose meaning has been constructed, in part, to combat the vice of procrastination; namely, the natural desire to pursue what one finds most interesting and enjoyable rather than what one finds boring and repellent, simply because one's life must be at the service of some compelling interest — some established institutional practice — that is never clearly explained, lest it be challenged and rejected.

Academe is full of potential geniuses who have never done a single thing they wanted to do because there were too many things that needed to be done first: the research projects, conference papers, books and articles — not one of them freely chosen: merely means to some practical end, a career rather than a calling. And so we complete research projects that no longer interest us and write books that no one will read; or we teach with indifference, dutifully boring our students, marking our time until retirement, and slowly forgetting why we entered the profession: because something excited us so much that we subordinated every other obligation to follow it.

If there is one conclusion to be drawn from the life of Leonardo, it is that procrastination reveals the things at which we are most gifted — the things we truly want to do. Procrastination is a calling away from something that we do against our desires toward something that we do for pleasure, in that joyful state of self-forgetful inspiration that we call genius.

W.A. Pannapacker is an associate professor of English at Hope College.

What made the Greeks laugh? by Mary Beard of TLS

What made the Greeks laugh?
Mary Beard on the familiar stand-bys of ancient humour and the schoolboy antics of murderous dictators
Mary Beard

In the third century BC, when Roman ambassadors were negotiating with the Greek city of Tarentum, an ill-judged laugh put paid to any hope of peace. Ancient writers disagree about the exact cause of the mirth, but they agree that Greek laughter was the final straw in driving the Romans to war.

One account points the finger at the bad Greek of the leading Roman ambassador, Postumius. It was so ungrammatical and strangely accented that the Tarentines could not conceal their amusement. The historian Dio Cassius, by contrast, laid the blame on the Romans’ national dress. “So far from receiving them decently”, he wrote, “the Tarentines laughed at the Roman toga among other things. It was the city garb, which we use in the Forum. And the envoys had put this on, whether to make a suitably dignified impression or out of fear – thinking that it would make the Tarentines respect them. But in fact groups of revellers jeered at them.” One of these revellers, he goes on, even went so far as “to bend down and shit” all over the offending garment. If true, this may also have contributed to the Roman outrage. Yet it is the laughter that Postumius emphasized in his menacing, and prophetic, reply. “Laugh, laugh while you can. For you’ll be weeping a long time when you wash this garment clean with your blood.”

Despite the menace, this story has an immediate appeal. It offers a rare glimpse of how the pompous, toga-clad Romans could appear to their fellow inhabitants of the ancient Mediterranean; and a rare confirmation that the billowing, cumbersome wrap-around toga could look as comic to the Greeks of South Italy as it does to us. But at the same time the story combines some of the key ingredients of ancient laughter: power, ethnicity and the nagging sense that those who mocked their enemies would soon find themselves laughed at. It was, in fact, a firm rule of ancient “gelastics” – to borrow a term (from the Greek gelan, to laugh) from Stephen Halliwell’s weighty new study of Greek laughter – that the joker was never far from being the butt of his own jokes. The Latin adjective ridiculus, for example, referred both to something that was laughable (“ridiculous” in our sense) and to something or someone who actively made people laugh.

Laughter was always a favourite device of ancient monarchs and tyrants, as well as being a weapon used against them. The good king, of course, knew how to take a joke. The tolerance of the Emperor Augustus in the face of quips and banter of all sorts was still being celebrated four centuries after his death. One of the most famous one-liners of the ancient world, with an afterlife that stretches into the twentieth century (it gets retold, with a different cast of characters but the same punchline, both in Freud and in Iris Murdoch’s The Sea, The Sea), was a joking insinuation about Augustus’ paternity. Spotting, so the story goes, a man from the provinces who looked much like himself, the Emperor asked if the man’s mother had ever worked in the palace. “No”, came the reply, “but my father did.” Augustus wisely did no more than grin and bear it.

Tyrants, by contrast, did not take kindly to jokes at their own expense, even if they enjoyed laughing at their subjects. Sulla, the murderous dictator of the first century BC, was a well-known philogelos (“laughter-lover”), while schoolboy practical jokes were among the techniques of humiliation employed by the despot Elagabalus. He is said to have had fun, for example, seating his dinner guests on inflatable cushions, and then seeing them disappear under the table as the air was gradually let out. But the defining mark of ancient autocrats (and a sign of power gone – hilariously – mad) was their attempt to control laughter. Some tried to ban it (as Caligula did, as part of the public mourning on the death of his sister). Others imposed it on their unfortunate subordinates at the most inappropriate moments. Caligula, again, had a knack for turning this into exquisite torture: he is said to have forced an old man to watch the execution of his son one morning and, that evening, to have invited the man to dinner and insisted that he laugh and joke. Why, asks the philosopher Seneca, did the victim go along with all this? Answer: he had another son.

Ethnicity, too, was good for a laugh, as the story of the Tarentines and the toga shows. Plenty more examples can be found in the only joke book to have survived from the ancient world. Known as the Philogelos, this is a composite collection of 260 or so gags in Greek probably put together in the fourth century AD but including – as such collections often do – some that go back many years earlier. It is a moot point whether the Philogelos offers a window onto the world of ancient popular laughter (the kind of book you took to the barber’s shop, as one antiquarian Byzantine commentary has been taken to imply), or whether it is, more likely, an encyclopedic compilation by some late imperial academic. Either way, here we find jokes about doctors, men with bad breath, eunuchs, barbers, men with hernias, bald men, shady fortune-tellers, and more of the colourful (mostly male) characters of ancient life.

Pride of place in the Philogelos goes to the “egg-heads”, who are the subject of almost half the jokes for their literal-minded scholasticism (“An egg-head doctor was seeing a patient. ‘Doctor’, he said, ‘when I get up in the morning I feel dizzy for 20 minutes.’ ‘Get up 20 minutes later, then’”). After the “egg-heads”, various ethnic jokes come a close second. In a series of gags reminiscent of modern Irish or Polish jokes, the residents of three Greek towns – Abdera, Kyme and Sidon – are ridiculed for their “how many Abderites does it take to change a light bulb?” style of stupidity. Why these three places in particular, we have no idea. But their inhabitants are portrayed as being as literal-minded as the egg-heads, and even more obtuse. “An Abderite saw a eunuch talking to a woman and asked if she was his wife. When he replied that eunuchs can’t have wives, the Abderite asked, ‘So is she your daughter then?’” And there are many others on predictably similar lines.

The most puzzling aspect of the jokes in the Philogelos is the fact that so many of them still seem vaguely funny. Across two millennia, their hit-rate for raising a smile is better than that of most modern joke books. And unlike the impenetrably obscure cartoons in nineteenth-century editions of Punch, these seem to speak our own comic language. In fact, the stand-up comedian Jim Bowen has recently managed to get a good laugh out of twenty-first-century audiences with a show entirely based on jokes from the Philogelos (including one he claims – a little generously – to be a direct ancestor of Monty Python’s Dead Parrot sketch).

Why do they seem so modern? In the case of Jim Bowen’s performance, careful translation and selection has something to do with it (I doubt that contemporary audiences would split their sides at the one about the crucified athlete who looked as if he was flying instead of running). There is also very little background knowledge required to see the point of these stories, in contrast to the precisely topical references that underlie so many Punch cartoons. Not to mention the fact that some of Bowen’s audience are no doubt laughing at the sheer incongruity of listening to a modern comic telling 2,000-year-old gags, good or bad.

But there is more to it than that. It is not, I suspect, much to do with supposedly “universal” topics of humour (though death and mistaken identity bulked large then as now). It is more a question of a direct legacy from the ancient world to our own, modern, traditions of laughter. Anyone who has been a parent, or has watched parents with their children, will know that human beings learn how to laugh, and what to laugh at (clowns OK, the disabled not). On a grander scale, it is – in large part at least – from the Renaissance tradition of joking that modern Western culture itself has learned how to laugh at “jokes”; and that tradition looked straight back to antiquity. One of the favourite gags in Renaissance joke books was the “No-but-my-father-did” quip about paternity, while the legendary Cambridge classicist Richard Porson is supposed to have claimed that most of the jokes in the famous eighteenth-century joke book Joe Miller’s Jests could be traced back to the Philogelos. We can still laugh at these ancient jokes, in other words, because it is from them that we have learned what “laughing at jokes” is.

This is not to say, of course, that all the coordinates of ancient laughter map directly onto our own. Far from it. Even in the Philogelos a few of the jokes remain totally baffling (though perhaps they are just bad jokes). But, more generally, Greeks and Romans could laugh at different things (the blind, for example – though rarely, unlike us, the deaf); and they could laugh, and provoke laughter, on different occasions to gain different ends. Ridicule was a standard weapon in the ancient courtroom, as it is only rarely in our own. Cicero, antiquity’s greatest orator, was also by repute its greatest joker; far too funny for his own good, some sober citizens thought.

There are some particular puzzles, too, ancient comedy foremost among them. There may be little doubt that the Athenian audience laughed heartily at the plays of Aristophanes, as we can still. But very few modern readers have been able to find much to laugh at in the hugely successful comedies of the fourth-century dramatist Menander, formulaic and moralizing as they were. Are we missing the joke? Or were they simply not funny in that laugh-out-loud sense? Discussing the plays in Greek Laughter, Halliwell offers a possible solution. Conceding that “Menandrian humour, in the broadest sense of the term, is resistant to confident diagnosis” (that is, we don’t know if, or how, it is funny), he neatly turns the problem on its head. They are not intended to raise laughs; rather “they are actually in part about laughter”. Their complicated “comic” plots, and the contrasts set up within them between characters we might want to laugh at and those we want to laugh with, must prompt the audience or reader to reflect on the very conditions that make laughter possible or impossible, socially acceptable or unacceptable. For Halliwell, in other words, Menander’s “comedy” functions as a dramatic essay on the fundamental principles of Greek gelastics.

On other occasions, it is not always immediately clear how or why the ancients ranked things as they did, on the scale between faintly amusing and very funny indeed. Halliwell mentions in passing a series of anecdotes that tell of famous characters from antiquity who laughed so much that they died. Zeuxis, the famous fourth-century Greek painter, is one. He collapsed, it is said, after looking at his own painting of an elderly woman. The philosopher Chrysippus and the dramatist Polemon, a contemporary of Menander, are others. Both of these were finished off, as a similar story in each case relates, after they had seen an ass eating some figs that had been prepared for their own meal. They told their servants to give the animal some wine as well – and died laughing at the sight.

The conceit of death by laughter is a curious one and not restricted to the ancient world. Anthony Trollope, for example, is reputed to have “corpsed” during a reading of F. Anstey’s comic novel Vice Versa. But what was it about these particular sights (or Vice Versa, for that matter) that proved so devastatingly funny? In the case of Zeuxis, it is not hard to detect a well-known strain of ancient misogyny. In the other cases, it is presumably the confusion of categories between animal and human that produces the laughter – as we can see in other such stories from antiquity.

For a similar confusion underlies the story of one determined Roman agelast (“non-laugher”), the elder Marcus Crassus, who is reputed to have cracked up just once in his lifetime. It was after he had seen a donkey eating thistles. “Thistles are like lettuce to the lips of a donkey”, he mused (quoting a well-known ancient proverb) – and laughed. There is something reminiscent here of the laughter provoked by the old-fashioned chimpanzees’ tea parties, once hosted by traditional zoos (and enjoyed for generations, until they fell victim to modern squeamishness about animal performance and display). Ancient laughter, too, it seems, operated on the boundaries between human and other species. Highlighting the attempts at boundary crossing, it both challenged and reaffirmed the division between man and animal.

Halliwell insists that one distinguishing feature of ancient gelastic culture is the central role of laughter in a wide range of ancient philosophical, cultural and literary theory. In the ancient academy, unlike the modern, philosophers and theorists were expected to have a view about laughter, its function and meaning. This is Halliwell’s primary interest.

His book offers a wide survey of Greek laughter from Homer to the early Christians (an increasingly gloomy crowd, capable of seeing laughter as the work of the Devil), and the introduction is quite the best brief overview of the role of laughter in any historical period that I have ever read. But Greek Laughter is not really intended for those who want to discover what the Greeks found funny or laughed at. There is, significantly, no discussion of the Philogelos and no entry for “jokes” in the index. The main focus is on laughter as it appears within, and is explored by, Greek literary and philosophical texts.

In those terms, some of his discussions are brilliant. He gives a clear and cautious account of the views of Aristotle – a useful antidote to some of the wilder attempts to fill the gap caused by the notorious loss of Aristotle’s treatise on comedy. But the highlight is his discussion of Democritus, the fifth-century philosopher and atomist, also renowned as antiquity’s most inveterate laugher. An eighteenth-century painting of this “laughing philosopher” decorates the front cover of Greek Laughter. Here Democritus adopts a wide grin, while pointing his bony finger at the viewer. It is a slightly unnerving combination of jollity and threat.

The most revealing ancient discussion of Democritus’ laughing habit is found in an epistolary novel of Roman date, included among the so-called Letters of Hippocrates – a collection ascribed to the legendary founding father of Greek medicine, but in fact written centuries after his death. The fictional exchanges in this novel tell the story of Hippocrates’ encounter with Democritus. In the philosopher’s home city, his compatriots had become concerned at the way he laughed at everything he came across (from funerals to political success) and concluded that he must be mad. So they summoned the most famous doctor in the world to cure him. When Hippocrates arrived, however, he soon discovered that Democritus was saner than his fellow citizens. For he alone had recognized the absurdity of human existence, and was therefore entirely justified in laughing at it.

Under Halliwell’s detailed scrutiny, this epistolary novel turns out to be much more than a stereotypical tale of misapprehension righted, or of a madman revealed to be sane. How far, he asks, should we see the story of Democritus as a Greek equivalent of the kind of “existential absurdity” now more familiar from Samuel Beckett or Albert Camus? Again, as with his analysis of Menander, he argues that the text raises fundamental questions about laughter. The debates staged between Hippocrates and Democritus amount to a series of reflections on just how far a completely absurdist position is possible to sustain. Democritus’ fellow citizens take him to be laughing at literally everything; and, more philosophically, Hippocrates wonders at one point whether his patient has glimpsed (as Halliwell puts it) “a cosmic absurdity at the heart of infinity”. Yet, in the end, that is not the position that Democritus adopts. For he regards as “exempt from mockery” the position of the sage, who is able to perceive the general absurdity of the world. Democritus does not, in other words, laugh at himself, or at his own theorizing.

What Halliwell does not stress, however, is that Democritus’ home city is none other than Abdera – the town in Thrace whose people were the butt of so many jokes in the Philogelos. Indeed, in a footnote, he briefly dismisses the idea “that Democritean laughter itself spawned the proverbial stupidity of the Abderites”. But those interested in the practice as much as the theory of ancient laughter will surely not dismiss the connection so quickly. For it was not just a question of a “laughing philosopher” or of dumb citizens who didn’t know what a eunuch was. Cicero, too, could use the name of the town as shorthand for a topsy-turvy mess: “It’s all Abdera here”, he writes of Rome. Whatever the original reason, by the first century BC, “Abdera” (like modern Tunbridge Wells, perhaps, though with rather different associations) had become one of those names that could be guaranteed to get the ancients laughing.



Stephen Halliwell
GREEK LAUGHTER
A study of cultural psychology from Homer to early Christianity
632pp. Cambridge University Press. £70 (paperback, £32.50). US $140 (paperback, $65).
978 0 521 88900 1



Mary Beard is the author of The Roman Triumph published in 2007 and Pompeii: The life of a Roman town, 2008. She is Classics editor of the TLS.