
The Old Way


“What do ye do when ye see a whale, men?”
“Sing out for him!” was the impulsive rejoinder from a score of clubbed voices.
“Good!” cried Ahab, with a wild approval in his tones; observing the hearty animation into which his unexpected question had so magnetically thrown them.
“And what do ye next, men?”
“Lower away, and after him!”
“And what tune is it ye pull to, men?”
“A dead whale or a stove boat!”

The call-and-response of Ahab’s maniacal pep rally—a string of, as Ishmael puts it, “seemingly purposeless questions” with which the Pequod’s captain stirs his crew into a bloodthirsty furor for whale-killing—culminates in what one scholar of American folklore has called the “universal motto” of nineteenth-century whalemen: “A dead whale or a stove boat!” Like a seagoing version of the Depression-era bumper slogan “California or bust,” the phrase pithily evokes both the mariners’ desperate dedication to the pursuit and destruction of their prey and the extreme risks they incurred in the process. “A dead whale” was, of course, the desired outcome of the chase, but “a stove boat”—a wrecked mess of splintered timber, fouled tackle, and flailing bodies—was just as likely. For the fictional crew of the Pequod, as for the real whalemen of the day, whaling was more mortal combat than straightforward hunt: Six sailors in a flimsy, open whaleboat, armed with only handheld harpoons and lances, pitting themselves at every opportunity against the singular terror of a true sea monster, the sperm whale, an animal that, when fully grown, could measure sixty-two feet in length, weigh eighty tons, and wield, to deadly purpose, an eighteen-foot jaw studded with seven-inch teeth.

Into the Deep: America, Whaling & the World, a new American Experience documentary by Ric Burns, is alive with the all-or-nothing ethos of the nineteenth-century whaleman. Drawing its central narrative arc from two of the most famous man-versus-whale tales of the era—the true, though at the time unthinkable, story of the Essex, a whaleship sunk in the middle of the Pacific by an enraged sperm whale, and the dark masterpiece it partially inspired, Herman Melville’s Moby-Dick—the film follows the history of the American trade as it evolved from the colonial practice of “drift whaling” through the so-called Golden Age, which lasted from shortly after the War of 1812 until the commercialization of petroleum after it was successfully drilled in 1859. During that time, Nantucket, New Bedford, and other port towns sent hundreds of ships all over the globe in search of leviathans. This was before modern whaling technologies reduced the drama and heroics of the chase to mere assembly-line slaughter, when whaling still represented, in the words of several scholars interviewed in the film, a “primordial . . . epic hunt, . . . tap[ping] into something very basic about human existence and experience,” “a spiritual endeavor,” and a “peculiar combination of romance, . . . danger, and exoticism.” Those brave enough to ship out on a Yankee whaler could expect to hunt the biggest game, explore new corners of the ocean and faraway lands, dally with foreign women, and hack to pieces and boil down behemoth carcasses.

Wait. Hack and boil carcasses?

For all the antiquarian nostalgia that risks tinting our view of the fishery’s past, Into the Deep never loses sight of the simple fact that whaling was an industry—one of the largest, most profitable, and most important businesses of its day, involving tens of thousands of workers at sea and on shore, and millions of dollars in annual investments and returns. It is a refreshingly clear perspective for those of us who may have thumbed quickly past the more technical chapters of Moby-Dick, or who imagine whaling through the narrow lens of those impressive painted and scrimshawed scenes of vicious whales smashing boats and tossing sailors in the air. Men went to sea for any number of reasons—to make a living, to escape the law, to find themselves—but once aboard a whaleship, their job was to supply the rapidly industrializing Western world with oil for its lamps, candles, and machinery, and baleen for its parasol ribs, horsewhips, and corsets. And as author Nathaniel Philbrick, one of the experts appearing in the film, said in a phone interview: “It’s not as though the harpoon hit the whale and—poof—magically it was turned into a profitable commodity.” Effecting that transformation required some of the most difficult and disgusting labor of any industry of the time.

“We have to work like horses and live like pigs,” wrote Robert Weir, a greenhand (or first-time sailor), in his diary. His experiences aboard the whaleship Clara Bell from 1855 to 1858 correspond to many scenes from Into the Deep. After only forty-eight hours at sea, his “eyes,” he said, were already “beginning to open” to the harsh realities of his “rather dearly bought independence.” He had shipped out to cut ties with those on land—his family and creditors—but to what end? The life of a whaleman was not, it turned out, all battling leviathans, exploring exotic isles, and cavorting with natives. In fact, for the most part, it was downright miserable. The quarters were cramped, the food was awful, and the work, when there was any to be done, positively backbreaking. After one especially long day, Weir jotted in his diary, it “rained pretty hard in the evening—and I got wet and tired tending the rigging and sails. Tumbled into my bunk with exhausted body and blistered hands.” To this account he appended a one-word commentary, as bitterly sarcastic as it was short: “Romantic.”

Although wooden whalers required, as Weir put it, “innumerable jobs” just to keep afloat and moving forward, the really hard work of whaling didn’t begin until after the brief thrills of the chase were brought to a successful conclusion. If a whaleboat crew were skilled and lucky enough to kill a whale—to make it spout blood and roll “fin out,” in the colorful language of the fishery—the men would then have to tow the carcass to the waiting mother ship, which could be anywhere from a few yards to several miles distant. As Mary K. Bercaw Edwards, a professor of maritime literature at Williams College–Mystic Seaport Program, points out in the film, dragging tens of tons of deadweight through the water under oar was anything but easy: Six men working themselves raw could only achieve a top speed of one mile per hour. Even a mariner seasoned by years in the merchant service described towing a dead whale as “one of the most tedious and straining undertakings I have ever assisted at.”

And, as some of the archival photographs and footage Burns dredged up for Into the Deep graphically attest, things didn’t get any easier after the whaleboat met the ship. Brought alongside, the corpse was secured to the starboard side of the vessel, whale’s head to ship’s stern, by a large chain about its flukes and sometimes a wooden beam run through a hole cut into its head. Soon, all hands—except, in American whalers, the captain—were given over to the bloody task of “cutting-in,” by which the whale was literally peeled of its blubber—“as an orange is sometimes stripped by spiralizing it” is the simile Melville and other salts and scholars have used to illuminate the process. With a few deft slashes of a fifteen-foot cutting spade, an experienced mate would loosen a portion of flesh and blubber between the animal’s eye and fin, while another man, braving the sharks that were by now swarming the grisly mass, boarded the body, and fixed a huge hook to the cut swath of whale. Drawn up into the rigging, this hook began ripping a long strip of blubber, called a “blanket-piece,” from the carcass. Measuring some five feet wide, fifteen feet long, and ten to twenty inches thick, blanket-pieces were borne aloft and aboard, where they could be cut down to sizes suitable for “trying-out,” the next step.

Gruesome as cutting-in may seem to most of us, unaccustomed as we are to the scenes that unfold daily in slaughterhouses and aboard commercial fishing vessels, it was really nothing more than whale-scale butchery—certainly not the kind of thing any hunter, especially one who had just gone through all the trouble and gore of killing a whale, would cringe at. But trying-out, the process of boiling oil from the stripped blubber, was another story. Working around the clock in six-hour shifts for one to three days (depending on the size of the whale killed), the crew kept the two giant copper cauldrons of the try-works burning, tossing in hunks of blubber and barreling the gallons and gallons of oil they rendered. Almost every whaling memoir contains some stomach-turning account of this process. Melville’s highly poetic version is quoted in the film, but Charles Nordhoff’s 1856 Whaling and Fishing, with which the author aimed, he said, “to give a plain common sense picture of that about which a false romance throws many charms,” offers one of the most visceral litanies of the distasteful conditions trying-out created aboard ship. “Everything,” the seaman wrote, “is drenched with oil. Shirts and trowsers are dripping with the loathsome stuff. The pores of the skin seem to be filled with it. Feet, hands and hair, all are full. The biscuit you eat glistens with oil, and tastes as though just out of the blubber room. The knife with which you cut your meat leaves upon the morsel, which nearly chokes you as you reluctantly swallow it, plain traces of the abominable blubber. Every few minutes it becomes necessary to work at something on the lee side of the vessel, and while there you are compelled to breath in the fetid smoke of the scrap fires, until you feel as though filth had struck into your blood, and suffused every vein in your body. From this smell and taste of blubber, raw, boiling and burning, there is no relief or place of refuge.”


And there was more. To quote Melville: “It should not have been omitted that previous to completely stripping the body of the leviathan, he was beheaded.” As the blanket-pieces were rent from the dead whale, its body turned in the water, straining against the fixed head, until, with some more plying of a spade, the two portions were wrenched apart. If the head was of a manageable size, it was brought on deck; if not, it was rigged to the side of the ship, nose down. Right, bowhead, and fin whales were relieved of their baleen, while sperm whales had the spermaceti, a substance contained in a head organ known as the case, bailed out in bucketfuls. “This is the good stuff,” says Philbrick in the film. “It’s as clear as vodka when you first open” the spermaceti organ, “but as soon as it touches air, it begins to oxidize,” taking on the white, waxy properties that caused early whalemen to mistake it for the animal’s semen. Scientists still don’t know what function spermaceti serves in whale physiology, but for the men and women of the nineteenth century, it was simply the best illuminant and lubricant money could buy. In fact, the light given off by candles manufactured with spermaceti was considered so superior to that of other types of candles that it served as the benchmark for all artificial light: One candlepower, as defined by the English Metropolitan Gas Act of 1860, was equivalent to the light of a pure spermaceti candle of one-sixth pound burning at a rate of one hundred and twenty grains per hour. The spermaceti-based unit survived until an international committee of standards agencies redefined the measure in 1909 to conform with the luminous properties of the then recently invented electric carbon filament bulb.
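A back-of-the-envelope check, assuming the avoirdupois pound of 7,000 grains, shows what that standard implied in practice:

$$\tfrac{1}{6}\ \text{lb} \times 7{,}000\ \tfrac{\text{grains}}{\text{lb}} \approx 1{,}167\ \text{grains}, \qquad \frac{1{,}167\ \text{grains}}{120\ \text{grains per hour}} \approx 9.7\ \text{hours}.$$

In other words, the reference spermaceti candle was specified to burn for roughly ten hours.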

Finally, with all the blubber processed, all the spermaceti bailed, and the decapitated corpse left for the sharks and scavenging birds, the crew set about giving the ship a thorough scouring. This was accomplished with a combination of strong alkali and sand, or sometimes an effective concoction of human urine and whale blubber ash. Only when the ship was returned to its pre-processing shine—“with a sort of smug holiday look about her,” wrote one sailor—did the men even attempt to clean themselves. “Happy day it was for me,” remarked Nordhoff, “when I was once more permitted to put on clean clothes, and could eat biscuit without oil, and meat unaccompanied by the taste of blubber.” A well-earned respite to be sure, but, of course, only temporary: The entire laborious, nauseating operation, from chasing down to trying-out to cleaning up, would be repeated perhaps as many as one hundred and fifty times until, if the cruise was a “greasy” one—the whalemen’s esoteric but wholly appropriate word for “good,” “fortunate,” or “lucky”—the hold practically overflowed with whale oil, spermaceti, and baleen. A prospect that, one imagines, might have caused more than a few greenhands to hesitate for a moment before yelling, “There she blows!” at their next glimpse of a whale.

And yet, for all the hardships involved, men shipped with Yankee whalers in droves throughout the Golden Age. The experience of whaling was, it seems, something irreducible to the sum of its working parts. “At the end of the day,” Burns says, whaling in the nineteenth century was still “an extraordinarily primal, existential confrontation between human beings and what was really the last frontier of untamed nature, the oceans of the world.”

Indeed, Melville, Weir, Nordhoff, and countless other whalemen of the time didn’t just “work like horses and live like pigs”; they had adventures, too. They took on and dispatched the largest animals on the planet, lived as captives among cannibals, saw islands no one had ever seen before, plumbed the depths of their souls and psyches while scanning the ocean from the masthead.

“At some point,” Burns says, “one wants to see whaling for what it was and understand the crucial admixture of cruelty, and greed, and nobility, and courage, and generosity, and selfishness, and withal the magnificence of the enterprise, even as one says, ‘Thank God it’s gone. Thank God we’re not out there on three-hundred-ton ships prowling the world, looking for mammals to turn into umbrella stays, lamp oil, and lubricant.’”


How English erased its roots to become the global tongue of the 21st century

‘Throw away your dictionaries!’ is the battle cry as a simplified global hybrid of English conquers cultures and continents. In this extract from his new book, Globish, Robert McCrum tells the story of a linguistic phenomenon – and its links to big money

Globalisation is a word that first slipped into its current usage during the 1960s; and the globalisation of English, and English literature, law, money and values, is the cultural revolution of my generation. Combined with the biggest IT innovations since Gutenberg, it continues to inspire the most comprehensive transformation of our society in 500, even 1,000, years. This is a story I have followed, and contributed to, in a modest way, ever since I wrote the BBC and PBS television series The Story of English, with William Cran and Robert MacNeil, in the early 1980s. When Bill Gates was still an obscure Seattle software nerd, and the latest cool invention to transform international telephone lines was the fax, we believed we were providing a snapshot of the English language at the peak of its power and influence, a reflection of the Anglo-American hegemony. Naturally, we saw our efforts as ephemeral. Language and culture, we knew, are in flux. Any attempts to pin them down would be antiquarianism at best, doomed at worst. Besides, some of the experts we talked to believed that English, like Latin before it, was already showing signs of breaking up into mutually unintelligible variants. The Story of English might turn out to be a last hurrah.

We were, of course, dead wrong. The global power and influence of Anglo-American language and culture in the broadest sense were about to hit another new high. When the cold war ended, after the Berlin Wall came down, and once the internet took off in the 1990s, there was an astonishing new landscape to explore and describe. Sometimes during these years the spread of Anglo-American culture seemed like the fulfilment of the ambition expressed by America’s founding fathers to play a role “among the Powers of the Earth” derived, as they put it, from “the Laws of Nature”. The world had become a planet composed of some 193 countries, all enjoying a greater or lesser familiarity with English and Englishness. Was this the end of Babel?

A hundred years ago, one description of this phenomenon might have been “Anglo- or America-philia”, but that will not wash today. Anglo-American culture has so many contemporary faces. It can conjure up elderly gentlemen of Germanic demeanour in brogues and tweed jackets, or a certain kind of American Wasp taking tea and crumpets in Fortnum & Mason. Or it can be found in the angry banlieues of Paris where, echoing the universal tongues of rap music and football, many of the kids are called “Steeve”, “Marky”, “Britney” or even “Kevin”. Again, it can convey the enthusiasm for English of, for example, DJ Static (aka Mike Lai), a Montreal rapper who came to Canada from Hong Kong as a boy of 11 and learned English by repeating the lyrics of hip-hop songs. Or it can describe the excitement of English-language students in Japan who, in the spring of 2009, were filmed by the BBC solemnly repeating extracts from the speeches of Barack Obama as part of their training.

There is also the demotic energy of English in, for instance, contemporary Los Angeles, which is both the multicultural capital of Hispanic California and the headquarters of a global movie business, the American dream factory. Cross the Pacific and the perspective changes again. There, you will find Nury Vittachi, aka “Mister Jam”, a journalist and novelist of Australian descent now based in Hong Kong, who describes the lingua franca of the Far East as “Englasian” – a mostly English vocabulary set into Chinese and Hindi syntax. “Throw away your dictionaries,” writes Vittachi. “The unwritten language Englasian really is threatening to supplant English as the business language of Asia.” Still others speak of “Panglish”, the global tongue.

At the dawn of a new millennium the phenomenon of English seems more vivid and universal than ever before. Like a Jackson Pollock of language, countless new variants are adding to the amazing Technicolor texture of the overall picture: urban patois like “Jafaikan”; or local Asian hybrids like Konglish (English in South Korea) and Manglish (Malay and English); or contemporary slang like “cheddar”, “phat” and “noob”, for “money”, “wonderful/great” and “somebody new/ignorant”. Yet, even at its zenith, this has been a fleeting moment, with a tragic reckoning: the 9/11 attack on the twin towers, the Iraq war and the polarisation of the global community in the “war on terror”. Overnight, the benign sensations of the 1990s were replaced by something much more chill and menacing. Since 2001 the achievements of what might be seen as the American century have been swiftly obliterated.

During the presidency of George W Bush American language and culture became associated with unilateral and often irrational policies of a wounded superpower, acts of aggression, masquerading as self-defence and motivated by rage, insecurity and fear. In former times this phase might have resulted in a retreat from the dominant language and culture of the moment. But this did not happen – for two main reasons. First, in 2008, after almost a decade of angry chauvinism, American democracy seemed to rediscover its purpose and elected Barack Obama. Secondly, so-called “soft power” has its own trajectory; there was always an important distinction to be drawn between culture and foreign policy. Young Iranians could hate George W Bush but idolise American pop stars, burn the stars and stripes but splash out on American-style jeans and computers.

Moreover, English had developed a supranational momentum that gave it a life independent of its British, and more especially its American, roots. Already multinational in expression, English was becoming a global phenomenon with a fierce, inner multinational dynamic, an emerging lingua franca described by the historian Benedict Anderson as “a kind of global-hegemonic post-clerical Latin”.

Today there is almost no limit to the scope of this subject. The world’s varieties of English range from the “crazy English” taught to the Chinese-speaking officials of the Beijing Olympics, to the “voice and accent” manuals issued by Infosys and Microsoft at their Bangalore headquarters. Thus, English today embodies a paradox. To some, it seems to carry the seeds of its own decay. In the heartlands of the mother tongue, there are numerous anxieties about its future: in the United States, language conservatives agonise about the Hispanic threat to American English. But simultaneously, and more stealthily – almost unnoticed, in fact – the real challenge to the English of Shakespeare and the King James Bible comes less from alien speech than from the ceaseless amendments made to English in a myriad daily transactions across the known world. Here, global English, floating free from its troubled British and American past, has begun to take on a life of its own. My prediction is that the 21st-century expression of British and American English – the world’s English – is about to make its own declaration of independence from the linguistic past, in both syntax and vocabulary.

In The Prodigal Tongue, “dispatches from the future of English”, Mark Abley has a telling passage about the “Latvians and Macedonians, Indonesians and Peruvians, Israelis and Egyptians” who sign up for the official online forum of the rock group Coldplay. To these fans, writes Abley, it doesn’t matter that the band consists of three Englishmen and a Scot singing in a tongue that was once confined to part of an island off Europe’s coast. Now, wherever on the planet these fans happen to live, music connects them. So does language. As long as they’re willing to grope for words in the accelerating global language that Coldplay speaks, the forum gives all its members a chance to speak.

This is the interactive, ever-changing world of global English. At the beginning of the 21st century, rarely has a language and its culture enjoyed such an opportunity to represent the world. In crude numbers alone, English is used, in some form, by approximately 4 billion people, one-third of the planet, and outnumbered only by the speakers of Chinese, approximately 350 million of whom also speak some kind of English.

This has many expressions. In an offshoot of “crazy English”, Harry Potter and the Deathly Hallows was pirated in several Chinese versions, blending storylines lifted from Tolkien and kung-fu epics, with titles like Harry Potter and the Chinese Empire, and Harry Potter and Leopard Walk up to Dragon. At the same time, JK Rowling’s triumphant coda to her series would also be launched, in English, from Reykjavik to Quito. This is what happened on the night of 21 July 2007 when a global fraternity of juvenile wizards mobbed bookstore checkouts across the world, with an immediate sale (3.5 million copies in the first week) of the English-language edition. In Germany, the Guardian reported, “muggle”, “quidditch” and “house elf” were becoming “part of German schoolchildren’s vocabulary”.

The world’s appetite for English language and culture means that the Royal Shakespeare Company will tour its “Complete Shakespeare” productions worldwide, Manchester United will plan its matches to suit Japanese television schedules and the House of Lords will rule on the use of torture in the “war on terror” using arguments whose roots lie in the debates surrounding Magna Carta. The same pressures mean that, in 2006–07, about 80% of the world’s home pages on the world wide web were “in some kind of English”, compared to German (4.5%) and Japanese (3.1%), while Microsoft publishes no fewer than 18 versions of its “English language” spellcheckers.

The India of Hobson-Jobson has also found a new global audience. A film such as Mira Nair’s Monsoon Wedding is typical of the world’s new English culture. The Indian bridegroom has a job in Houston. The wedding guests jet in from Melbourne and Dubai and speak in a mishmash of English and Hindi. Writing in the Sunday Times, Dominic Rushe noted that Bollywood English is “hard to reproduce in print, but feels something like this: ‘Yudhamanyus ca vikranta uttanaujas ca viryanavan: he lives life in the fast lane.’” Every English-speaking visitor to India watches with fascination the facility with which contemporary Indians switch from Hindi or Gujarati into English, and then back into a mother tongue. In 2009, the film Slumdog Millionaire took this a stage further. Simon Beaufoy’s script, a potpourri of languages, adapted from an Indian novel, was shot in Mumbai, with a British and Indian cast, by Scottish director Danny Boyle, but launched worldwide with an eye on Hollywood’s Oscars, where it eventually cleaned up.

India illustrates the interplay of British colonialism and a booming multinational economy. Take, for instance, the 2006 Man Booker prize. First, the result was broadcast on the BBC World Service from Delhi to Vancouver. The winner was The Inheritance of Loss by Kiran Desai, an Indian-born writer who had attended writing classes in New York. So far removed from any English experience, though steeped in its literary tradition, was The Inheritance of Loss that, finally, the British critic John Sutherland was moved to describe Desai’s work as “a globalised novel for a globalised world”. The writer herself is emblematic of the world’s new culture: educated in Britain and America, she wrote her novel in her mother Anita Desai’s house in the foothills of the Himalayas, and boasts on her website of feeling “no alienation or dislocation” in her transmigration between three continents.

The Inheritance of Loss is the literary representation of a contemporary experience. Desai says that her book “tries to capture what it means to live between east and west, and what it means to be an immigrant”; it also explores “what happens when a western element is introduced to a country that is not of the west”. She also asks: “How does the imbalance between these two worlds change a person’s thinking and feeling? How do these changes manifest themselves in a personal sphere over time?” Or, she might have added, in a linguistic and cultural sphere.

By 2010 Britain’s role in the world, no longer colonial, is to participate in the international rendering of English and its culture. Like an elderly relative at a teenage rave, the UK sponsors the consumption of English as a highly desirable social and cultural force, “the worldwide dialect of the third millennium”.

Those are the words of Jean-Paul Nerrière, a French-speaking former IBM executive and amateur linguistic scholar. In 1995, Nerrière, who had noticed that non-native English-speakers in the Far East communicated more successfully in English with their Korean and Japanese clients than competing British or American executives, formulated the idea of “decaffeinated English” and, in a moment of inspiration, christened it “Globish”. His idea quickly caught on. In The Last Word, his dispatches from the frontline of language change, journalist Ben Macintyre writes: “I was recently waiting for a flight in Delhi, when I overheard a conversation between a Spanish UN peacekeeper and an Indian soldier. The Indian spoke no Spanish; the Spaniard spoke no Punjabi. Yet they understood one another easily. The language they spoke was a highly simplified form of English, without grammar or structure, but perfectly comprehensible, to them and to me. Only now do I realise that they were speaking “Globish”, the newest and most widely spoken language in the world.”

For Nerrière, Globish starts from a utilitarian vocabulary of some 1,500 words, is designed for use by non-native speakers, and is currently popularised in two (French-language) handbooks, Découvrez le Globish and Parlez Globish. As a concept, “Globish” is now quite widely recognised across the European Union, and is often referred to by Europeans who use English in their everyday interactions.

In 2007, having read about Jean-Paul Nerrière in the International Herald Tribune, I interviewed him in Paris. He turned out to be a delightful Frenchman, with quixotic ambitions not only for global fraternity but also for the preservation of the French language. “Globish”, he told me over a steak frites in a little restaurant opposite the Gare du Nord, “will limit the influence of the English language dramatically.” As I returned to London, I reflected that “Globish” was more than just a new word for a dialect or an international communication tool. It was a description of a lingua franca, but with a difference. On further consideration, it was also a metaphor for the novelty of global English culture today.

As we enter the second decade of the new century, we are witnessing, in Globish, a contemporary phenomenon of extraordinary range and complexity, expressing a new world of global interconnections. When I was completing the first draft of my book in October 2008, I would break off from the day’s work to watch the television news. These were momentous hours, as the “credit crunch” swiftly became the “global financial crisis”. Hour after hour, there were reports of falling markets in Japan, Hong Kong, Singapore, Frankfurt, Paris, Milan, London and New York. Across a dozen different time zones, financial journalists in each of these cities filed reports for their national desks, but the language of the crisis was unvaryingly Globish. European finance ministers held urgent press conferences, addressing the world’s media in Globish. The doomed prime minister of Iceland, Geir Haarde, watching his country slide into bankruptcy, stoically maintained an even flow of Globish sentiment to calm the nerves of his people while appealing to an international television audience.

The crisis of Meltdown Monday (and its successive aftershocks) emphasised that Globish is a cultural and media phenomenon, one whose infrastructure is economic as much as cultural. Boom or bust, it is a story of “follow the money”. Globish remains based on trade, advertising and the global market. Traders in Singapore inevitably communicate in local languages at home; internationally they default to Globish. Now the global equation ran as follows: Microsoft plus Dow Jones = Globish. So viral is its ceaseless expression round the world that to separate cause and effect is virtually impossible. With its supranational momentum, above and beyond American and British influence, Globish sustains itself as both chicken and egg. To a world community in economic turmoil, at least it offers a means of sharing remedies and counter-measures.

To be realistic: Globish has become an extraordinary phenomenon, but it has not replaced Babel. Language evolves like the species, slowly. The world, flatter and smaller than ever before, is still distinctive as much for its approximately 5,000 different languages as for its emerging Globish. The big picture is infinitely complex, with native speakers clinging fiercely to their ancient languages. Jacques Chirac, former president of France, once said that nothing would be more damaging for humanity than for several thousand languages to be reduced to one. The Sunday Times commented that ‘to be born an English-speaker is to win one of the top prizes in life’s lottery. And this can be said without a hint of triumphalism, sexism, or racism, without annoying anybody much except the French.’

Is this anglophone future really secure? And, if so, what might Globish achieve in such an arena? When it comes to the future of any language, the cultural commentator is advised to proceed with caution. The highway of English is littered with the debris of burned-out predictions. I recall, with affection, the former chief editor of the Oxford English Dictionary, Robert Burchfield, a distinguished lexicographer who was always a sparkling refutation of Dr Johnson’s celebrated definition. Burchfield was never “a harmless drudge”. In fact, he made the language news. More accurately, he recognised that what was happening to the mother tongue in the late 20th century was unprecedented, and he used his position at Oxford to publicise the fact.

According to Burchfield, English was like Latin. Just as Latin broke up into mutually unintelligible languages like French, Spanish and Italian, so would global English similarly disintegrate into separate, mutually distinctive tongues. To the delight of leader-writers from Sydney to Saskatchewan, he pointed out that, historically speaking, languages have always had a tendency to break up, or to evolve. There were, he argued, some “powerful models of the severance of a language into two or more constituent parts, especially the emergence of the great Germanic languages of western Europe – English, German, Dutch, Norwegian, Swedish, and so on – from the mutually intelligible dialects of the 5th century”. The obvious objection to this model, which his critics were swift to deploy, was the contemporary vigour and interconnectedness of global English. In the age of mass media, the future of world English, said Burchfield’s opponents, would never follow the Latin model. To which he replied that such objections overlooked one vital fact: “English, as the second language of many speakers in countries throughout the world, is no more likely to survive the inevitable political changes of the future than did Latin, once the second language of the governing classes or regions within the Roman Empire.” At the moment when Burchfield made this pronouncement the global reach of English was inextricably bound up with American power, on the Roman model. These were the Reagan years. In this new imperium, the local varieties, for example Asian, Indian and Caribbean English, did seem to illustrate the argument for centrifugal change.

Once the cold war ended, the nature of American power became transformed. Today, with the emergence of Globish, the evolution of the “new Englishes” into separate languages seems increasingly unlikely. The broad river of Globish becomes the beneficiary of these sparkling tributaries. Moreover, the colossal financial underpinning of Globish (many trillions of dollars) must ensure its viability, at least for now.

Culture is about identity. For as long as the peoples of the world wish to express themselves in terms of ideas like “freedom”, “individuality” and “originality”, and for as long as there are generations of the world’s schoolchildren versed in Shakespeare, The Simpsons, the Declaration of Independence and the Bible, Globish will remain the means by which an educated minority of the planet communicates in the quest for a better world.


French Lessons in Londonistan

MUSLIMS HAVE been landing on the shores of Britain and France for decades. And, as these populations arrived and settled in the Republic, Paris pursued a policy it believed would eventually lead immigrants to full cultural integration into French society. Meanwhile, London, facing a similar influx of foreigners, attempted to create a full-fledged multicultural polity. The former emphasized that what was shared between the new arrivals and their native hosts was crucial, their differences secondary. The latter argued that the British needed to respect the uniqueness of their immigrant neighbors—whether national, religious or ethnic—and that such a stance was at the core of a harmonious political system. In color-blind France, built on a long tradition of a strong, centralized state and the successful assimilation of southern and eastern Europeans—who have been migrating to the country since the nineteenth century—religious identity was not to interfere in public life. Under the French tricolor, state and nation were fused into the cradle of the one and indivisible Republic. In race-aware Britain, with Anglicanism as its established church, there was always room for different nationalities—English, Welsh, Scottish, Irish—under the Union Jack.

The French and British experience as colonizers—and the ways in which those under imperial rule would come to see their occupiers—haunt the place of Muslim immigrants on both sides of the Channel. The Moslems of the British Raj lived as a minority among Hindus and struggled to maintain a separate identity through religious movements like the Deobandis (founded in India in 1867 and ancestors of the present-day Taliban). The political economy of the Raj was based on communalism, with Hindus, Sikhs and Muslims (and Sunni and Shia) fighting against each other. London fanned the embers of religious discord to keep military expenses low and the number of redcoats at a minimum. Divide and conquer.

At the end of the day, the British approach led to the bloody partition of the Raj between India and Pakistan; Karachi was homogeneously Muslim (though sectarian strife would soon rise among Sunni and Shia, and civil war would pit liberals against extremists), while New Delhi became multicultural with a caste flavor.

The French colonies were something altogether different. Unlike the Deobandis of India, North and West Africa possessed no similar religious movements that struggled to maintain a separate Islamic identity in the face of a hostile non-Muslim majority. The French policed, at a high cost, every village of Algeria and Senegal, just as the gendarmes did in Provence and Corsica. Thus, France’s immigrants were ignorant of the kind of self-imposed apartheid that could be transported and implemented on French soil. The North and West Africans who migrated to France after World War II came from Muslim-majority countries and felt no need to emphasize their religious peculiarities. Bachelors perceived themselves as temporary migrants. Families, most of them coming from Algeria, had no special claim to be religiously different. And after the end of the Algerian War in 1962, immigrants quietly and smoothly acquired the French citizenship to which they were entitled—to the fury of their leaders back home. For the musulmans who comprised a majority of the French colonial empire, the best possible future, according to the dominant French narrative, was to become French one day.

Such a grand récit was, of course, not implemented in colonial days—for the promise of citizenship was part and parcel of a workable imperial dominion. But in the end, as soon as the former colonized set foot on French soil in their new migrant-worker garb, they took Paris at its word, and France paid its colonial debt through a process of cultural and political integration that ran parallel to the process of turning earlier immigrants—Italians, Spaniards, Portuguese, Poles, et al.—into members of the Republic.

No such transformation was possible, however, for those British subjects moving from the peripheries of the empire to its island center. In Britain, one is born English, end of story. When Muslims started to migrate en masse from the former colonies, they became Commonwealth subjects with voting rights, and their “Islamness” turned out to be a kind of nationality of its own, albeit under the umbrella of what would later become British citizenship. Clearly, one could never hope to become English.

America—immigrant nation extraordinaire—is facing its first experience with homegrown Islamist extremism. How the United States conceives of and approaches the threat on its shores will clearly etch out the future of its relationship with its Muslim population in all of its complexity. Washington has much to learn from its European ancestors, who have struggled with, fallen victim to and at times overcome jihadists in their own lands. At its core, this is a question of culture—the approach to “other.”

THE IMPERIAL experience serves as a backdrop to the markedly contrasting ways that London and Paris have approached the immigration dilemma. France has created an intermingled culture, which is being forged on a daily basis between the native Gaul and the immigrant Arab and Berber. It revolves around two French obsessions: the bed and the dinner table. Your average young Muslim girl is interested in living and having children with a French gouer, a North-African colloquial term meaning “infidel”—i.e., non-Muslim. (Gouer is itself a corruption of the classical Arabic kuffar, used in immigrant slang to designate a French native. They are also known as fromage, or “cheese”—ironically the same synecdoche that was used in the neocon-coined “cheese-eating surrender monkeys.”) These women would loathe the very idea of an arranged marriage to a fellah (peasant) cousin from the far away bled (North Africa) with his unrefined manners and pedestrian French. By the same token, the most popular national dish of France—the country of gastronomy par excellence—regularly confirmed by opinion polls, is couscous, the semolina-based traditional dish of North Africa, now fully assimilated by French palates. And even beyond the confines of culture and marriage, what is Catholic France’s holy trinity of the most popular heroes, in survey after survey? The soccer player Zinedine Zidane (of Algerian-Berber descent), tennis player Yannick Noah (of mixed Cameroon-Alsatian descent) and filmmaker Dany Boon (of North-African-Muslim descent), who converted to Judaism at the time of his wedding to his Sephardic wife.

For the most part, this emphasis on integration—though not without its faults—has worked pretty well in France. Western Europe’s biggest “Muslim country” (the current numbers hover around 6 million people) has not seen a successful terrorist attack on its territory since 1996. All plots were uncovered; their perpetrators jailed or deported. An efficient intelligence service, well trained in Arabic and Muslim politics, played an important role, and special legal rules—such as the ability to keep terror suspects in custody—allowed for great ad hoc efficiency. This successful counterterrorism policy could never have worked without the cultural acquiescence of the vast majority of French citizens and residents of Muslim descent. They cooperate because they would simply never trade their decades-long effort and investment in becoming full-fledged French citizens—even in the face of latent xenophobia and social discrimination—for the vagaries of Islamist radicalism, which would make all of them suspect, and offer a political space for the extreme Right.

Much of this French success has to do with how the term “Muslim” is used in political parlance, where the preference is for expressions like “of Muslim descent” or “from Muslim culture.” This stems from the French notion of laïcité—loosely translated as “secularism”—which has been a backbone of French culture ever since its implementation under the Third Republic in the early twentieth century. To resist the overwhelming influence of a Vatican-aligned, reactionary Catholic church that interfered in both education and politics, the French government passed a law separating church and state in 1905, severing the historic link between Paris and Rome. The French conception of religion in the public sphere is thus quite different from the ascriptive understanding of religion found in Britain or America—a difference illustrated by the fact that the British national census asks respondents to define themselves in religious terms. By contrast, its French laïque counterpart merely defines religion in sociological and cultural terms, provided the individuals concerned accept that identity—one to which they are, by the by, entitled to be indifferent or even hostile.

Thus, in France, a community that would encompass all “Muslims” a priori is politically impossible—and without that, there can be no political brokers or “community leaders” who monopolize representation of “Muslims” (or at least pretend to do so). This was nowhere more evident than in the French government’s attempt to reconcile the differences between Islamic factions by creating the French Council of the Muslim Faith (CFCM) in 2003. The hope was to make peace between different Islamic groupings so as to facilitate the free exercise of the Muslim religion, organize pilgrimages to Mecca, ensure access to halal foodstuffs in the army, corporations and restaurants, and build mosques, so that practicing Muslims would have the same rights and advantages as believers in other faiths. At the same time, then–Interior Minister Nicolas Sarkozy, who professed an “open” understanding of laïcité that relied more on religious leaders as role models, wanted to use the CFCM as a go-between with practicing Muslims. But the differences between Islamic factions, be it because of their doctrinal tenets or the fierce competition between the foreign states that influence some of them (Algeria, Morocco, Saudi Arabia, Iran, etc.), never allowed the CFCM to emerge and find a role that would resemble that of other united religious mouthpieces, whether the Bishops’ Conference or the Representative Council of French Jewish Institutions (CRIF). Overall, the dominant narrative in France has always been to be French first and foremost. Religious identity continues to take a backseat to citizenship in the Republic.

IN THE UK, things were happening quite differently. From the beginning of mass migration in the 1950s, British Muslims organized as such and started to establish mosques on British soil. The segregated experience of the Muslim community under the Raj was duplicated in Britain, except this time the majority population was not Hindu, but the white English working class with its beer-on-tap-and-bacon culture. Meanwhile, intra-Muslim sectarian and denominational strife led different groups to create their own enclaves. The Deobandis wanted to have their own places of worship, as did the Barelvis, a Sufi-oriented sect considered heretical by its rivals, with a special reverence for the Prophet Muhammad. The same went for the Ahl-i-Hadith, a puritanical group close to Saudi Wahhabism. When British authorities tried to provide brick-and-mortar mosques to replace makeshift prayer rooms, they faced upheaval. The Deobandis, for instance, refused to pray behind a Barelvi imam who sang the praises of the Prophet in terms the Deobandis saw as close to idolatry, and things degenerated into fistfights as the disparate sects tried to control the pulpit in the so-called cathedral mosques. Though this might seem to echo Sarkozy’s futile attempts to mediate between the different Muslim groups in France, there is one key difference: for British Muslims, religious identity has always come before all others, whatever the infighting between different sects may be. In France, it was the wide array of available identities—Islamic, Algerian, working class, unionized, leftist, laïque and what have you—that made the concept of Muslim categorization secondary at best.

This secluded British-Muslim religious identity led to a far more introverted social life than was the case for North Africans in France. Though curry may have replaced fish and chips in British stomachs, the practice of seeking a consort in the extended family (biradari in Urdu)—which led fathers to travel yearly to Mirpur or Punjab so as to bring back to Manchester or Bradford suitable, non-Anglophone husbands for their British-born and -educated daughters—perpetuated a cultural isolation.

It is this insular Muslim practice that led to Salman Rushdie’s The Satanic Verses, a frontal attack on that immigrant seclusion. The book aimed to undermine it with a vitriolic criticism of the religious tenets of Islam. In particular, it mocked the Prophet and his many wives, describing his abode as a brothel. Though the names were changed, and the novel was a work of fiction, Rushdie wanted to rock the foundations of British-Muslim life, and force his coreligionists to reconsider their self-segregation and begin to integrate into British society. But his ambitious project backfired. Far from serving as a liberating cri de coeur, The Satanic Verses only reinforced the grasp of radical mullahs on their communities. The parochial old-timers who knew little English and who had no real interaction with British authorities (except when they traded their vote banks for community control) proved incapable of taking up Rushdie’s challenge. And they were gradually replaced by better-groomed, younger preachers, some of whom had links to radicalized international Islamist organizations.

The book burning of The Satanic Verses by the Bradford Council for Mosques in front of the city hall of that derelict Yorkshire Victorian city in 1989 was originally intended to express to the larger public the pain and suffering of Muslims who felt insulted by Rushdie’s blasphemy of the Prophet Muhammad. But it produced quite the opposite effect on TV viewers: book burners were seen as fanatics performing an auto-da-fé worthy of the Spanish Inquisition or Nazi Germany, and they got no sympathy from the press. British perceptions of Islam’s fanatical response were cemented a month later on February 14, when Ayatollah Khomeini sent a valentine to Britain in the form of a fatwa condemning Rushdie, his publishers and his translators to death. The leader of the Islamic Republic was attempting to regain his status as the champion of oppressed Muslim masses worldwide—a status that had been seriously challenged by the victory of U.S.-backed Sunni jihadists in Afghanistan, who had compelled the Red Army to pull out of the country, a withdrawal completed the following day, February 15. On the British political stage, the infamous fatwa meant that all of a sudden, the UK (and the rest of Europe and the world by the same token) had become part of a virtual Dar al-Islam (abode of Islam) where the rules of sharia—or Muslim God–inspired law—would apply, punishing blasphemy (or, for that matter, “insult to the Prophet”) with death.

The Rushdie affair was in a way quintessentially British. It happened in the context of a political scene divided along communalist lines, and it triggered reactions from community leaders and ordinary believers who felt threatened in their imposed and self-imposed seclusion, a situation that made them unable to distance themselves from the defensive attitudes of their peers.

On the other side of the Channel, where men and women of Muslim descent were not organized in this way, and where imams retained far less influence than their opposite numbers in Great Britain, the Rushdie affair did not mobilize any Islamic outbursts, save for a tiny group of radicals led by two recent converts to Islam, the grandchildren of Maurice Thorez, the deceased strongman of the French Communist Party, who took to the streets in front of journalists who far outnumbered them.

NEVERTHELESS, 1989 was also a watershed year for Islam in France, and it pinpointed the difficulties of the traditional republican and cultural-integration model. While the French were supposed to be celebrating the two hundredth anniversary of the fall of the Bastille and the triumph of Enlightenment, and the rest of the world was focused on the end of the Communist era, the French press was obsessed with an entirely different affair. Three teenage female pupils of Muslim descent had entered their classes at a middle school in a northern Paris banlieue wearing hijabs, the Salafist “Muslimwear” that was steadily being imposed on Muslims worldwide—through the expansion of Wahhabism and the Muslim Brother subculture—as the expression of Islam in the public sphere. This piece of cloth placed hijab-wearing French public-school students in a cultural cluster and separated them from their classmates—on the basis of a proclaimed religious identity.

It seemed the decades-long French philosophy of laïcité had come back to haunt the country. Its detractors saw this policy as insensitive to cultural differences. And this view was not confined to Muslims in France. Americans and Brits alike mocked the country as closed to the other, draped in the rags of its past glory, an obsolete singleton in a globalized world. And the goal of cultural integration was lambasted as “assimilation”—a term with particularly bad connotations, nowhere more so than in some Jewish circles, where it is tantamount to “cultural genocide.” The stakes were high, the debate highly political. Both the French branch of the Tablighi Jamaat—an Indian Islamist movement preaching cultural seclusion from the non-Muslim environment—and the local Muslim Brothers supported the girls, and in the case of the latter, used the affair to claim that they were the choice representatives of a “Muslim community” that was in the making on French soil. They changed their name from the Union of Islamic Organizations in France (UOIF) to the Union of Islamic Organizations of France, in an attempt to signal the new stage the group professed to have reached in asserting an Islamic identity in France. As one of its leaders explained to me, they no longer considered France a land of temporary residence for “Muslims”; many now called it home. Hence, it was no longer part of Dar al-Solh (or “abode of contract”), a foreign territory where Muslims could stay temporarily and where sharia was irrelevant. It had become part of Dar al-Islam, where sharia applied for Muslims who so wished.

Even if sharia was not state implemented, it was now the right of every French “Muslim” to observe its precepts. The three hijab-wearing pupils were the first manifestation of the UOIF’s new policy—its bid to be the community leader of the “Muslims of France” and the champion of an exemplary cause.

As much as the Rushdie affair was evidence of the contradictions of Britain’s relationship with its Muslim citizens, the hijab affair was typically French. It could never have taken place in the UK, where it had long been common practice for schools to welcome the hijab, segregate Muslim female pupils from sporting and swimming classes with their male counterparts, and so on and so forth.

The question then for Paris was whether “liberty” should come first, or whether education was to provide a space free from political, religious and similar statements—based on the other tenet of the Republic: “equality”? When the UOIF and their fellow travelers from the multicultural Left—along with the allies they made on that occasion among the Catholic clergy, Protestant pastors and some conservative rabbis—made their claim in the French public sphere, they used the political language of freedom. They cast themselves as the opposition to the authoritarianism of the Jacobin, laïque fundamentalist, assimilationist state. Some Islamist militants even took to the streets wearing a yellow star under their hijab or beard, implying they were persecuted like the Jews had been by the Nazis (that line was difficult to sustain and introduced confusion into the minds of some otherwise anti-Semitic and anti-Israeli radicals). Yet, when Muslim youths were instructed to wear the hijab by the Tablighis, Salafis or Muslim Brothers, it was not a matter of freedom, but of religious obligation. Notwithstanding such internal contradictions (of which the French press and public debate were largely unaware), the hijab affair poisoned the educational environment. There followed endless litigation and demonstrations that benefited radicals who portrayed themselves as victims of state repression. However, in spite of all this apparent distaste for laïcité, in the end there was very little support for the hijab cause, and certainly no mobilization of an improbable “Muslim community” that the UOIF and its ilk wanted to bring to life. The fact that during this time the Algerian civil war—which subsequently spilled over into France—was fully aflame, and still French Muslims largely ignored the call to jihad, is the starkest evidence of how little sway these radicals held over the so-called Muslim community.

SUCH WAS the backdrop for 9/11 on each side of the Channel. In France, the trauma of the Algerian civil war—with the casualties caused by Algerian-linked terrorism on French soil, the terrible death toll in Algeria itself, and the political and military defeat of Islamist insurgents in 1997—had three main consequences. First, there was little love lost on the part of French citizens or residents of Muslim descent for the kind of radicalism and terrorist attacks they had both experienced and suffered. In France, 9/11 was viewed as Act II of the same play. Second, the repression of the Islamist rebels in Algeria had destroyed networks and movements that might otherwise have spilled over into France. And third, French security and intelligence forces were trained in vivo to trace and eliminate Islamist terrorist networks. They had a sound, direct and on-the-spot knowledge of such groups and of their international connections, and state policy would not allow foreign radical Islamists to obtain political asylum in France.

In the UK, on the other hand, where Muslim communities were organized and represented by leaders and brokers who had sizable followings, the state had minimal direct interaction with such populations, mirroring the days of the Raj, when communalism was a mode of government. Whereas the French had banned foreign Islamist leaders from entering their country, British authorities granted asylum to a vast array of them—including the Egyptian Abu Hamza al-Masri (aka "Hook"), Abu Qatada al-Filistini from Palestine, Syria's Abu Musab al-Suri and many others—who acted as important contributors to the production and dissemination of Salafist-jihadist literature and of audio, video and Internet propaganda. All were veteran jihadist fighters from Afghanistan in the 1980s who had supported jihad in Egypt, Algeria, Bosnia and Chechnya in the 1990s. They created an underworld of sorts, labeled "Londonistan" by the Arab press.

Their presence in Britain was rationalized; politicians argued that the former jihadists would abstain from radicalizing local British-Muslim youth. The asylum seekers were Arabs, the British Muslims were from the subcontinent, so it looked as if there would be a major cultural gap between them in any case. Moreover, continuing a long-held British tradition, the cultural identification of Muslim communities with their new homeland was by no means a priority in the multicultural-tinged "cool Britannia" of the Blair years. More than ever, Muslim immigrants retained ties to their countries of origin—something that would prove disastrous as Pakistan experienced a steady Talibanization from the mid-2000s onward, and Britons of Pakistani descent visited the country every year to revive family networks, shop for spouses for their children and partake in the political strife of Pakistan. Worse, an activist minority spent time in the radical madrassas of the Deobandi sect and in the training camps of the Taliban and other jihadist guerillas.

BUT THIS is not to say that all was well in France. The hijab issue remained an irritant, and in the spring of 2003, then-President Jacques Chirac convened a committee of experts, the Stasi Commission (named for its president, French politician Bernard Stasi, and of which I was a member), to examine whether laïcité was threatened, and how to deal with the issue in a society much changed from the Third Republic that mandated separation of church and state almost a century before. The commission recommended that the wearing of ostentatious religious signs (whether it be hijab, cross or yarmulke) be forbidden in schools benefiting from state funds (public or private). The ban was limited to students who were minors. Once in college or university, they were deemed mature enough to dress as they liked.

The hijab prohibition was met with incomprehension. Paris passed the law in the spring of 2004 to take effect in September, a decision that produced an outcry in Islamist and multiculturalist circles worldwide. In France, the UOIF organized demonstrations that were widely covered and hyped on Al Jazeera—where a Muslim Brother was at that point editor in chief. In late August, the "Islamic Army in Iraq" took two French journalists hostage and threatened to kill them unless the "anti-hijab law" was rescinded. Much to the surprise of those who believed the Al Jazeera coverage, the vast majority of French citizens of Muslim descent supported the hijab ban. Many took to the streets and went on the air to express their total rejection of a terrorist group that had hijacked their voice. The UOIF was compelled to backpedal, its spokeswoman offering on TV to take the place of the hostages so that her hijab would not be tainted by innocent blood. That was the end of the hijab turmoil. To this day the hijab is no longer worn in schools, and the UOIF has dropped its efforts to overturn the law (in any case, its campaign has lost steam since 2004).

So France's policy of laïcité seemed to be vindicated. But a year after the 2004 hijab dispute, the banlieues outside Paris exploded in violence. It was as if everything the French had to say for the success of their cultural-integration model fell short. Upward social mobility was nowhere to be found for many of the migrant youth living in the banlieues—the only contemporary French word that has since made its way into international idiom and needs no translation!4 When young people of migrant descent (some, but not all, Muslims) started burning cars in these infamous neighborhoods in the autumn of 2005, it provided Fox News with vivid coverage ("Paris Is Burning") filled with "Muslim riots" and "Baghdad-on-the-Seine" nonsense. Meanwhile, pro-war-on-terror pundits ridiculed then-President Chirac and then–Prime Minister Dominique de Villepin for their opposition to the Iraq War, using a chickens-come-home-to-roost logic. Yet all the academic studies conducted in the aftermath of the riots amply demonstrated that they had little if anything to do with Islam per se; instead, they were due to a lack of social integration and economic opportunity. The rioters wanted to draw the public's attention to these issues—a far cry from any urge to establish a radical "Islamistan" in the banlieues. The riots, then, were an appeal for further social integration, something that the same controversy-ridden Stasi Commission understood well, and proposed to deal with through new urban planning to break up the ghettos and the institution of Yom Kippur and Eid al-Kabir as school holidays—these and other attempts to respect diversity were summarily ignored.5 Media interest soon moved on to the next story, and there was little public awareness of these findings.

IN BRITAIN, where Tony Blair had planned to invade Iraq since 2002 alongside George W. Bush, the prime minister felt confident that government support of domestic Islamist communalism would grant him immunity from British-Muslim criticism of the "invasion of a Muslim land by infidel armies," and would not lead to retaliation in the form of jihadi-inspired terrorist action. Alas, this was not to be. Pakistani radical networks lambasted British (and American) policy. So too did al-Qaeda and the Taliban. Scores of British-Muslim activists who had spent time in the Taliban's schools and camps rallied to the extremist cause. Deputies of radical Islamist groups in the UK stopped all collaboration with British authorities, and as Her Majesty's security services' grassroots knowledge of Islamist whereabouts had relied to a large extent on community leaders, there were suddenly a number of blind spots in the general surveillance of radical groups and individuals, particularly in provincial areas removed from London. The agencies discovered belatedly that the Arab luminaries of Londonistan had learned English and were bonding with the subcontinental English-speaking youth from Bradford to East London. This dangerous environment provided the background for the July 7, 2005, attacks. The suicide bombings in London were perpetrated by English-educated British Muslims from Yorkshire. Their prerecorded testament, broadcast by al-Qaeda and introduced by no less a figure than Ayman al-Zawahiri, starred the chief of the group, Mohammed Siddique Khan, declaring in heavily accented working-class Yorkshire English that he was a fighter in the war against the infidels who had invaded Iraq and Palestine. Before the end of July 2005, another suicide attack was narrowly avoided. In the summer of 2006, a major plot to bomb transatlantic flights between London and New York with liquid explosives was foiled at the eleventh hour. And in 2007, another plot half-succeeded when a car laden with explosives (which failed to detonate) barreled into the entryway of Glasgow Airport.

Since 2007, and Tony Blair's departure, there has been a major review of British policy. The government of Gordon Brown has painstakingly tried to fashion a concept of "Britishness" as part of a "deradicalization" policy aimed at better integrating Muslim youths into the wider British community. The shift away from multiculturalism, coupled with the intelligence agencies' report Preventing Extremism Together, has brought policies on both sides of the Channel closer than they have ever been. The issue of social-cum-cultural integration remains a crucible for populations of Muslim descent as they seek to identify politically with their Western country of residence, adoption and, increasingly, birth.

AS THE United States now faces homegrown terrorism, in the form of Nidal Hasan's Fort Hood massacre and the "underwear bomber" Umar Farouk Abdulmutallab's near detonation of a plane bound for Detroit, it is certainly worthwhile to analyze Europe's relationship with its Muslim residents in a less patronizing way than was the case both in the warmongering parlance of the neocons and in President Obama's naive Cairo speech last year. While the present administration has just granted a long-denied entry visa to the Islamist intellectual Tariq Ramadan, and so seems to be following the Tony Blair model (which counted on Ramadan to pacify the Muslim ranks in Britain after 7/7, that is, until the prime minister and the preacher had a falling out), it might indeed be wise to evaluate the European experience in all its dimensions. The "special relationship" may not be all that is on offer. Old Europe has, after all, been the neighbor of the Muslim world, has colonized some of it and has now integrated part of that world into its very identity. While some predict that, in a few decades, Europe will be but the northern part of the Maghreb, one may equally surmise that North Africa and the Middle East will be far more Europeanized.

Gilles Kepel is a professor and chair of Middle East and Mediterranean Studies at the Sciences-Po, Paris, and the Philippe Roman Professor in History and International Relations at the London School of Economics.

1 The French legal term Association de malfaiteurs en vue d’une entreprise terroriste (criminal association with a terrorist aim) allows the judiciary to keep terrorism suspects in custody for seventy-two hours before they are charged or freed (as opposed to twenty-four hours in other cases), which increases the chances that suspects will be destabilized enough to give away their networks, and gives the police enough time to take action. Such emergency measures are taken under the supervision of a judge specially authorized to handle terrorism cases. Judge Jean-Louis Bruguière, one of the most successful French antiterrorism judges of the 1990s and early 2000s, told me that this legal measure was the key to French success, and that it also made any Guantánamo-type decisions unnecessary.

2 During his visit to Saudi Arabia in January 2008, President Sarkozy addressed the Saudi Majlis al-Shura (nonelected Parliament), praising religious figures—including imams—for their role in society, one he considered unmatched by secular educators and the like. Though it is true that the French state-school system is undergoing a crisis with regard to its former central role in the cultural and social integration of youth from all walks of life and inherited cultures, his advocacy of replacing it with religious figures was met with an uproar in many French circles.

3 Wahhabism is a puritanical understanding of Islam that follows the teachings of Muhammad Ibn Abd al Wahhab, a late-eighteenth-century preacher. The Wahhabis aligned their sect with the Saud family, allowing for the creation of the Saudi Arabian state. It was marginal in the wider Muslim world until oil wealth fuelled its export as a means to fight socialism in the postwar Arab and Muslim countries. “Wahhabis” prefer to call themselves “Salafis” (“following the ancestors,” i.e., strictly observant of pristine Islam). They abhor any kind of worship of a human being. But not all Salafis are Wahhabis. The society of the Muslim Brothers was founded in Egypt in 1928, with the political aim of establishing a Muslim state abiding by sharia law. In spite of their diverse interpretations of Islam, Wahhabis, Salafis and Muslim Brothers share the same subculture, one that makes the tenets of Islam permeate every dimension of daily social and cultural life.

4 In 1987, when I published a study on Islam in France entitled Les banlieues de l’islam (Paris: Editions du Seuil), I had to translate the title as “The outskirts of Islam” to make it understandable to the English-speaking public, and to explain that such outskirts were not suburbia but rather “inner cities” (UK) or “ghettos” (United States). All that confusion stopped when Anglophone media pundits started using banlieues as a catchword for lambasting French policies of integration, in particular during and after the so-called “Muslim riots” of the fall of 2005. See chapters 4 and 5 of my Beyond Terror and Martyrdom (Cambridge, MA: Harvard University Press, 2008) for more information.

5 Though the French government foolishly rejected the commission’s proposals at the time, it subsequently espoused a number of the Stasi Commission’s additional policy suggestions. By then it was too late to affect the situation.

May 29, 2010 Posted by | Muslim world, War | Leave a comment

Arms across the ocean

The Cold War was both an era of armed peace and global violence. The United States and the Soviet Bloc may have avoided the nuclear annihilation that many feared, but the rest of the world saw little peace between 1946 and 1989. The chilling concept of Mutual Assured Destruction added a sinister novelty to what was, in essence, a simple continuation of the geopolitics of imperial rivalry that have been a hallmark of the modern age. Europe was divided, but not in ruins; the actual wars of the Cold War were fought in Asia, Africa, and elsewhere, as the United States and the USSR shot at each other by proxy.

In retrospect, the long duration of the Cold War was perhaps not a surprise; but its quick end, when the Berlin Wall and the Soviet Union fell with a sudden whoosh, caught many off guard. Grey, dreary, and oppressive, the Soviets nonetheless showed indomitable staying power over the years, crushing dissent when they had to – in Budapest in 1956 or Prague in 1968 – and alternately threatening and courting the West. Though the “war” ended two decades ago, it continues to define our sense of the latter half of the 20th century, and its ideas and stances have not yet ceased to influence the outlook of its former participants. The divisions within the West over the battle against communism remain stark, and the big questions – What accounted for the Soviet downfall? How did the West prevail? – are still a matter for fierce argument among the ageing Cold warriors still with us.

The intensity of these long-distant debates is more than apparent in the maddeningly idiosyncratic new book by the British historian Norman Stone, The Atlantic and its Enemies. As a disinterested general overview of the Cold War, Stone’s book is of dubious value. His account is, as the subtitle explains, “a personal history”. A former Oxford professor of modern history and now director of the Russian-Turkish Center at the University of Bilkent, Turkey, Stone is a legendarily colourful character. (At Oxford, he conducted tutorials over billiards and glasses of Scotch.) Vehemently opinionated and mordantly witty, he barrels across the Atlantic at hurricane force.

His history lessons come with plenty of ad-hominem thunderbolts: John F Kennedy was “a hairdresser’s Harvard man”; “[Jimmy] Carter’s regime symbolised the era. It was desperately well-meaning. It jogged; it held hands everywhere it went with its scrawny wife”; “Nancy Reagan was a face lift too far,” and so on.  

However, such quips distract from the seriousness of Stone’s often trenchant analysis. Stone, who is fluent in Hungarian, Polish, Czech and German among other tongues, is well-equipped to report from the trenches of this global struggle. He is a former speechwriter for Margaret Thatcher (who emerges, unsurprisingly, as the great heroine of this tale), but he is no end-of-history triumphalist. He says Marxism had much to recommend it for analysing the peasant economies of the postcolonial third world; he just vigorously disagrees with its prescriptions. Though he salutes “the extraordinary vigour of the capitalist world”, one of his themes is how the Western alliance tended to fumble economic issues. For a time, it was the Soviet Union and Communism that seemed to have the answers.

In the immediate years after the Second World War, the British Empire, exhausted and financially prostrate, surrendered its place as a global power, creating an imperial vacuum into which the United States and the Soviet Union quickly moved – leading Stone to dub the whole mess “the war of the British succession”. The Western Allies had little claim to check Stalin, whose armies had suffered immensely in the brutal struggle on the Eastern Front. He would have a sphere of influence in Europe, and his Communist allies in Poland, Hungary, and Czechoslovakia took power, however ruthlessly. In Asia, Mao triumphed in China and Ho Chi Minh defeated the French at Dien Bien Phu. The US contested these advances, in Europe with defence guarantees and money in the form of the Marshall Plan, and in East Asia with armed interventions on the Korean peninsula and in Vietnam. Britain’s last stab at imperial assertion, meanwhile, was the utter disaster at Suez in 1956.

But in Stone’s telling, it is economics, not armaments and military manoeuvres, that takes pride of place. His vignettes on the Korean War and the Cuban missile crisis have the feel of a school primer. But on the economic issues confronting the West, Stone mounts a bold, if not altogether persuasive, argument. For Stone, the spectre haunting the West was not communism, but Keynesianism. America and Europe boomed through the 1950s and 1960s. In Western Europe, it seemed, social democracy could deliver the goods, literally: France had refrigerators and West Germany, washing machines. “Nato developed its own financial military complex,” he writes, “and the central banks were part of it.”

Still, financial arrangements in the Atlantic world were ever precarious. The dollar – and its crucial adjunct, cheap oil – underpinned the whole system, but by the end of the 1960s, this hard-won stability was starting to break apart. The United States, pouring money into the war in Vietnam and into LBJ’s Great Society programmes, unleashed waves of inflationary pressures that, combined with oil shocks of the 1970s, would bring about a sea change for the Western economies. Inflation was the genie unleashed from the bottle, and getting it back in would vex governments across the Atlantic world.

Reviewing the decade, Stone finds little good to say about this turn in the West. It had become “extraordinarily self-indulgent”. He approves of the coup in Chile that brought Augusto Pinochet to power (with not a little bloodshed) and the economic reforms the General put into place after seizing the presidency.

He commends Helmut Schmidt’s gestures to the USSR and East Germany – the so-called “Ostpolitik” – and generally rhapsodises about the performance of the German economy, but for Britain his scorn is unrelenting. “Since 1815 Germans had been asking why they were not English. After 1950, the question should have been the other way about: why was it preferable to be German?” America’s central partner in the Atlantic alliance was in thrall to the unions – Stone hates them – and spent money on too generous a welfare state: “The overall Atlantic crisis was displayed at its worst in England.” (He refers to nationalised industries as “a sort of non-violent protection racket.”) Stone spends a great deal of time in trade ministries looking at currency flows and trade imbalances, but here he misses an opportunity to look at the broader intellectual contest over economics and society.

Stone does not have much to say about the social history of these decades, or the ideas that animated it. He is scathing about the student movement of the 1960s – the expansion of universities, he suggests, was a mistake – but he pays very little attention to the debates that played out in magazines, journals and op-ed pages. If the Cold War was, in fact, a kind of intramural argument within the West about how best to organise society and politics – an argument whose roots date back to a shared Enlightenment – Stone acknowledges it only incidentally, by focusing so relentlessly on the disputes within the capitalist bloc. Thatcher’s heroism, in his account, has less to do with her opposition to communism than with her defeat of the moderate Tory “wets” and the unions inside Britain.

His disjointed narrative attains a certain momentum only with the arrival of the Iron Lady, “who knew when to be Circe and when to be the nanny from hell”. Stone is a partisan, and he cheerleads for the supply-side economics favoured by the prime minister and her American partner Ronald Reagan; he approvingly cites Reagan’s quip about the US government, “If it moves, tax it. If it keeps moving, regulate it. And if it stops moving, subsidise it.” This, in a nutshell, is what strikes Stone as wrong with the welfare state economics of the West.

Stone’s account ends rather abruptly, with a whimper, not a bang. He hazards no thoughts about the legacies of the Cold War, its metaphysics and the habits of mind it spawned, and their implications for Europe and the rest of the world. For Stone, it is enough to say that the 1980s “had been the most interesting, by far, of the post-war decades”. A united Europe, the crisis in Greece notwithstanding, is a successful by-product of the Cold War’s end; yet the legacy of the Cold War is still, in some senses, only coming into shape.

May 29, 2010 Posted by | Cold War, Off Topic | Leave a comment

Did a Child Pick Your Strawberries?

Meet Luz, a 9-year-old American who worked 13-hour days in the fields, skipped school, and was poisoned by pesticides. Zama Coursen-Neff on the shameful fate of hundreds of thousands of kids.


“Luz” was 9 when she began working in the fields. Her employer paid her not by the hour, but according to how much fruit she picked. On many days, she would not even stop and rest. “We keep on going because if we were to sit down and take a break we’d make even less,” she told me during a Human Rights Watch investigation. Even so, Luz earned well below minimum wage.

By the time she was a teenager, Luz was often working 13-hour days, when she wasn’t in school. Her employer gave her no choice about hours. “No one can leave. They block the exits and say everyone has to help out.” She fell behind in school and said most of her friends had dropped out. She was often sick from exposure to pesticides. “You could see it all around, and you were breathing it. … My stomach was always heaving. Every single day.”

The conditions Luz describes are typical of child laborers I have interviewed in India, El Salvador, Indonesia, and other poor countries around the world. Luz, however, now 18, works in the United States.

For hundreds of thousands of child farm workers, the U.S. might as well be a developing country. These children aren’t working on their families’ farms. They work for hire, hoeing cotton and sorghum in scorching heat, cutting collard greens and kale with sharp knives, and stooping for hours picking zucchini and cucumbers. Luz began picking strawberries in Florida, then started migrating in the summers to Michigan to pick blueberries. Like Luz’s friends, at least 45 percent of child farm workers never finish high school. Without an education, they face a lifetime of back-breaking work and poverty-level wages. And while most of these children, shockingly, are in the United States legally, those who are undocumented are especially vulnerable to exploitation from employers who know they won’t complain.

Over the last year, I have interviewed dozens of children who did farm work in 14 states across the country. Most began working full-time at age 11 or 12 on days they weren’t in school—and some on days when they should have been. They said that 10-hour days were typical, and during peak harvest season, they sometimes worked 14 hours or more. Some told me that at the end of the day, they were so exhausted they could barely change out of their clothes before falling asleep.

Shockingly, these conditions are perfectly legal under U.S. law, which allows children to work on farms at far younger ages, for far longer hours, and under more hazardous conditions than in other jobs. American teenagers have to be at least 14 to get even a cashier’s job at McDonald’s, where on a school day they are only allowed to work for three hours. But to pick the food that is served in fast food restaurants, children can work at age 12 for unlimited hours, day or night—as long as they don’t work during school hours. Even that rule often goes unenforced.

These disparities in the law are even more disturbing considering that agriculture is one of the most dangerous occupations in the country. Working with sharp tools and heavy machinery, exposed to dangerous pesticides, climbing up tall ladders, lugging heavy buckets and sacks, children get hurt and sometimes they die. According to the Bureau of Labor Statistics, the risk of fatal injuries for farm workers ages 15 to 17 is 4.4 times that of other young workers.

Despite the risks and grueling work, many child farm workers feel compelled to help their parents pay the bills. According to the most recent data, the average adult crop worker makes less than $13,000 a year, leaving many farm worker families desperately poor. Better enforcement of minimum-wage laws would reduce the pressure many farm workers feel to take their children into the fields. But the fact that exploitative child labor in agriculture is legal also presents it as a legitimate choice for parents, children, and employers. Some parents later regret their decision when they see its toll on their children’s health and education. In Texas, one mother said to me, “I tell my daughter, ‘I’m so sorry I stole your childhood from you.’”

The United States’ failure to protect child farm workers not only puts children at risk, but is deeply hypocritical. The U.S. spends over $25 million every year—more than all other countries combined—to eliminate child labor in other countries, yet it tolerates exploitative child labor in its own backyard.

For over a decade, members of Congress have repeatedly introduced legislation to update U.S. laws and eliminate the dangerous double standard that puts child farm workers’ health, safety, and education in jeopardy. Such a bill is pending now. But child farm workers like Luz have no powerful lobbyists, and their concerns are not considered politically pressing.

As the new growing season starts, children like Luz are already leaving school to pick lettuce, spinach, asparagus, and other crops. Without action in Washington, their futures will not be much better than those of children toiling in the developing world.

May 29, 2010 Posted by | Home & Family | 1 Comment

My Drive to Save African Sex Workers

A sex worker waits in a corridor in a center run by Doctors Without Borders in Kinshasa, Democratic Republic of Congo. (Lionel Healing, AFP / Getty Images)

On a trip to Congo, peace activist and Daily Beast Africa columnist Leymah Gbowee witnesses the violent arrest of a refugee girl forced into a bleak life as a sex worker. Inside her rage and helplessness.

In the past two weeks I have cried angry tears on more than one occasion. Each time, it has been because of the tragic fate facing a young African girl.

On April 24, I attended the funeral of a young Liberian refugee who had died of AIDS-related causes. I met this very promising young woman in September 2002 at the West African Peacebuilding Institute, when I was leading a women’s movement for peace in my home country, Liberia. A friendship blossomed and she joined our Mass Action Campaign. In 2003, she was the youngest member of the sit-ins for peace held in Accra, Ghana. At the time she told me she wanted to be a journalist.


A month ago, I got a message from a friend that this young girl had suffered a stroke, was diagnosed HIV-positive, and had a brain infection. She died on Easter Sunday.

At her funeral, I cried for a life that had been wasted and wondered how many more young Africans with fine prospects are losing their lives because of limited choices—a direct result of their economic status.

I was once a young refugee myself, and I endured the constant harassment of men who imagined that every refugee girl was ready to have sex for some form of cash. At the funeral of this young woman, there were many other refugee girls who came skimpily dressed. I kept asking myself, ‘How can we help them, how can we reach them, how can we as African women ensure that they don’t all die because some man neglected to protect them?’

These questions continued to haunt me as I traveled to Congo to do some work with Congolese women, who live in a nation plagued by war and mass rape. On the fourth day of the trip, I was sitting in my hotel room chatting online when I heard a scream: “Somebody help me!”

Nicolette Bopunza, age 14, stands outside her house in Mbandaka, Congo. She works as a sex worker, charging about 50 cents for sex and $2 for a whole night. (Per-Anders Pettersson, File Photo / Getty Images)

The activist, mother, and feminist in me ran outside in only a piece of wrap and a shirt, to see a young Congolese girl on the balcony of a hotel room crying. She was shouting in not-so-perfect English, “This is all you wanted, sex me and throw me out! You are a bad person! I don’t ever want to see you again!” I continued to stand and watch. She said to me, “Mama help me, after being bad to me, now he is calling the police.” In less than five minutes, about four police officers and several men in plain clothes came running to the scene. The girl was pulled off the balcony and back into the room, as she screamed for help and asked, “What is my crime?”

The men then pulled her out of the hotel room clothed only in panties, kicking and punching her for resisting. I ran to follow and she reached out to me, pleading for help. I asked the guy who was apparently the commander of the whole operation to release her to me, but he said no, that she was a constant problem for the hotel. More men continued to pound and kick this girl.

As three police officers and two plainclothes men dragged her away, she tilted her head in my direction and said, “Mama, help me!” They had handcuffed her, and she was obviously headed for a police cell. What became of her in the cell that night, only God can tell.

All I could do was join her in screaming. My scream was for my own helpless state at the moment, for her pain, and for the many young African girls living in situations of conflict, who are constantly being exploited. My scream and angry tears were for the psychopaths we call leaders in Africa, who give us nothing but pain and misery. My scream was for the misery of so many African women who live at the mercy of men and boys.

My scream blended with the screams of my Congolese sisters. The rage subsided after 15 minutes and all that was left was a feeling of helplessness for every African refugee girl. I believe even as they walk the streets offering sex, they are looking at us African women, the seemingly strong ones, and screaming silently like the Congolese girl I saw: “Mama, help me!”

Take Action: To help build the African sexual rights movement, support Akina Mama wa Afrika, whose goal is to empower sex workers to stay healthy and improve their lives. Learn more through AMwA’s partners, HIVOS and the Open Society Institute.

May 29, 2010 Posted by | Sexuality | Leave a comment

The Myth of Africa’s Economic Miracle

Africa is doing better than ever economically, but many regular people remain desperately poor. Kofi Annan on how Africans are being excluded from their continent’s economic miracle—and how to end the crisis.


People wait outside the office of the United Nations High Commissioner for Refugees to seek permission to move to a different camp, August 21, 2009, in Dadaab, Kenya.

This is an important year for Africa. The World Cup is putting the continent at the center of global attention. With Africa’s strengths and frailties under greater international scrutiny than ever before, what will the story be?

After major difficulties in the wake of the global financial crisis, African economies are recovering and proving their resilience, in contrast to gloominess elsewhere in the world. The African Development Bank and IMF foresee GDP growth rates of around 5 percent by the end of the year.


Trade is growing too, both within Africa and with partners, including the global South. Africa-China trade has multiplied more than tenfold in the last decade. Barely a week goes by without reports of the discovery of more oil, gas, precious minerals, and other resources on the continent.

Climate change is drawing attention to the vast potential of the continent’s renewable energy supplies, including hydro, thermal, wind, and solar power. Business activity is increasing.

In short, Africa’s stock is rising, as highlighted by the Africa Progress Report 2010 released today, Africa Day. But the report also asks some difficult questions.

Given our continent’s wealth, why are so many people still trapped in poverty?

Why is progress on the Millennium Development Goals so slow and uneven? Why are so many women marginalized and disenfranchised? Why is inequality increasing? And why so much violence and insecurity?

The good news is that access to basic services such as energy, clean water, healthcare, and education has improved in many parts of the continent. But these basics are still denied to hundreds of millions of women, men, and children. Why?

In trying to provide the answers to these difficult questions, one must be wary of generalizations. Africa is not homogenous; it is raucously diverse. But its nations are linked by common challenges hampering human development and equitable growth: weak governance and insufficient investment in public goods and services, including infrastructure, affordable energy, health, education, and agricultural productivity.

Over the last decade, we have learned a great deal about what is needed. Ingredients include determined political leadership to set and drive plans for equitable growth and poverty reduction. Technical, management, and institutional capacity are vital if policies are to be implemented. Good governance, the rule of law, and systems of accountability are essential to ensure that resources are subject to public scrutiny and used effectively and efficiently.

So what is holding back progress? Lack of knowledge and a shortage of plans are not the problem. Good, even visionary agendas have been formulated by African leaders and policy makers in every field, from regional integration to women’s empowerment. Moreover, we have myriad examples of programs and projects that are making a tangible positive difference in people’s lives, across every field.

Given the continent’s vast natural and human resources and the ongoing, often illicit, outflow of wealth, lack of funds is not the barrier either, even though more are needed.

It is political will that is the issue, both internationally and in Africa itself.

Internationally, there are concerns that the consensus around development has been eroded by the financial crisis. Many rich countries are keeping their promises on development assistance, but others are falling badly behind. These shortfalls do not result from any decrease in human solidarity and sympathy. Nor, given the relatively modest sums involved, can they be blamed on budgetary constraints alone.

They stem more from the failure to communicate the importance of putting the needs of the least-developed countries at the heart of global policies.

Efforts must be stepped up to explain why fairer trade policies and stemming corruption are not just ethical or altruistic, but practical and in the self-interest of richer countries.

Africa’s leaders have prime responsibility for driving equitable growth and for making the investment needed to achieve the Millennium Development Goals. They can help by making the case more strongly for development policies and necessary resources.

The continent now has leaders who stand out as champions of development. We need more of them. Sadly, though, their efforts are overshadowed in the international media by the authoritarian and self-enriching behaviour of other leaders. Africa’s progress should be measured not just in GDP but by the benefits that economic growth brings to all of Africa’s people.

Africa is a new economic frontier. The approach and actions of the private sector, and of Africa’s traditional and new international partners, are crucial in helping overcome the continent’s challenges. There is a real opportunity to strengthen new partnerships with countries such as China and those in the Middle East, South Asia, and Latin America to help achieve development goals.

African leaders need to have more confidence in their bargaining position, and greater legal and negotiating capacities to ensure that they secure deals that bring benefits to their people. Their partners, including those in the private sector and from the global South, should be held to high standards of transparency and integrity.

Political leadership, practical capacities and strong accountability will be the winning elements for Africa. The international community can play a decisive role in ensuring that the continent competes on a level playing field. But Africa’s destiny is, first and foremost, in its own hands.

by Kofi Annan

May 29, 2010 Posted by | Africa | Leave a comment

5 Things You Need to Know About Grass-Fed Beef

The organic movement has taken the world by storm. But what’s truly healthy and what’s just hype? The manager of a grass-fed beef farm breaks it down.


With skeptical, beef-centric films like Food, Inc. and Fast Food Nation encouraging American consumers to question the source of their meat, how do you know what to ask? Mark Maynard, farm manager of the Greyledge grass-fed beef farm, answers five basic questions that will help clear up a lot.

Why is grass-fed beef better?
Beef cattle that are pasture-raised and grass-fed are a healthier and safer source of meat because they eat a diet naturally suited to their anatomy. As a result, the herd is content and healthy, and the beef is high in vitamin E, omega-3s, and conjugated linoleic acid, which some studies have shown has cancer-fighting properties and could lower cholesterol. What’s more, pasture-raised beef is usually free of hormones and antibiotics.

Is it important for grass-fed beef to be certified organic?
No. Often, small, local farmers do not have the capacity to undergo organic certification. My recommendation is to find a farmer you can trust, try the beef, and make it your go-to for healthy, farm-fresh meats.

Where is the beef coming from?
Did your beef come from 3,000 miles away or even another country like Argentina? There is no way to ensure that the beef you and your family consume is healthy and humanely raised without being able to verify its place of origin. Again, it’s best to find a farmer you know and trust. The best farms and purveyors employ a source verification process, in which the record keeping of livestock includes health records, feed records, and genetic history.

Why is beef sometimes flash frozen and does that affect the taste?
Flash freezing is a process in which products are rapidly frozen and then vacuum sealed in airtight packages. This process preserves the flavors, juices, vitamins, and minerals and allows the beef to keep perfectly for long periods. The beef remains in this condition until it is thawed, retaining the freshness and quality it had when it was originally frozen.

What is the difference between dry-aged and wet-aged beef?
These are the two techniques used for aging beef, and they yield very different flavors and textures. Dry-aged beef tends to be richer, more aromatic, and more pungent in flavor, and is generally regarded as the superior-tasting beef. Odds are that you have tasted wet-aged beef, as it dominates the commercial market. It also tends to be less expensive, but it is no match for the taste of dry-aged beef.

May 29, 2010 Posted by | Food & Drink | 1 Comment

Five Things You Didn’t Know About Truffles

Difficult to find and divine to eat, truffles are among nature’s most delicious delicacies, and there is much to be gleaned about them.


1. Real truffle oil really does exist
There are many products out there purporting to be made from truffles, and unless you’re a label detective, the labels can be very misleading. Here’s a hint: if the label says USDA 100% Certified Organic, you can bet there are real, organic truffle pieces inside.

2. Dogs are the real truffle hounds
While it’s true that pigs have long been used to scare up truffles, they are also greedy animals that often eat the truffles as soon as they find them. After WWII, truffle hunters started using dogs, which were easier to pack into the car and preferred a meaty snack to the tasty truffle. Interestingly, because the characteristics of the ideal truffle-hunting dog aren’t specific to any one breed, mutts are usually preferred.

3. You don’t have to be a great cook to serve a truffle dish
While I enjoy cooking an elaborate truffle dinner as much as anyone, I get the biggest pleasure out of adding truffle products to everyday dishes like grilled cheese, mashed potatoes, white pizza, and salads.

4. White truffles can’t be cultivated
Both white and black truffles are found in the wild, in parts of the world where they form symbiotic relationships with oak and hazelnut trees. But while recent efforts to cultivate black truffles look promising, no one has successfully grown a white truffle.

5. Not all truffles are created equal
Beware of dishes using fresh truffles in the warmer months—summer truffles (Tuber aestivum) are a poor and flavorless substitute for the real thing, black winter truffles (Tuber melanosporum), also known as black Périgord truffles. It’s easy to spot the difference: summer truffles are light gray instead of charcoal-colored inside.

May 29, 2010 Posted by | Off Topic | 1 Comment