Opinion

Finishing the Post-Crisis Job

LONDON – August 9, 2017, is the tenth anniversary of the decision by the French bank BNP Paribas to freeze some $2.2 billion worth of money-market funds. Those of us who were active in financial markets at the time remember that event as the beginning of the worst global financial crisis since the Great Depression.


Many economists and financial observers argue that we are still living with the consequences of that crisis, and with the forces that precipitated it. This is partly true. Many developed economies still have in place unconventional monetary policies such as quantitative easing, and both productivity and real (inflation-adjusted) wage growth appear to be mostly stagnant.

But it is important to put these developments in perspective. Many people, including the Queen of England in November 2008, still ask: “Why did no one see it coming?” In fact, many financial observers did warn that housing prices in the United States were rising unsustainably, especially given the lack of domestic personal savings among US consumers.

As Chief Economist of Goldman Sachs at the time, I had written three different papers over a number of years showing that the US current-account deficit was unsustainable. Unfortunately, these findings largely fell on deaf ears, and the firm’s foreign-exchange salespeople probably got bored passing on yet more of the same pieces to their clients.

At one point in 2007, the US current-account deficit was reported to be 6-7% of GDP (it has since been revised down to around 5% for the full year). This high figure reflected the fact that the US trade balance had been steadily deteriorating since the 1990s. In the absence of any obvious negative consequences, however, complacency had set in, and the US continued to spend more than it saved.

Meanwhile, China had spent the 1990s exporting low-value-added products to the rest of the world, not least to US consumers. In 2007, its current-account surplus was around 10% of GDP – the mirror image of the US. Whereas the latter was saving too little, China was saving too much.

For some observers, this huge international imbalance was the source of the crisis. In the years leading up to the crash, it was widely argued that the global financial system was simply doing its job, finding increasingly clever ways to recycle the surpluses. Of course, we now know that it performed that job rather poorly.

Much has changed in the intervening decade. In 2017, China will run a current-account surplus of 1.5-2% of GDP, and the US will most likely run a deficit of around 2% – but possibly as high as 3% – of GDP. This is a vast improvement for the world’s two largest economies.

Still, other countries have built up ever-larger current-account imbalances over the past decade. Chief among them is Germany, whose external surplus now exceeds 8% of GDP. Germany’s current account suggests that there are deep imbalances that could lead to a new crisis if policymaking is not well coordinated. The last thing Europe needs is another sudden reversal of capital flows, like the one seen at the height of the Greek debt crisis.

The United Kingdom, for its part, will have a current-account deficit above 3% of GDP this year, which is nearly three times what it was ten years ago. But that is not to say that the UK’s trade balance has significantly deteriorated. Rather, it reflects the fact that the UK is a major financial center, and that the balance of investment income has shifted against the UK more than elsewhere.

All told, the global economy today is much healthier than it was ten years ago. Many are disappointed that real global GDP growth since the crisis has undershot performance in the previous decade. But since 2009 – the worst year of the recession – the global economy has grown at an average rate of 3.3%, just as it did in the 1980s and 1990s.

Of course, this is largely owing to China, the only BRIC country (Brazil, Russia, India, and China) that has met my growth expectations for the decade (although India is not too far behind). The size of China’s economy has more than trebled in nominal terms since 2007, with GDP rising from $3.5 trillion to around $12 trillion. As a result, the aggregate size of the BRIC economies is now around $18 trillion, which is larger than the European Union and almost as big as the US.

There will inevitably be another financial bubble, so it is worth asking where it might occur. In my view, it is unlikely to emerge directly from the banking sector, which is now heavily regulated. The bigger concern is that many leading companies across different industries have continued to focus excessively on quarterly profits, because that determines how executives are remunerated.

Policymakers should take a hard look at the role of share buybacks in this process. To her credit, in the Conservative Party’s 2017 election manifesto, British Prime Minister Theresa May announced that her government would do this. One hopes that May’s government follows through. Doing so could strike a symbolic blow against the underlying malaise of post-crisis economic life. The West needs real investments and higher productivity and wage growth – not more economically unjustifiable profits.

Jim O’Neill, a former chairman of Goldman Sachs Asset Management and a former UK Treasury Minister, is Honorary Professor of Economics at Manchester University and former Chairman of the British government’s Review on Antimicrobial Resistance.

By Jim O’Neill

What Makes a Human?

ST ANDREWS, SCOTLAND – Last month, moviegoers flocked to theaters to see War for the Planet of the Apes, in which an army of retrovirus-modified primates wages war against humanity. Chimpanzees on horseback, machine-gun-wielding gorillas, and scholarly orangutans undoubtedly make for good theater. But could anything like this ever happen in real life?


In Planet of the Apes, Pierre Boulle’s 1963 novel upon which the films are based, space traveler Ulysse Mérou is stranded on a terrifying planet ruled by gorillas, orangutans, and chimpanzees who have copied their former human masters’ language, culture, and technology. The humans, meanwhile, have degenerated into brutal and unsophisticated beasts.

Much of the sinister realism in Planet of the Apes stems from Boulle’s impressive attention to scientific detail and knowledge of research into animal behavior at that time. His book tapped into the still-popular notion that animals such as chimpanzees and dolphins have complex but covert communication systems that humans cannot even fathom. Many people would prefer to think that all those “arrogant” scientists who have concluded that animals cannot talk have simply failed to decode animals’ calls.

But Boulle’s book is decidedly a work of fiction, because apes here on Earth could never actually acquire human culture solely through imitation. In reality, complex culture requires underlying biological capabilities that are fashioned over long periods of evolution. Chimpanzees simply do not have the vocal control or physiology to produce speech.

Moreover, modern apes could not be made highly intelligent even with brain-enhancing drugs. And although microbes can change behavior – such as when rabies renders its host violent and aggressive – they could never bestow language upon a species.

We know this because animal communication has been investigated extensively for more than a century, and the scientific evidence yields few hints of truly complex communication faculties in non-human species. For example, in the 1940s, researchers raised a chimpanzee named Viki in their home. But Viki learned just four words – “mama,” “papa,” “cup,” and “up” – which was more than could be said for an earlier experiment in which a chimpanzee and a human child were reared together. That exercise had to be abandoned after the chimpanzee failed to learn a single word, and the child actually started imitating chimpanzee sounds.

In the following decades, studies teaching apes sign language generated much excitement. And yet virtually all linguists would agree that the apes in these experiments had not produced language. They could memorize the meanings of signs, but they could not learn the rules of grammar.

Tellingly, utterances by “talking” apes proved to be exceedingly egocentric. Even when equipped with the means to talk, apes limit their communications to expressions of desire such as “Gimme food.” The longest recorded statement of any “talking” ape, by a chimpanzee named Nim Chimpsky, was, “Give orange me give eat orange me eat orange give me eat orange give me you.” It turns out that chimpanzees, bonobos, and gorillas make for poor conversationalists.

By contrast, within months of uttering their first words, two-year-old children can produce complex, grammatically correct, and topically diverse sentences comprising verbs, nouns, prepositions, and determiners. They can do so because human minds have evolved to comprehend and produce language.

Many scholars believe that language emerged from the use of meaningful signs. Our ancestors were immersed in a symbol-rich world, and this generated evolutionary feedback favoring the neural structures that enable us to manipulate symbols efficiently. The syntax in human language today was made possible by our ancestors’ long use of symbolic proto-languages. Genes and culture coevolved to reorganize the human brain.

The same is true of warfare, which is much more than just scaled-up aggression. In war, complex institutions dictate strict behavioral codes and individual roles that facilitate cooperation. Research suggests that this level of cooperation could not evolve in a species that lacked a complex culture and such features as institutionalized punishment and socially sanctioned retaliation.

Most of these norms are not obvious, and thus have to be inculcated, usually during youth. But even among apes that are proficient imitators, there is little compelling evidence that behaviors are actively taught. When apes do cooperate, it is largely to help relatives. The scale of human cooperation, which involves huge numbers of unrelated individuals working together, is unprecedented largely because it is built upon learned and socially transmitted norms.

There is now extensive evidence that our ancestors’ cultural activities changed the human brain through natural selection, which then further enhanced our cultural capabilities in recurring cycles. For instance, milk-drinking began with early Neolithic humans, who were consequently exposed to strong selection favoring genes that break down energy-rich lactose. This genetic-cultural coevolution explains why many of us with pastoralist ancestors are lactose tolerant.

It is little wonder that Boulle put such an emphasis on imitation. Humans are descended from a long line of imitators, who mimicked each other’s fear responses to identify predators and avoid danger. Today, this is reflected in empathy and other forms of emotional contagion that make movies a heartfelt experience. Without these traits, we would all watch movies like sociopaths, equally unmoved by a murder or a kiss.

It was also through imitation that our forebears learned how to butcher carcasses, build fires, and make digging tools, spears, and fishing hooks. These and countless other skills left us supremely adapted to decipher others’ movements, and reproduce them with our own muscles, tendons, and joints. Eons later, today’s movie stars demonstrate the same aptitude when imitating the movements of other primates, with a precision that no other species can match.

Human culture, having evolved over millennia, is not something that another species can easily pick up. We can rest assured that there will be no inter-primate war on Earth. For that to happen, another species would have to undergo a similarly prolonged evolutionary journey. And the only real warmongering ape on the planet seems hell-bent on preventing that.

Kevin Laland is Professor of Behavioral and Evolutionary Biology at the University of St Andrews, UK, and author of Darwin’s Unfinished Symphony: How Culture Made the Human Mind.

By Kevin Laland

The New Socialism of Fools

BERKELEY – According to mainstream economic theory, globalization tends to “lift all boats,” and has little effect on the broad distribution of incomes. But “globalization” is not the same as the elimination of tariffs and other import barriers that confer rent-seeking advantages on politically influential domestic producers. As Harvard University economist Dani Rodrik frequently points out, economic theory predicts that removing tariffs and non-tariff barriers does produce net gains; but it also causes large redistributions, and the smaller the barriers being removed, the larger those redistributions loom relative to the net gains.
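
A back-of-the-envelope calculation makes Rodrik’s point concrete. In the standard partial-equilibrium approximation (our illustration, not Rodrik’s own algebra), removing a small tariff t on import volume M yields an efficiency gain proportional to the square of the tariff, while the redistribution it triggers is proportional to the tariff itself:

\[
\text{Net gain} \approx \tfrac{1}{2}\, t^{2} \left|\frac{dM}{dp}\right|, \qquad
\text{Redistribution} \approx t \cdot M, \qquad
\frac{\text{Redistribution}}{\text{Net gain}} \propto \frac{1}{t}.
\]

Halving the tariff thus halves the redistribution but cuts the net gain by three-quarters, so dismantling ever-smaller barriers churns up ever more redistribution per dollar of efficiency gain.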


Globalization, for our purposes, is different. It should be understood as a process in which the world becomes increasingly interconnected through technological advances that drive down transportation and communication costs.

To be sure, this form of globalization allows foreign producers to export goods and services to distant markets at a lower cost. But it also opens up export markets and reduces costs for the other side. And at the end of the day, consumers get more stuff for less.

According to standard economic theory, redistribution only comes about when a country’s exports require vastly different factors of production than its imports. But there are no such differences in today’s global economy.

In the United States, a balance-of-payments surplus in finance means that more Americans will be employed as construction workers, capital-goods producers, and nurses and home health aides. Similarly, a surplus in services means that more Americans will work not only as highly educated (and well-remunerated) consultants in steel-and-glass eyries, but also as, say, janitors and housekeepers in motels outside of Yellowstone National Park.

At the same time, a deficit in manufacturing may create more manufacturing jobs abroad, in countries where labor costs are low relative to capital; but it destroys relatively few jobs in the US, where manufacturing is already a highly capital-intensive industry. As Stanford University economist Robert Hall has been pointing out for three decades, more Americans are employed selling cars than making them. The commodities that the US imports from abroad embody a significant amount of relatively unskilled labor, but they do not displace much unskilled labor in America.

So, at least in theory, the shift in US employment from assembly-line manufacturing to construction, services, and caretaking may have had an impact on the overall distribution of income in terms of gender, but not in terms of class. Why, then, has there been such strong political resistance to globalization in the twenty-first century? I see four reasons.

First and foremost, it is easy for politicians to pin the blame for a country’s problems on foreigners and immigrants who do not vote. Back in 1890, when politicians in the Habsburg Empire routinely blamed Jews for various socioeconomic ills, the Austrian dissident Ferdinand Kronawetter famously observed that “Der Antisemitismus ist der Sozialismus der dummen Kerle”: anti-Semitism is the socialism of fools. The same could be said of anti-globalization today.

Second, more than a generation of inequitable and slower-than-expected economic growth in the global North has created a strong political and psychological need for scapegoats. People want a simple narrative to explain why they are missing out on the prosperity they were once promised, and why there is such a large and growing gap between an increasingly wealthy overclass and everyone else.

Third, China’s economic rise coincided with a period in which the global North was struggling to reach full employment. Contrary to what the followers of Friedrich von Hayek and Andrew Mellon have always claimed, economic readjustments do not happen when bankruptcies force labor and capital out of low-productivity, low-demand industries, but rather when booms pull labor and capital into high-productivity, high-demand industries.

Thus, neoliberalism does not just require open and competitive markets, global change, and price stability. It also depends on full employment and near-permanent booms, just as economist John Maynard Keynes had warned in the 1920s and 1930s. In recent decades, the neoliberal order failed to deliver either condition, most likely because doing so would have been impossible even with the best policies in place.

Fourth, policymakers did not do enough to compensate for this failure with more aggressive social policies and economic and geographic redistribution. When US President Donald Trump recently told upstate New Yorkers that they should leave the region and seek jobs elsewhere, he was simply echoing the past generation of center-right politicians in the global North.

The global North’s current political and economic dilemmas are not so different from those of the 1920s and 1930s. As Keynes noted then, the key is to produce and maintain full employment, at which point most other problems will melt away.

And, as the Austro-Hungarian economist Karl Polanyi argued, it is the role of government to secure socioeconomic rights. People believe that they have a right to live in healthy communities, hold stable occupations, and earn a decent income that rises over time. But these presumed rights do not stem naturally from property rights and claims to scarce resources – the coins of the neoliberal realm.

It has been ten years since the global financial crisis and the start of the “Great Recession” in the global North. Governments still have not repaired the damage from those events. If they do not do so soon, the “-isms” of fools will continue to wreak havoc in the decades ahead.

J. Bradford DeLong, a former deputy assistant US Treasury secretary, is Professor of Economics at the University of California at Berkeley and a research associate at the National Bureau of Economic Research.

By J. Bradford DeLong

America’s Dangerous Anti-Iran Posturing

NEW YORK – In recent weeks, US President Donald Trump and his advisers have joined Saudi Arabia in accusing Iran of being the epicenter of Middle East terrorism. The US Congress, meanwhile, is readying yet another round of sanctions against Iran. But the caricature of Iran as “the tip of the spear” of global terrorism, in Saudi King Salman’s words, is not only wrongheaded, but also extremely dangerous, because it could lead to yet another Middle East war.


In fact, that seems to be the goal of some US hotheads, despite the obvious fact that Iran is on the same side as the United States in opposing the Islamic State (ISIS). And then there’s the fact that Iran, unlike most of its regional adversaries, is a functioning democracy. Ironically, the escalation of US and Saudi rhetoric came just two days after Iran’s May 19 election, in which moderates led by incumbent President Hassan Rouhani defeated their hardline opponents at the ballot box.

Perhaps for Trump, the pro-Saudi, anti-Iran embrace is just another business proposition. He beamed at Saudi Arabia’s decision to buy $110 billion of new US weapons, describing the deal as “jobs, jobs, jobs,” as if the only gainful employment for American workers requires them to stoke war. And who knows what private deals for Trump and his family might also be lurking in his warm embrace of Saudi belligerence.

The Trump administration’s bombast toward Iran is, in a sense, par for the course. US foreign policy is littered with absurd, tragic, and hugely destructive foreign wars that served no real purpose except the pursuit of some misguided strand of official propaganda. How else, in the end, to explain America’s useless and hugely costly entanglements in Vietnam, Afghanistan, Iraq, Syria, Libya, Yemen, and many other conflicts?

America’s anti-Iran animus goes back to the country’s 1979 Islamic Revolution. For the US public, the 444-day ordeal of the US embassy staff held hostage by radical Iranian students constituted a psychological shock that has still not abated. The hostage drama dominated the US media from start to finish, resulting in a kind of public post-traumatic stress disorder similar to the social trauma of the 9/11 attacks a generation later.

For most Americans, then and now, the hostage crisis – and indeed the Iranian Revolution itself – was a bolt out of the blue. Few Americans realize that the Iranian Revolution came a quarter-century after the CIA and Britain’s intelligence agency MI6 conspired in 1953 to overthrow the country’s democratically elected government and install a police state under the Shah of Iran, to preserve Anglo-American control over Iran’s oil, which was threatened by nationalization. Nor do most Americans realize that the hostage crisis was precipitated by the ill-considered decision to admit the deposed Shah into the US for medical treatment, which many Iranians viewed as a threat to the revolution.

During the Reagan Administration, the US supported Iraq in its war of aggression against Iran, including Iraq’s use of chemical weapons. When the fighting finally ended in 1988, the US followed up with financial and trade sanctions on Iran that remain in place to this day. Since 1953, the US has opposed Iran’s self-rule and economic development through covert action, support for authoritarian rule during 1953-79, military backing for its enemies, and decades-long sanctions.

Another reason for America’s anti-Iran animus is Iran’s support for Hezbollah and Hamas, two militant antagonists of Israel. Here, too, it is important to understand the historical context.

In 1982, Israel invaded Lebanon in an attempt to crush militant Palestinians operating there. In the wake of that war, and against the backdrop of anti-Muslim massacres enabled by Israel’s occupation forces, Iran supported the formation of the Shia-led Hezbollah to resist Israel’s occupation of southern Lebanon. By the time Israel withdrew from Lebanon in 2000, nearly 20 years after its original invasion, Hezbollah had become a formidable military, political, and social force in Lebanon, and a continuing thorn in Israel’s side.

Iran also supports Hamas, a hardline Sunni group that rejects Israel’s right to exist. Following decades of Israeli occupation of Palestinian lands captured in the 1967 war, and with peace negotiations stalemated, Hamas defeated Fatah (the Palestine Liberation Organization’s political party) at the ballot box in the 2006 election for the Palestinian parliament. Rather than entering into a dialogue with Hamas, the US and Israel decided to try to crush it, including through a brutal war in Gaza in 2014, resulting in a massive Palestinian death toll, untold suffering, and billions of dollars in damage to homes and infrastructure in Gaza – but, predictably, leading to no political progress whatsoever.

Israel also views Iran’s nuclear program as an existential threat. Hardline Israelis repeatedly sought to convince the US to attack Iran’s nuclear facilities, or at least allow Israel to do so. Fortunately, President Barack Obama resisted, and instead negotiated an agreement between Iran and the five permanent members of the United Nations Security Council (plus Germany) that blocks Iran’s path to nuclear weapons for a decade or more, creating space for further confidence-building measures on both sides. Yet Trump and the Saudis seem intent on destroying the possibility of normalizing relations created by this important and promising agreement.

External powers are extremely foolish to allow themselves to be manipulated into taking sides in bitter national or sectarian conflicts that can be resolved only by compromise. The Israel-Palestine conflict, the competition between Saudi Arabia and Iran, and the Sunni-Shia relationship all require mutual accommodation. Yet each side in these conflicts harbors the tragic illusion of achieving an ultimate victory without the need to compromise, if only the US (or some other major power) will fight the war on its behalf.

During the past century, Britain, France, the US, and Russia have all misplayed the Middle East power game. All have squandered lives, money, and prestige. (Indeed, the Soviet Union was gravely, perhaps fatally, weakened by its war in Afghanistan.) More than ever, we need an era of diplomacy that emphasizes compromise, not another round of demonization and an arms race that could all too easily spiral into disaster.

Jeffrey D. Sachs, Professor of Sustainable Development and Professor of Health Policy and Management at Columbia University, is Director of Columbia’s Center for Sustainable Development and the UN Sustainable Development Solutions Network.

By Jeffrey D. Sachs

Recovery Is Not Resolution

CAMBRIDGE – Earlier this year, the consensus view among economists was that the United States would outstrip its advanced-economy rivals. The expected US growth spurt would be driven by the economic stimulus package promised during President Donald Trump’s election campaign. But the most notable positive economic news of 2017 among the developed countries has been coming from Europe.


Last week, the International Monetary Fund revised upward its growth projections for the eurozone, with the more favorable outlook extending broadly across member countries and including the Big Four: Germany, France, Italy, and Spain. IMF Chief Economist Maurice Obstfeld characterized recent developments in the global economy as a “firming recovery.” Growth is also expected to pick up in Asia’s advanced economies, including Japan.

As I noted in a previous commentary, Iceland, where the financial crisis dates to 2007, has already been dealing with a fresh wave of capital inflows for some time, leading to concerns about potential overheating. A few days ago, Greece, the most battered of Europe’s crisis countries, was able to tap global financial markets for the first time in years. With a yield of more than 4.6%, Greece’s bonds were enthusiastically snapped up by institutional investors.

Greek and European officials hailed the bond sale as a milestone for a country that had lost access to global capital markets back in 2010. Greek Prime Minister Alexis Tsipras said the debt issue was a sign that his country is on the path to a definitive end to its prolonged crisis.

In the US, the Federal Reserve’s ongoing exit from ultra-easy post-crisis monetary policy adds to the sense among market participants and other countries’ policymakers that normal times are returning.

But are they? Do recent positive developments in the advanced countries, which were at the epicenter of the global financial crisis of 2008, mean that the brutal aftermath of that crisis is finally over?

Good news notwithstanding, declaring victory at this stage (even a decade later) appears premature. Recovery is not the same as resolution. It may be instructive to recall that in other protracted post-crisis episodes, including the Great Depression of the 1930s, economic recovery without resolution of the fundamental problems of excessive leverage and weak banks usually proved shallow and difficult to sustain.

During the “lost decade” of the Latin American debt crisis in the 1980s, Brazil and Mexico had a significant and promising growth pickup in 1984-1985 – before serious problems in the banking sector, an unresolved external debt overhang, and several ill-advised domestic policy initiatives cut those recoveries short. The post-crisis legacy was finally shaken off only several years later with the restoration of fiscal sustainability, debt write-offs under the so-called Brady Plan, and a variety of domestic structural reforms.

Since its 1992 banking crisis, Japan has suffered several false starts. There were recoveries in 1995-1996 and again in 2000 and 2010; but they tended to be cut short by the failure to write down bad debt (the so-called zombie loans), several premature policy reversals, and an increasingly unsustainable accumulation of government debt.

The eurozone emerged from the financial crisis in 2008-2009 with some economic momentum. Unlike the Federal Reserve, however, the European Central Bank hiked interest rates in early 2011, which contributed to the region’s descent into a deeper crisis.

History, therefore, suggests caution before concluding that the current recovery has the makings of a more sustainable and broad-based variety. Many of the economic problems created or exacerbated by the crisis remain unresolved.

All of the advanced economies (to varying degrees) have significant legacy debts (public and private) from the excesses that set the stage for the financial crisis, as well as from the prolonged impact of the crisis on the real economy. Low interest rates have eased the burden of those debts (in effect, negative real interest rates are a tax on bondholders), but rates are on the rise.

Political polarization in the US and the United Kingdom is at or near historic highs, depending on the measure used. As a result, many critical but politically sensitive policies to ensure future fiscal sustainability remain unresolved in both countries.

The UK’s withdrawal from the European Union – and Brexit’s medium-term impact on the British economy – is another source of risk that has yet to be tackled. How Japan will resolve its public and private debt overhang is yet to be determined. I have argued elsewhere that inflation will likely be part of that resolution, as it is improbable that an aging population will vote to raise its tax burden and reduce its benefits sufficiently to put Japan’s debt trajectory on a sustainable path.

In Europe, the high level of non-performing loans continues to act as a drag on economic growth, by inhibiting new credit creation. Furthermore, these bad assets pose a substantial contingent liability for some governments. Target2, the euro’s real-time gross settlement system, has emerged as the eurozone’s mechanism for financing widening structural balance-of-payments gaps, whereby capital flows out of southern Europe into Germany. For Greece, Italy, Portugal, and Spain, public-sector debt must now also include their central banks’ sharply rising debts.

Perhaps the main lesson is that even more caution is warranted in deciding whether the time is ripe to “normalize” monetary policy. Even in the best of recovery scenarios, policymakers would be ill-advised to kick the can down the road on structural reforms and fiscal measures needed to mitigate risk premia.

Carmen Reinhart is Professor of the International Financial System at Harvard University’s Kennedy School of Government.

By Carmen M. Reinhart

Ten Lessons from North Korea’s Nuclear Program

SEOUL – North Korea has produced a number of nuclear warheads and is developing ballistic missiles capable of delivering them around the world. Many governments are debating how to prevent or slow further advances in North Korea’s capacity and what should be done if such efforts fail.


These are obviously important questions, but they are not the only ones. It also is important to understand how North Korea has succeeded in advancing its nuclear and missile programs as far as it has, despite decades of international efforts. It may be too late to affect North Korea’s trajectory decisively; but it is not too late to learn from the experience. What follows are ten lessons that we ignore at our peril.

First, a government that possesses basic scientific knowhow and modern manufacturing capability, and is determined to develop a number of rudimentary nuclear weapons, will most likely succeed, sooner or later. Much of the relevant information is widely available.

Second, help from the outside can be discouraged and limited but not shut down. Black markets exist any time there is a profit to be made. Certain governments will facilitate such markets, despite their obligation not to do so.

Third, there are limits to what economic sanctions can be expected to accomplish. Although sanctions may increase the cost of producing nuclear weapons, history suggests that governments are willing to pay a significant price if they place a high enough value on having them. There is also evidence that some or all of the sanctions will eventually disappear, as other governments come to accept the reality of a country’s nuclear status and choose to focus on other objectives. That is what happened in the case of India.

Fourth, governments are not always willing to put global considerations (in this case, opposition to nuclear proliferation) ahead of what they see as their immediate strategic interests. China opposes proliferation, but not as much as it wants to maintain a divided Korean Peninsula and ensure that North Korea remains a stable buffer state on its borders. This limits any economic pressure China is prepared to place on North Korea over its nuclear efforts. The United States opposed Pakistan’s development of nuclear weapons, but was slow to act, owing to its desire in the 1980s for Pakistani support in fighting the Soviet Union’s occupation of Afghanistan.

Fifth, some three quarters of a century since they were first and last used, and a quarter-century after the Cold War’s end, nuclear weapons are judged to have value. This calculation is based on security more than prestige.

Decades ago, Israel made such a calculation in the face of Arab threats to eliminate the Jewish state. More recently, Ukraine, Libya, and Iraq all gave up their nuclear weapons programs either voluntarily or under pressure. Subsequently, Ukraine was invaded by Russia, Iraq by the US, and Libya by the US and several of its European partners. Saddam Hussein in Iraq and Muammar el-Qaddafi in Libya were ousted.

North Korea has avoided such a fate, and the third generation of the Kim family rules with an iron fist. It is doubtful that the lesson is lost on Kim Jong-un.

Sixth, the Non-Proliferation Treaty – the 1970 accord that underpins global efforts to discourage the spread of nuclear weapons beyond the five countries (the US, Russia, China, the United Kingdom, and France) that are recognized as legitimate nuclear weapons states for an unspecified but limited period of time – is inadequate. The NPT is a voluntary agreement. Countries are not obliged to sign it, and they may withdraw from it, with no penalty, if they change their mind. Inspections meant to confirm compliance are conducted largely on the basis of information provided by host governments, which have been known not to reveal all.

Seventh, new diplomatic efforts, like the recent ban on all nuclear weapons organized by the United Nations General Assembly, will have no discernible effect. Such pacts are the modern-day equivalent of the 1928 Kellogg-Briand Pact, which outlawed war.

Eighth, there is a major gap in the international system. There is a clear norm against the spread of nuclear weapons, but there is no consensus or treaty on what, if anything, is to be done once a country develops or acquires them. Preventive strikes (against a gathering threat) and preemptive strikes (against an imminent threat) are legally and diplomatically controversial, which makes them easier to propose than to implement.

Ninth, the alternatives for dealing with nuclear proliferation do not improve with the passage of time. In the early 1990s, the US considered using military force to nip the North Korean program in the bud, but held off for fear of triggering a second Korean War. That fear remains today, when any use of force would need to be far larger in scope, with even less certainty of success.

Finally, not every problem can be solved. Some can only be managed. It is much too soon, for example, to conclude that Iran will not one day develop nuclear weapons. The 2015 accord delayed that risk, but by no means eliminated it. It remains to be seen what can be done vis-à-vis North Korea. Managing such challenges may not be satisfying, but often it is the most that can be hoped for.

Richard N. Haass is President of the Council on Foreign Relations and the author, most recently, of A World in Disarray: American Foreign Policy and the Crisis of the Old Order.

By Richard N. Haass

When Populism Can Kill

LONDON – Unfounded skepticism about vaccines in some communities, in developing and developed countries alike, has emerged in recent years as one of the most serious impediments to global progress in public health. Indeed, it is one of the primary reasons why eradicable infectious diseases persist today.


For example, the effort to eradicate polio worldwide has been disrupted in Afghanistan, Pakistan, and Nigeria, where rule by Islamist militants has led to increased resistance against vaccination campaigns. And many high-income countries have experienced measles outbreaks in recent years, owing to fears about vaccinations that began with the publication of a fraudulent paper in the British medical journal The Lancet in 1998.

More recently, skepticism about vaccine safety and efficacy has been on the rise in Southern Europe. According to a 2016 study, Greece is now among the ten countries worldwide with the lowest confidence in vaccine safety. And, as Greek Minister of Health Andreas Xanthos has noted, health-care professionals are increasingly encountering parents who have fears about vaccinating their children.

Similarly, in Italy, Minister of Health Beatrice Lorenzin recently warned of a “fake news” campaign, backed by the opposition Five Star Movement, to dissuade parents from vaccinating their children. Already, the share of Italian two-year-olds who have been inoculated against measles is under 80%, well below the World Health Organization’s recommended threshold of 95%. So it should come as no surprise that Italy had five times more measles cases in April of this year than it did in April 2016.

In May, Greece and Italy responded to vaccine skepticism with very different policies. In Greece, despite the fact that child vaccination has been mandatory since 1999 (unless a child has a certified medical condition), Xanthos has advocated an opt-out option for parents who do not want to vaccinate their children.

By contrast, Italy’s center-left Democratic Party government has made vaccinations against 12 preventable diseases compulsory for all children. Under a new law, unvaccinated children are not permitted to attend school, and parents of unvaccinated children can be fined for their children’s non-attendance. According to Lorenzin, the law is meant to send “a very strong message to the public” about the importance of inoculation.

In other words, two left-wing governments have responded to the same public health problem in very different ways. Whereas Greece moved from paternalism to laissez faire, Italy moved in the opposite direction.

The decision by Greece’s Syriza-led government is surely the stranger of the two, given that Syriza tends to favor robust state intervention in most other policy areas. In Italy, the government is responding to the populist Five Star Movement’s anti-vaccination agenda, which has become a part of its broader campaign against the state, established political parties, and the “experts” responsible for the 2008 financial crisis and the eurozone’s prolonged economic malaise.

But, putting politics aside, there are compelling reasons why governments should mandate vaccinations for all children, rather than leaving it up to parents to decide. Ultimately, the state has a responsibility to protect vulnerable individuals – in this case young children – from foreseeable harm.

In 1990, Greece signed the United Nations Convention on the Rights of the Child, in which it recognized all children’s right to “the highest attainable standard of health and to facilities for the treatment of illness and rehabilitation of health.” But by allowing misinformed parents to forgo vaccinations, Greece is exposing children to preventable infectious diseases and openly violating its pledge to ensure “that no child is deprived of his or her right of access to such health-care services.”

Moreover, governments have a responsibility to establish public goods through legislation, and “herd immunity” is one such good. Herd immunity describes a level of vaccination coverage that is high enough to prevent a disease from spreading through the population. Achieving herd immunity is one of the only ways to protect vulnerable members of a community who cannot be vaccinated because they are immunocompromised, or simply too old.
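
The WHO’s 95% figure cited above is not arbitrary. As a rough illustration (a textbook epidemiological approximation, not a figure from the original article), the herd-immunity threshold V* for a disease with basic reproduction number R_0 is:

\[
V^{*} = 1 - \frac{1}{R_{0}}
\]

For measles, with R_0 commonly estimated at 12 to 18, this gives V* between roughly 0.92 and 0.94; once imperfect vaccine efficacy is factored in, the required coverage rises to about the 95% that the WHO recommends.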

In addition, vaccination is a crucial instrument in the fight against one of the twenty-first century’s biggest health challenges: antimicrobial resistance. By preventing infections, vaccines also prevent overuse of antibiotics, thereby slowing down the development of drug resistance. More generally, it is widely known that high vaccination coverage results in a healthier population, and that healthier people can contribute more, both economically and socially, to their communities.

No medical or technical obstacles are blocking us from eradicating preventable infectious diseases such as measles and polio. Rather, the biggest hurdle has been popular resistance to vaccination. By allowing parents to make uninformed decisions about the health of not just their own children, but their entire community, the Syriza government is only adding to the problem. Governments should be educating the public to improve overall coverage, not validating unfounded fears about vaccine safety.

No country can achieve herd immunity – and eventually eradicate preventable infectious diseases – if it allows parents to opt out of vaccinating their children, as in Greece. But it also will not do simply to sanction noncompliant parents, as in Italy. Ultimately, to defeat infectious diseases, we will have to restore faith in expertise, and rebuild trust with communities that have grown increasingly suspicious of authority in recent years.

Domna Michailidou works for the Economics Department of the OECD and teaches at the Center for Development Studies at the University of Cambridge and the UCL School of Public Policy. Jonathan Kennedy teaches at the UCL School of Public Policy and is a research associate in the Department of Sociology at the University of Cambridge.

By Domna Michailidou and Jonathan Kennedy
