Opinion

The Moral Urgency of Mental Health

PRINCETON – If we can prevent great suffering at no cost to ourselves, we ought to do so. That principle is widely accepted and difficult to dispute. Yet Western governments are neglecting an opportunity to reduce the great misery caused by mental illness, even though the net cost would be nil.


The evidence for this claim comes from recent research by a team of economists at the London School of Economics. The team, directed by Richard Layard, drew on data from four major developed countries (Australia, Britain, Germany, and the United States) in which people were asked to indicate, on a 0-10 scale, how satisfied they were with their life.

The researchers refer to those in the bottom 10% of the population in terms of life satisfaction as being in “misery.” Respondents also answered other questions designed to indicate factors affecting life satisfaction.

When Layard’s team analyzed the results, they found that the biggest factors affecting misery were all non-economic: mental health, physical health, and whether someone had a partner. Mental health was the biggest predictor of all; it explained twice as much of the difference between people in terms of life satisfaction as physical health or income inequality did. (This was also true for those in the non-miserable 90% of the population).

Overall, the researchers claim, eliminating depression and anxiety would reduce misery by 20%, whereas eliminating poverty would reduce it by just 5%. If we want to reduce misery in the developed world, then mental health is the biggest challenge we need to overcome.

Many people will find this result surprising. After all, most of us expect that we would become happier if we were richer. So why is mental health, not poverty, the biggest cause of misery?

The answer is that people adapt to higher levels of income over time – a phenomenon known as “hedonic adaptation” – and they compare their income to that of their peers. This gives rise to the so-called Easterlin Paradox, the finding that although richer people are more satisfied with their lives than poorer people, economic growth has often not increased overall life satisfaction in the developed world. If your neighbor becomes richer, you feel poorer. If both of you become richer, neither of you is likely to be significantly happier. In contrast, people do not adapt to poor mental health; nor does your neighbor’s misery make you feel better.

Given that mental health has the greatest impact on life satisfaction, we still need to ask if addressing it is the most cost-effective way for governments to reduce misery. Layard and his colleagues asked how much the British government would have to spend to tackle mental health, physical health, unemployment, or poverty. They concluded that mental health would be the cheapest of the four options: around 18 times more cost-effective in reducing misery and promoting happiness than targeting poverty.

In the United Kingdom, it costs about £650 per patient to provide psychotherapy, which is effective for about 50% of patients. That figure indicates how much governments would need to spend, but does not take into account what they might get back.

Reducing mental illness enables many people to return to work, thereby reducing the cost of unemployment benefits while increasing tax receipts. Hence Layard and his colleagues hypothesized that treating mental health would pay for itself. In effect, the UK government would be able to reduce misery at no cost.

Further economic research, this time by Paul Frijters and his colleagues, also of the LSE, has now assessed the impact of the UK’s Improving Access to Psychological Therapies program, a scheme conceived by Layard and the psychologist David Clark, which was launched in 2008. They conclude that the increase in tax receipts and the reduction in unemployment benefits pay back only about 20% of the cost of treating mental illness. However, they argue that mental health treatment still funds itself because those who receive psychotherapy demand far fewer physical health services.
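
The payback logic described above can be checked with simple arithmetic. This sketch uses only the figures given in the text (the £650 cost per patient and the roughly 20% direct payback); the remainder is what the authors claim is covered by reduced demand for physical health services:

```python
# Rough sketch of the payback arithmetic (illustrative only; the £650
# cost and the ~20% direct payback share come from the article, and the
# "remainder" is the portion the authors attribute to savings on
# physical health services).
cost_per_patient = 650          # GBP, cost of a course of psychotherapy
direct_payback_share = 0.20     # recouped via taxes and lower unemployment benefits

direct_payback = cost_per_patient * direct_payback_share
remainder = cost_per_patient - direct_payback

print(direct_payback)  # 130.0 GBP recouped directly per patient
print(remainder)       # 520.0 GBP to be offset by lower physical-health spending
```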

In reality, the UK did not shrink its health budget; treating mental illness instead freed up resources that could be used for other patients. But this effect was so substantial that Frijters claims we could expand treatment to all 12% of the UK population who have mild to moderate anxiety or depression and expect the investment to pay for itself in savings within two or three years.

Attitudes toward mental health have changed dramatically in the last few years; even princes and athletes now feel able to open up about it. In the UK, a study showed that mental illness affects one in four people in any year, while research carried out in 30 European countries found that 38% of the population suffered from some kind of mental or neurological illness. What has not been grasped is that this suffering is largely avoidable.

Governments are starting to regard mental health as seriously as they do physical health. But they could do much more. Increasing their spending on mental health could reduce an immense amount of misery – and at no cost in the long run.

Of course, some mental illnesses are more difficult to treat than moderate depression or anxiety, and at some point higher spending may not pay for itself. But until that point has been reached we should all agree on the moral urgency of a radical expansion in funding for mental health.

Michael Plant is a Ph.D. student in philosophy at the University of Oxford. His research centers on finding the best ways to improve worldwide happiness. Peter Singer is Professor of Bioethics at Princeton University and Laureate Professor at the University of Melbourne. His books include Practical Ethics, The Most Good You Can Do, One World Now, and Ethics in the Real World.

By Michael Plant and Peter Singer

Saudi Arabia’s Populist Temptation

NEW YORK – Most efforts to comprehend the dynamics of Saudi Arabia’s ongoing political earthquake have focused on the psychology of the young crown prince, Mohammed bin Salman. But there are also structural reasons for Prince Mohammed’s brand of populism. Understanding these factors is key to finding a better path forward.


In the past, political stability in Saudi Arabia rested on three separate deals: within the royal family; between the royal family and the Kingdom’s traditional elites; and between the state and the population.

The deal within the Al Saud family is rooted in asabiyya – the ability of an ambitious tribe to stick together to monopolize power. But the royal family has grown too large and become too divided to justify the cost of maintaining its unity. Loosely estimated, the 5,000 or so third-generation princes and their entourage consume $30-50 billion per year.

The deal among traditional elites is also rooted in the Kingdom’s genesis. These notable families were encouraged to accumulate economic power. Privileged access to government contracts, subsidies, capital, protection from competition, and the ability to import labor freely have embedded their companies deeply in the economy.

This protected elite private sector grew to represent over 50% of Saudi GDP. But, because it is largely staffed by expats, it generates no trickle-down benefits to the local population, only negative externalities.

The population, meanwhile, was offered economic security in exchange for loyalty – an arrangement institutionalized through a patronage network of high-paying public-sector jobs and a broad array of generous welfare benefits and consumer subsidies. As a result, more than 75% of Saudi citizens work for the state, and much of the rest of the public budget is spent on cradle-to-grave social support.

But with per capita revenue from oil exports now only $5,000 a year for Saudi Arabia’s 20 million nationals, the system has become too costly. The challenge for Prince Mohammed is to oversee a transition to a less expensive political order, while generating sufficient economic efficiency gains to prevent the necessary adjustment from fueling instability and civil unrest.

Other autocratic regimes in the region, with larger populations and less oil – such as Iraq, Egypt, Algeria, and Syria – followed a “republican strategy” that appeased the poor with various forms of patronage, and repressed economic elites. This blocked the rise of any credible opposition, at the cost of entrenching an anemic, largely informal, and consumption-based economy.

Such a Venezuela-style approach could appeal to Prince Mohammed, because its populist fervor aligns with his purges of elites and neutralization of any serious opposition. Foreign and state-controlled firms could replace the notables in delivering necessary private services. And the balance of payments could be stabilized with lower consumption and imports, particularly those of the royals and the rich.

The problem with this approach is that it would only delay the essential challenge of raising labor productivity. While other autocrats under pressure – such as Turkey’s Recep Tayyip Erdoğan and Russia’s Vladimir Putin – are increasingly choosing this myopic route of sacrificing the private sector on the altar of regime survival, the Kingdom can do better, given the assets at its disposal.

The alternative of an authoritarian ruling coalition of traditional elites is even less attractive to Saudi Arabia’s current rulers, as it would entail lower levels of consumption for ordinary people – and thus, in all likelihood, higher levels of repression. Domestic strife is the last thing the crown prince needs.

A better way forward requires more balance and better coordination. The pain of adjustment should be shared more widely among all groups, and reforms should focus much more on enlarging the economic pie.

This route is feasible, thanks to Saudi Arabia’s abundance of low-hanging fruit: a youthful society clamoring for social emancipation, better-educated women yearning for greater participation, and millions of jobs now held by expats that nationals could fill.

What clouds this scenario is the low productivity of the elite private sector. To break free of its middle-income trap, Saudi Arabia needs to democratize, if not its politics, then at least its markets, through greater reliance on the rule of law and fair competition. Viewed from this perspective, Prince Mohammed’s current anti-corruption campaign will need to be followed by efforts to establish more inclusive rules for the private sector.

If the Kingdom’s private sector can be made to work, the economic challenge becomes modest. About 200,000 young people enter the labor market every year. If roughly as many jobs again are needed to bring women into the workforce and to wind down the public sector gradually, some two million new jobs would be required over the next five years. To put this in perspective, nine million foreign workers are currently employed in the Kingdom.
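
The job arithmetic above can be laid out explicitly. The 200,000 annual entrants and the nine million foreign workers are the article’s figures; the assumption that women’s entry and the public-sector winddown roughly double the requirement is the author’s:

```python
# Back-of-the-envelope check of the job numbers cited above.
# The doubling for women's entry and the public-sector winddown is the
# article's assumption, not an independent estimate.
entrants_per_year = 200_000
years = 5

jobs_for_new_entrants = entrants_per_year * years           # 1,000,000
jobs_for_women_and_winddown = jobs_for_new_entrants         # roughly as many again
total_jobs_needed = jobs_for_new_entrants + jobs_for_women_and_winddown

foreign_workers = 9_000_000

print(total_jobs_needed)                      # 2,000,000 over five years
print(total_jobs_needed / foreign_workers)    # ~0.22 of current expat-held jobs
```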

Rather than new mega-investments in high tech, the difficult route of Saudization, initiated a decade ago, can gradually do the job, if augmented by greater support for competition and for small and medium-size enterprises. But the starting point is challenging, because public servants currently earn three times more than private-sector workers. To unify the labor market, a medium-term goal could be to reduce Saudi workers’ pay by a third, increase their productivity by a third, and subsidize the rest from state coffers.

The populist temptation promises at best an authoritarian, middle-income welfare state. Saudi Arabia would be better served by a strategy of economic and social inclusion that broadens the basis of political support by convincing all influential groups – royals, notables, and mere mortals – to view their short-term losses as an investment in the Kingdom’s future.

Ishac Diwan is an affiliate at the Belfer Center’s Middle East Initiative at Harvard University and holds the Chaire d’Excellence Monde Arabe at Paris Sciences et Lettres.

By Ishac Diwan

China’s Vision for the Next 30 Years

SHANGHAI – Every five years, the Communist Party of China convenes a National Congress, where two key decisions are made: who will lead China for the next five years, and what path to development those leaders will follow. The CPC’s recently completed 19th National Congress did all that and more.


Beyond choosing the next Politburo Standing Committee, the 19th Party Congress reelected President Xi Jinping as the CPC’s leader and added his eponymous ideology – “Xi Jinping Thought” – to the Party’s charter. The Congress also produced a blueprint for the country’s future development until 2050, one that reflects the changes that economic reform and opening have brought to China.

At the CPC’s 13th National Congress, in October 1987, China’s leaders declared that the “major contradictions” facing the country were those between “people’s growing material and cultural needs and the backwardness of social production.” In other words, the key challenge was to produce enough food, clothing, and books for all Chinese.

Thirty years later, the major contradiction China faces is that between “rising demand for higher standards of living and the constraints imposed by insufficient and unbalanced economic development.” In his address to the 19th Party Congress, Xi declared that, because China can largely deliver basic necessities to its people, the goal now should be to improve their quality of life.

With that in mind, the 19th Congress charted a new roadmap, based on the “two centennial goals” inherited from the 18th Congress. The first centennial goal is to build a “moderately prosperous society” (xiao-kang) by 2021, the 100th anniversary of the CPC’s founding. The key here is to ensure broad prosperity, with poverty all but eliminated.

The second centennial goal is to transform China into a “fully developed and advanced nation” by 2049, the 100th anniversary of the founding of the People’s Republic. The vision, confirmed at the Congress, is for China to be a prosperous, civilized, harmonious, and modern socialist society, boasting strong governance. Such a China would be a leading global power, ranking high among the advanced economies.

The 19th Party Congress went some way toward marking the path between these two goals, asserting that once the first centennial goal is realized, China’s next task will be to modernize Chinese society by 2035. Such a modern China would be a world leader in innovation, with a clean environment, a large middle class, and much narrower gaps between rural and urban areas in growth, public services, and living standards.

Achieving these goals will require, first and foremost, that China’s leadership understand where China stands in the development process. In this sense, it is promising that China’s leaders admitted at the latest Congress that China is and will remain in the primary phase of socialism. China must, therefore, put development first, with the expectation that economic growth will solve the country’s problems.

Given this, China’s top leaders promised that they would continue implementing structural reforms and advancing economic liberalization. This builds on a resolution, adopted at the Third Plenary Session of the 18th CPC Central Committee in 2013, to give the market the “decisive role” in allocating resources.

As the 19th Party Congress acknowledged, honoring these commitments will require China to protect private property rights and entrepreneurship. The importance of this is highlighted by the fact that the private sector contributes more than 60% of China’s GDP, 50% of its taxes, 70% of its technological and product innovations, and 80% of its jobs, despite accounting for less than 40% of inputs.

As for liberalization, China is committed to implementing policies to open up further its markets to trade and foreign investment, while protecting the legitimate rights and interests of foreign investors. As part of this effort, the government is authorizing further free-trade zones, and exploring the possibility of free-trade ports in selected locations.

China is widely believed to be on track to achieve its goal of becoming a high-income economy by 2035. But it will have to sustain labor-productivity growth of at least 5% annually for the next 15-20 years – an outcome that will depend on rising urbanization and deepening technological progress.

The key to success will be a Chinese leadership that adapts effectively to changing internal and external conditions and manages the risks that have accumulated in recent decades. For example, it must tackle growing income inequality, driven largely by the massive disparity between urban and rural incomes, though the income gap among urban residents is also widening. In 2014, per capita income was CN¥53,300 ($8,024) for the top 5% of households and just CN¥1,600 for the poorest 5%.

According to China Household Financial Survey data, China’s Gini coefficient – the most common measure of inequality – climbed from 0.283 in 1983 to 0.491 in 2008, reaching highs of 0.61 in 2010 and 0.60 in 2012 (much higher than the official figures of 0.481 and 0.474, respectively). Though the Gini coefficient dropped to 0.465 by 2016, that still exceeds the 0.24-0.36 range for major developed economies.
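
For readers unfamiliar with the measure, a Gini coefficient can be computed directly from a list of incomes. The sketch below uses a standard formula and purely illustrative figures, not the survey data cited above:

```python
def gini(incomes):
    """Compute the Gini coefficient of a list of incomes.

    Uses the sorted-rank identity: G = sum_i (2i - n - 1) * x_(i) / (n * sum(x)),
    where x_(i) is the i-th smallest income (i = 1..n).
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

# Perfect equality yields 0; extreme concentration approaches 1.
print(gini([10, 10, 10, 10]))   # 0.0
print(gini([0, 0, 0, 100]))     # 0.75
```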

China also faces increasing wealth disparity. In 1988 and 1995, China’s Gini coefficient of household wealth was just 0.34 and 0.4, respectively. But the coefficient has grown, peaking at 0.739 in 2010. By 2014, the poorest 25% of households owned less than 2% of the country’s total wealth, while the top 1% owned one third.

If China fails to contain inequality, its long-term growth could suffer. But with a clear development blueprint and a powerful leader whose political clout all but guarantees continued reform, China might be in a strong position to address the challenges it faces and sustain its unprecedented economic success.

Yet, even if China achieves its goals for 2050, the challenge will not be over, as China’s leaders will then have to contend with an aging population. By 2050, 36.5% of China’s population will be over the age of 60, according to the 2017 revision of the United Nations’ World Population Prospects. The median age may be as high as 49.6, quite close to Japan’s 53.3 and higher than in Sweden, the United Kingdom, the European Union as a whole, and the United States. This makes it all the more crucial for China’s leaders to make the right decisions and put their country on a stable footing by 2050.

Zhang Jun is Professor of Economics and Director of the China Center for Economic Studies at Fudan University.

By Zhang Jun

The Opiate of the Bosses

LONDON – Business ethics are again making headlines. This time, the focus is on the rapidly escalating opioid crisis that is destroying lives across the United States. While there is plenty of blame to go around, the largest share of the guilt belongs squarely on the shoulders of the major drug companies – Big Pharma.


The cynicism with which pharmaceutical firms have encouraged opioid drug use is appalling. Providing far too little analysis and oversight, they distribute opiates widely, alongside misinformation about how addictive the drugs truly are. Then they entice doctors with inducements and giveaways – including trips, toys, fishing hats, and, in one case, a music CD called “Get in the Swing with OxyContin” (one of the most popular opioids) – to prescribe them.

In 2007, several executives of the parent company of Purdue Pharma, which markets OxyContin, pleaded guilty to misleading doctors, regulators, and patients about the risk of addiction associated with the drug. The company was hit with some $600 million in fines and penalties.

Yet Big Pharma was undeterred. In the decade since, the distribution of opioid drugs has expanded substantially, driving a rapid increase in addiction and death rates. Multiple state attorneys general are now taking drug manufacturers – including Purdue Pharma, Johnson & Johnson, Endo Health Solutions, Inc., and their subsidiaries – to court for marketing and distributing their products by “nefarious and deceptive” means.

Of course, Big Pharma is treading a well-worn path. Energy companies have long been known to make intentionally false statements about climate change, just as mining companies and manufacturing firms, whether in clothing or tech, have persistently turned a blind eye to terrible, even abusive, conditions faced by their workers.

In 1994, the so-called Cigarette Papers, some 4,000 pages of internal documents leaked from the tobacco company Brown & Williamson, showed that the industry engaged for years in a public campaign to deny the addictive qualities of nicotine and the health hazards of smoking, despite industry-funded research showing otherwise. This year, new investigations, including by the World Health Organization, showed that major tobacco companies like Philip Morris have continued to use covert and illicit tactics to advance their business interests, at the expense of public health.

All of this highlights the fundamental flaw in the argument that large-scale deregulation, such as that advocated by US President Donald Trump, benefits societies. Yes, eliminating regulation can help companies to increase their profits. But at what cost?

The opioid epidemic, for example, has become a heavy burden for the US government (and thus taxpayers), as it has strained law enforcement and the health system. And that does not even include the costs borne by the epidemic’s victims and their families and communities. Even funeral directors are facing new risks and challenges, from dealing with overdose victims’ relatives to safe handling of victims’ bodies.

Meanwhile, the companies that have so gleefully enriched their executives and shareholders typically face little, if any, blowback from their illicit or unethical activities. Even when they do, other companies and industries don’t seem to learn from it – or, worse, they learn the wrong lessons.

The lesson pharmaceutical companies seem to have taken from the challenges to Big Tobacco is to hide their activities better, rather than to be better. Perhaps they assumed they would have more leeway, because they also produce life-saving medications.

The good news is that pressure on companies is mounting, not least because some investors are becoming jittery. Last month, a coalition of unions, public pension funds, state treasurers, and others established the Investors for Opioid Accountability. Bringing their collective $1.3 trillion in assets to bear, the coalition’s members plan to scrutinize the actions of boards of directors more closely, in order to strengthen accountability and encourage independent board leadership.

The Nobel laureate economist Milton Friedman famously argued that the only social responsibility of business is to maximize profits. But, when firms’ efforts to create shareholder value lead to such far-reaching consequences – or “externalities,” in economists’ parlance – for the rest of society, the argument that self-interest advances social welfare falls apart.

Physicians are bound by the Hippocratic Oath, which obliges them to do no harm and to uphold medical ethics. But companies, too, have an enormous capacity to do harm, and investments in corporate social responsibility initiatives or community projects do little or nothing to mitigate that harm or offset ethical breaches. If managers’ business strategies fail to reflect their companies’ social responsibility – or, worse, depend on ignoring it – they must be held accountable, just as rogue doctors are (or should be).

Leaders like Trump, who value protecting corporate interests above nearly all else, may be encouraging companies to believe that they have nothing to worry about. But they do. As the US opioid crisis shows, a bottom-line mentality inevitably increases the number of people directly harmed by companies’ behavior. And those people can no longer afford to ignore the lasting damage done to their environment, communities, and families.

Lucy P. Marcus is CEO of Marcus Venture Consulting.

By Lucy P. Marcus

Putting a Price on Rainforests

NEW YORK – In early October, shortly after Hurricane Maria made landfall in Puerto Rico, Tesla CEO Elon Musk said on Twitter that his company could, given the opportunity, rebuild the island’s electrical grid using solar power. Coming in the midst of so much human suffering, it was a bold claim. But from a technological perspective, the timing was perfect. By late October, solar panels and high-capacity batteries had been installed at San Juan’s Hospital del Niño, and additional projects are in the works.


This type of response to a natural disaster – replacing a fossil-fuel-reliant power grid with renewable energy – should be applauded. But no matter how clean and efficient renewable energy sources may be, they will never fully mitigate the climatic effects that are bringing more hurricanes like Maria ashore.

There is another way to do that, and it is far cheaper than what Musk has proposed.

Puerto Rico is home to one of the most efficient and inexpensive tools available in the fight against climate change: rainforests. On the island’s eastern tip, the nearly 29,000-acre El Yunque National Forest is one of the Caribbean’s most important systems for capturing and storing carbon.

Maria destroyed the forest, too. But tech CEOs have not tweeted about restoring that resource, because, at the moment, they see no viable business model for saving trees.

But what if such a model did exist? What if there were ways to make tropical forests worth more alive than dead?

Global leaders have been pondering this question for years. And, at UN climate talks, they have come up with a novel solution: an initiative called Reducing Emissions from Deforestation and Forest Degradation (REDD+). The idea is simple: with the right incentives, people, governments, and industries will preserve and restore tropical forests, rather than plow them under. In return, the world gets more carbon sinks to soak up greenhouse gasses.

REDD+, which has been around in various forms for nearly a decade, provides a payment structure for preservation and restoration efforts. By putting an economic value on forests for the role they play in large-scale carbon capture and storage, REDD+ allows standing trees to compete with lucrative land uses – such as logging or agriculture – that result in deforestation.

The first large-scale REDD+ program, an agreement between Norway and Brazil, was initiated in 2008. Norway agreed to provide $1 billion in “performance-based payments” to Brazil for successfully protecting its rainforests. The money from Norway was released in installments, as Brazil conserved its forests. The results were impressive: Brazil reduced the average rate of Amazon deforestation by over 60% over the last decade, keeping about 3.6 billion tons of carbon dioxide out of the atmosphere – more than any other country. And Norway was able to help mitigate global carbon dioxide emissions.

But, despite the success of the pilot partnership, the REDD+ program today is in dire need of capital. In many ways, the solution is similar to Musk’s solar proposal in Puerto Rico. Only this time, the innovation is not technical, but financial.

Creating a market for REDD+ credits would create investment opportunities in tropical forest preservation for heavily polluting companies and industries. With an adequate policy framework, REDD+ credits could be offered through existing compliance markets – such as the carbon credit markets in California or South Korea – unlocking billions in additional capital for reforestation efforts.

Developing such a framework would also enable REDD+ to become part of future compliance systems, like the one being developed by the global airline industry to cap emissions, or the carbon-permit market that China plans to launch later this year. Integration into these markets could also tap new funding streams for forest conservation and reforestation, as it would allow financial intermediaries, like the REDD+ Acceleration Fund, to connect REDD+ projects directly with the private sector.

At the moment, most of this is aspirational. REDD+ is merely a set of guidelines, and a forest credit market will require rules and standards to govern how protection and reforestation allowances are allocated to buyers and integrated into current markets. Global leaders gathering this week for the UN climate change conference in Bonn, Germany, can aid these efforts by continuing to support the development of effective and transparent accounting mechanisms for REDD+ projects.

There is danger in delay. In the two years since the Paris climate agreement was adopted, deforestation increased sharply in Indonesia and parts of the Amazon, where much of the world’s largest and most vital tropical forests stand. According to the Union of Concerned Scientists, tropical deforestation is responsible for three billion tons of additional atmospheric CO2 annually – more than the world’s entire transportation sector.

No technology is as effective at storing carbon as tropical forests, and saving and restoring them offers one of the cheapest large-scale forms of emissions abatement or capture, while providing a host of other environmental and social benefits. To take advantage of this crucial hedge against a warming planet, more trees must remain standing. For those of us who believe that a forest credit market could provide a critical means of protecting our planet, our Musk moment is here. We must be similarly bold.

Lorenzo Bernasconi is Senior Associate Director of The Rockefeller Foundation.

By Lorenzo Bernasconi

Putting a Price on Rainforests

NEW YORK – In early October, shortly after Hurricane Maria made landfall in Puerto Rico, Tesla CEO Elon Musk said on Twitter that his company could, given the opportunity, rebuild the island’s electrical grid using solar power. Coming in the midst of so much human suffering, it was a bold claim. But from a technological perspective, the timing was perfect. By late October, solar panels and high-capacity batteries had been installed at San Juan’s Hospital del Niño, and additional projects are in the works.


This type of response to a natural disaster – replacing a fossil-fuel-reliant power grid with renewable energy – should be applauded. But no matter how clean and efficient renewable energy sources may be, they will never fully mitigate the climatic effects that are bringing more hurricanes like Maria ashore.

There is another way to do that, and it is far cheaper than what Musk has proposed.

Puerto Rico is home to one of the most efficient and inexpensive tools available in the fight against climate change: rainforests. On the island’s eastern tip, the nearly 29,000-acre El Yunque National Forest is one of the Caribbean’s most important systems for capturing and storing carbon.

Maria destroyed the forest, too. But tech CEOs have not tweeted about restoring that resource, because, at the moment, they see no viable business model for saving trees.

But what if such a model did exist? What if there were ways to make tropical forests worth more alive than dead?

Global leaders have been pondering this question for years. And, at UN climate talks, they have come up with a novel solution: an initiative called Reducing Emissions from Deforestation and Forest Degradation (REDD+). The idea is simple: with the right incentives, people, governments, and industries will preserve and restore tropical forests, rather than plow them under. In return, the world gets more carbon sinks to soak up greenhouse gasses.

REDD+, which has been around in various forms for nearly a decade, provides a payment structure for preservation and restoration efforts. By putting an economic value on forests for the role they play in large-scale carbon capture and storage, REDD+ allows standing trees to compete with lucrative land uses – such as logging or agriculture – that result in deforestation.

The first large-scale REDD+ program, an agreement between Norway and Brazil, was initiated in 2008. Norway agreed to provide $1 billion in “performance-based payments” to Brazil for successfully protecting its rainforests. The money from Norway was released in installments, as Brazil conserved its forests. The results were impressive: Brazil reduced the average rate of Amazon deforestation by more than 60% over the past decade, absorbing about 3.6 billion tons of carbon dioxide, more than any other country. And Norway was able to help mitigate global carbon dioxide emissions.

But, despite the success of the pilot partnership, the REDD+ program today is in dire need of capital. In many ways, the solution is similar to Musk’s solar proposal in Puerto Rico. Only this time, the innovation is not technical, but financial.

Creating a market for REDD+ credits would create investment opportunities in tropical forest preservation for heavily polluting companies and industries. With an adequate policy framework, REDD+ credits could be offered through existing compliance markets – such as the carbon credit markets in California or South Korea – unlocking billions in additional capital for reforestation efforts.

Developing such a framework would also enable REDD+ to become part of future compliance systems, like the one being developed by the global airline industry to cap emissions, or the carbon-permit market that China plans to launch later this year. Integration into these markets could also tap new funding streams for forest conservation and reforestation, as it would allow financial intermediaries, like the REDD+ Acceleration Fund, to connect REDD+ projects directly with the private sector.

At the moment, most of this is aspirational. REDD+ is merely a set of guidelines, and a forest credit market will require rules and standards to govern how protection and reforestation allowances are allocated to buyers and integrated into current markets. Global leaders gathering this week for the UN climate change conference in Bonn, Germany, can aid these efforts by continuing to support the development of effective and transparent accounting mechanisms for REDD+ projects.

There is danger in delay. In the two years since the Paris climate agreement was adopted, deforestation has increased sharply in Indonesia and parts of the Amazon, where some of the world’s largest and most vital tropical forests stand. According to the Union of Concerned Scientists, tropical deforestation is responsible for three billion tons of additional atmospheric CO2 annually – more than the world’s entire transportation sector.

No technology is as effective at storing carbon as tropical forests, and saving and restoring them offers one of the cheapest large-scale forms of emissions abatement or capture, while providing a host of other environmental and social benefits. To take advantage of this crucial hedge against a warming planet, more trees must remain standing. For those of us who believe that a forest credit market could provide a critical means of protecting our planet, our Musk moment is here. We must be similarly bold.

Lorenzo Bernasconi is Senior Associate Director of The Rockefeller Foundation.


The Moral Identity of Homo Economicus

CAMBRIDGE – Why do people vote, if doing so is costly and highly unlikely to affect the outcome? Why do people go above and beyond the call of duty at their jobs?


Two recent books – Identity Economics by Nobel laureate George Akerlof and Rachel Kranton and The Moral Economy by Sam Bowles – indicate that a quiet revolution is challenging the foundations of the dismal science, promising radical changes in how we view many aspects of organizations, public policy, and even social life. As with the rise of behavioral economics (which already includes six Nobel laureates among its leaders), this revolution emanates from psychology. But while behavioral economics relies on cognitive psychology, this one is rooted in moral psychology.

As with most revolutions, this one is not happening because, as Thomas Huxley surmised, a beautiful old theory has been killed by ugly new facts. The ugly facts have been apparent for a while, but people cannot abandon one mental framework unless another one can take its place: in the end, beautiful old theories are killed only by newer, more powerful theories.

For a long time, economic theory aspired to the elegance of Euclidean geometry, where all true statements can be derived from five apparently incontrovertible axioms, such as the notion that there is only one line that connects two points in space. In the nineteenth century, mathematicians explored the consequences of relaxing one of those axioms and discovered the geometries of curved spaces, where an infinite number of longitudinal lines can pass through the poles of a sphere.

The axioms underpinning traditional economics embody a view of human behavior known as homo economicus: we choose among the available options that which we want or prefer the most. But what makes us want or prefer something?

Economics has long assumed that whatever informs our preferences is exogenous to the issue at hand: de gustibus non est disputandum, as George Stigler and Gary Becker argued. But with a few reasonable assumptions, such as the idea that more is better than less, you can make many predictions about how people will behave.

The behavioral economics revolution questioned the idea that we are good at making these judgments. In the process, its proponents subjected the assumptions underlying homo economicus to experimental tests and found them wanting. But this led at most to the idea of nudging people into better decisions, such as making better choices the default, so that people must opt out of them rather than into them.

The new revolution may have been triggered by an uncomfortable finding of the old one. Consider the so-called ultimatum game, in which a player is given a sum of money, say, $100. He must offer a share of that money to a second player. If the latter accepts the offer, both get to keep the money. If not, they both get nothing.

Homo economicus would give $1 to the second player, who should accept the offer, because $1 is better than nothing. But people throughout the world tend to reject offers below $30. Why?

The new revolution assumes that when we make choices, we do not merely consider which of the available options we like the most. We are also asking ourselves what we ought to do.

In fact, according to moral psychology, our moral sentiments, on which Adam Smith wrote his other famous book, evolved to regulate behavior. We are the most cooperative species on earth because our feelings evolved to sustain cooperation, to put “us” before “me.” These feelings include guilt, shame, outrage, empathy, sympathy, dread, disgust, and a whole cocktail of other sentiments. We reject offers in the ultimatum game because we feel they are unfair.

Akerlof and Kranton propose a simple addition to the conventional economic model of human behavior. Besides the standard selfish elements that define our preferences, they argue that people see themselves as members of “social categories” with which they identify. Each of these social categories – for example, being a Christian, a father, a mason, a neighbor, or a sportsman – has an associated norm or ideal. And, because people derive satisfaction from behaving in accordance with the ideal, they behave not just to acquire, but also to become.

Bowles shows that we have distinct frameworks for analyzing situations. In particular, giving people monetary incentives may work in market-like situations. But, as a now-famous study of Haifa daycare centers showed, imposing fines on people who picked up their kids late actually had the opposite effect: if a fine is like a price, people may find that it is a price worth paying.

But without the fine, coming late constitutes impolite, rude, or disrespectful behavior toward the caregivers, which self-respecting people would avoid, even without fines. Unfortunately, this other-regarding view of behavior has been de-emphasized both in the corporate and the public domain. Instead, strategies have been derived from the view that all our behaviors are selfish, with the intellectual challenge being to design “incentive-compatible” mechanisms or contracts, an effort that has also been recognized with Nobel Prizes.

But, as George Price showed long ago, Darwinian evolution may have made us altruistic, at least toward people we perceive as members of the group we call “us.” The new revolution in economics may find a place for strategies based on affecting ideals and identities, not just taxes and subsidies. In the process, we may understand that we vote because that is what citizens ought to do, and we excel at our jobs because we strive for respect and self-realization, not just a raise.

If successful, the new revolution may lead to strategies that make us more responsive to our better angels. Economics and our view of human behavior need not be dismal. It may even become inspirational.

Ricardo Hausmann, a former minister of planning of Venezuela and former Chief Economist of the Inter-American Development Bank, is Director of the Center for International Development at Harvard University and a professor of economics at the Harvard Kennedy School.

Keeping US Policymaking Honest

BERKELEY – In a recent appearance here at the University of California, Berkeley, Alice Rivlin expressed optimism about the future of economic policymaking in the United States. What Rivlin – who served as Vice Chair of the US Federal Reserve, Director of the White House Office of Management and Budget (OMB) under President Bill Clinton, and founding Director of the Congressional Budget Office (CBO) – thinks about that topic matters a great deal. Indeed, America owes its current system of “technocracy” – which ensures that policymaking follows sound analysis and empirical evidence – more to Rivlin than to any other living human.


When she was younger, however, Rivlin was denied admission to the graduate program at Harvard University’s Littauer Center of Public Administration. Her application was rejected, she was told, because of “unfortunate experiences” with previous admissions of “women of marriageable age.”

In those phrases, you can almost hear the New England Puritans’ unctuous sermonizing about the seduction of Eve by the serpent, and her subsequent temptation of Adam. Of course, when Rivlin helped found the CBO in 1974, she was essentially eating from the Tree of Knowledge, and she was making the rest of us eat from it, too. We are all better for it.

In her recent talk, Rivlin expressed confidence that, despite today’s populist attacks on expertise, high-quality policy analysis will continue to flourish in the twenty-first-century public sphere. And she predicted that empirical evidence and expert knowledge will still carry substantial – if not full – weight in decision-making by legislators, presidents, and their advisers.

To be sure, the CBO has never been more influential than it is this year. Its influence has been felt not merely because of its role in congressional proceedings, but also because it offers assessments that are widely respected across government, the media, and civil society. Its estimates of how congressional Republicans’ legislative proposals will affect the country are deeply informed, nonpartisan, and made in good faith. So far, at least, it seems that Rivlin is right to be optimistic.

Still, I have my doubts about the future. Rivlin believes that there is a general consensus within policymaking circles about basic economic principles, and that those principles will underpin the assessments, estimates, and models used in public-policy debates. She pointed out that no reputable economists today regard a simple monetary-policy rule as a magic bullet for avoiding depressions and inflationary spirals, whereas many once did.

That is true, as far as it goes. And yet, until the announcement that Jerome Powell had been selected as the next Fed Chair, Stanford University economist John Taylor was a leading contender. Taylor is known for having developed his own guideline (the “Taylor rule”) for how central banks should set interest rates. And he has long clung to this rule, despite a lack of evidence that it would have delivered better results than the Fed’s actual policy decisions since the 1970s.

Moreover, when US President Donald Trump appointed former American Enterprise Institute economist Kevin Hassett to lead the White House Council of Economic Advisers, many expected that Hassett would be a “normal” CEA chairman. Hassett, we were told, would safeguard the CEA’s credibility, by ensuring that its estimates remained in line with those of the larger policy-analysis community. And he would understand that agencies and organizations such as the CBO, OMB, Joint Committee on Taxation, Tax Policy Center (TPC), and Center on Budget and Policy Priorities have a principal allegiance to facts, not to some donor or political master.

Yet Hassett has so far spent his time at the CEA tearing down TPC estimates, even though the organization will undoubtedly issue assessments in the future that are as inconvenient for his political adversaries as they are for him today.

According to the near-consensus among policy analysts, the share of corporate taxes borne by labor, and the share of lost revenues from a cut in corporate income tax that will be recouped through increased investment, are both 25%. Yet the CEA, under Hassett, now assumes that both are 82%. That claim, as well as Hassett’s recent attacks on the TPC, made former US Treasury Secretary Larry Summers angrier than I can ever recall having seen him with respect to a public-policy issue. According to Summers, Hassett’s analysis is “some combination of dishonest, incompetent, and absurd.”

Benjamin Franklin famously told the American people that the US Constitution would provide them with “a republic, if you can keep it.” Over her long, distinguished career, Rivlin, along with others like her, has provided us with a rational policymaking process – if we can keep it.

J. Bradford DeLong, a former deputy assistant US Treasury secretary, is Professor of Economics at the University of California at Berkeley and a research associate at the National Bureau of Economic Research.


Climate Leadership Means Ending Fossil-Fuel Production

VANCOUVER/BERLIN – The end of the fossil-fuel era is on the horizon. With renewables like solar and wind consistently outperforming expectations, growth in electric vehicles far exceeding projections, and governments worldwide acknowledging the urgency of tackling climate change, the writing is on the wall.


And yet somehow, the question central to it all is not being seriously addressed: what is the plan for weaning ourselves off oil, coal, and gas?

That question is becoming increasingly urgent, because governments around the world, from Argentina to India to Norway, are supporting plans to continue producing fossil fuels and explore for more. These governments claim that new fossil-fuel projects are consistent with their commitments under the Paris climate agreement, despite the fact that burning even the fossil fuels in already-existing reserves would push global temperatures higher than 2°C above pre-industrial levels – and thus far beyond the threshold established in that accord. It is a startling display of cognitive dissonance.

The reality is that limiting fossil-fuel production today is essential to avoid continued entrenchment of energy infrastructure and political dynamics that will make shifting away from fossil fuels later more difficult and expensive. Important questions about equity will arise: Who gets to sell the last barrel of oil? Who pays for the transition to renewables? And who compensates affected communities and workers? But, ultimately, these questions must be addressed, within a broader context of climate justice.

Climate change has been called the moral challenge of our age. This year alone, the world has faced unprecedented floods, hurricanes, wildfires, and droughts on virtually every continent. Yet the real storm is yet to come. If we are to avoid its most devastating impacts, phasing out coal – climate killer number one – will not be enough. A safe climate future requires ending the age of Big Oil.

The good news is that social change is not a gradual, linear process. Rather, it often happens in waves, characterized by “tipping point” moments brought on by the confluence of technological progress, financial incentives, political leadership, policy change, and, most important, social mobilization. We seem to be closing in on just such a moment.

For starters, technology is advancing faster than anyone thought possible. Twenty years ago, when we started working on climate issues, we sent faxes, made phone calls from landlines, and developed photos taken on 35mm film in darkrooms. Another 20 years from now, we will be living in a world that is powered by the sun, the waves, and the wind.

Moreover, popular opposition to fossil-fuel development is mounting, generating political pressure and financial and legal risks. Ordinary people everywhere have been working hard to halt projects inconsistent with a climate-safe future, whether by protesting against the Dakota Access Pipeline in the United States or the Kinder Morgan Trans Mountain Pipeline System in Canada; by joining the blockade by “kayactivists” of drilling rigs in the Arctic; or by using local referenda to stop oil and mining projects in Colombia.

Recently, over 450 organizations from more than 70 countries signed the Lofoten Declaration, which explicitly calls for the managed decline of the fossil-fuel sector. The declaration demands leadership from those who can afford it, a just transition for those affected, and support for countries that face the most significant challenges.

Wealthy countries should lead the way. Norway, for example, is not just one of the world’s richest countries; it is also the seventh-largest exporter of carbon dioxide emissions, and it continues to permit exploration and development of new oil and gas fields. Proposed and prospective new projects could increase the amount of emissions Norway enables by 150%.

If Norway is to fulfill its proclaimed role as a leader in international climate discussions, its government must work actively to reduce production, while supporting affected workers and communities during the transition. Canada, another wealthy country that considers itself a climate leader yet continues to pursue new oil and gas projects, should do the same.

Some countries are already moving in the right direction. French President Emmanuel Macron has introduced a bill to phase out all oil and gas exploration and production in France and its overseas territories by 2040; the Scottish government has banned fracking altogether; and Costa Rica now produces the vast majority of its electricity without oil. But the real work is yet to come, with countries not only canceling plans for new fossil-fuel infrastructure, but also winding down existing systems.

A fossil-free economy can happen by design or by default. If we build it purposefully, we can address issues of equity and human rights, ensuring that the transition is fair and smooth, and that new energy infrastructure is ecologically sound and democratically controlled. If we allow it simply to happen on its own, many jurisdictions will be stuck with pipelines to nowhere, half-built mega-mines, and stranded assets that weaken the economy and contribute to political polarization and social unrest. There is only one sensible option.

Citizens around the world are championing a vision of a better future – a future in which communities, not corporations, manage their natural resources and ecosystems as commons, and people consume less, create less toxic plastic waste, and enjoy a generally healthier environment. It is up to our political leaders to deliver that vision. They should be working actively to engineer a just and smart shift to a future free of fossil fuels, not making that future harder and more expensive to achieve.

Tzeporah Berman, former Co-Director of Greenpeace International’s Climate Program and co-founder of ForestEthics, is a strategic adviser to a number of First Nations, environmental organizations, and philanthropic foundations and an adjunct professor at York University. She is the author of This Crazy Time: Living Our Environmental Challenge. Lili Fuhr heads the Ecology and Sustainable Development Department at the Heinrich Böll Foundation.


Learning from Martin Luther About Technological Disruption

GENEVA – Five hundred years ago this week, a little-known priest and university lecturer in theology did something unremarkable for his time: he nailed a petition to a door, demanding an academic debate on the Catholic Church’s practice of selling “indulgences” – promises that the buyer or a relative would spend less time in purgatory after they died.


Today, Martin Luther’s “95 Theses,” posted at the Castle Church in Wittenberg, Germany (he simultaneously sent a copy to his boss, Cardinal Albrecht von Brandenburg), are widely recognized as the spark that started the Protestant Reformation. Within a year, Luther had become one of Europe’s most famous people, and his ideas – which challenged not only Church practice and the Pope’s authority, but ultimately man’s relationship with God – had begun to reconfigure systems of power and identity in ways that are still felt today.

What made Luther’s actions so momentous? After all, calls for reforming the Church had been occurring regularly for centuries. As the historian Diarmaid MacCulloch writes in A History of Christianity: The First Three Thousand Years, the two centuries before Luther featured near-constant challenges to papal supremacy on issues of philosophy, theology, and politics. How did the concerns of a minor theologian in Saxony lead to widespread religious and political upheaval?

A central piece of the puzzle is the role of emerging technology. A few decades before Luther developed his argument, a German goldsmith named Johannes Gutenberg had invented a new system of movable-type printing, allowing the reproduction of the written word at greater speeds and lower costs than the laborious and less-durable woodblock approach.

The printing press was a revolutionary – and exponential – technology for the dissemination of ideas. In 1455, the “Gutenberg Bible” was printed at a rate of roughly 200 pages per day, significantly more than the 30 pages per day that a well-trained scribe could produce. By Luther’s time, the daily printing rate of a single press had increased to roughly 1,500 single-sided sheets. Improved printing efficiency, combined with steep declines in cost, led to a dramatic increase in access to the written word between 1450 and 1500, even though only an estimated 6% of the population was literate.

Luther quickly grasped the potential of the printing press to spread his message, effectively inventing new forms of publishing that were short, clear, and written in German, the language of the people. Perhaps Luther’s most enduring personal contribution came via his translation of the Bible from Greek and Hebrew into German. He was determined to “speak as men do in the marketplace,” and more than 100,000 copies of the “Luther Bible” were printed in Wittenberg over the following decades, compared to just 180 copies of the Latin Gutenberg Bible.

This new use of printing technology to produce short, punchy pamphlets in the vernacular transformed the industry itself. In the decade before Luther’s theses, Wittenberg printers published, on average, just eight books annually, all in Latin and aimed at local university audiences. But, according to the British historian Andrew Pettegree, between 1517 and Luther’s death in 1546, local publishers “turned out at least 2,721 works” – an average of “91 books per year,” representing some three million individual copies.

Pettegree calculates that a third of all books published during this period were written by Luther himself, and that the pace of publishing continued to increase after his death. Luther effectively published a piece of writing every two weeks – for 25 years.

The printing press greatly expanded the accessibility of the religious controversy that Luther helped fuel, galvanizing the revolt against the Church. Research by the economic historian Jared Rubin indicates that the mere presence of a printing press in a city before 1500 greatly increased the likelihood that the city would become Protestant by 1530. In other words, the closer you lived to a printing press, the more likely you were to change the way you viewed your relationship with the Church, the most powerful institution of the time, and with God.

There are at least two contemporary lessons to be drawn from this technological disruption. For starters, in the context of the modern era’s “Fourth Industrial Revolution” – which Klaus Schwab of the World Economic Forum defines as a fusion of technologies blending the physical, digital, and biological spheres – it is tempting to assess which technologies could be the next printing press. Those who stand to lose from them might even move to defend the status quo, as the Council of Trent did in 1546, when it banned the printing and sale of any Bible versions other than the official Latin Vulgate, without Church approval.

But perhaps the most enduring lesson of Luther’s call for a scholarly debate – and his use of technology to deliver his views – is that it failed. Instead of a series of public discussions about the Church’s evolving authority, the Protestant Reformation became a bitter battle played out via mass communication, splitting not just a religious institution but also an entire region. Worse, it became a means to justify centuries of atrocities, and triggered the Thirty Years’ War, the deadliest religious conflict in European history.

The question today is how we can ensure that new technologies support constructive debate. The world remains full of heresies that threaten our identities and cherished institutions; the difficulty is to view them not as ideas that must be violently suppressed, but as opportunities to understand where and how current institutions are excluding people or failing to deliver promised benefits.

Calls for more constructive engagement may sound facile, naive, or even morally precarious. But the alternative is not just the hardening of divisions and estrangement of communities; it is widespread dehumanization, a tendency that current technologies seem to encourage.

Today’s Fourth Industrial Revolution could be an opportunity to reform our relationship with technology, amplifying the best of human nature. To grasp it, however, societies will need a subtler understanding of the interplay of identity, power, and technology than they managed during Luther’s time.

Nicholas Davis is Head of Society and Innovation at the World Economic Forum.

