Who’s more likely to throw you in front of a runaway trolley in order to save a bunch of people’s lives — someone from America or someone from China?

That might sound like a bizarre question, but psychologists and philosophers are interested in it because it helps us get at an underlying question: To what extent does our cultural context shape our morality?

We now have a ton of new data on this, thanks to a cross-cultural study published this week in Proceedings of the National Academy of Sciences. By getting 70,000 participants in 42 countries to respond to sacrificial moral dilemmas — the largest study of this kind to date — an international team of psychologists was able to show how culture influences moral decision-making.

Participants were presented with multiple versions of a classic dilemma known as the trolley problem, which asks: Should I make the active choice to divert a runaway trolley onto a side track, where it will kill one person, if doing so saves the five people on the main track from getting killed?

The study found that participants from Eastern countries like China or Japan were less inclined to support sacrificing someone in trolley problems than participants from Western countries like the United States.

Naturally, the next question is: What’s driving this cross-cultural difference in moral preferences? Does it have to do with each country’s religiosity? Its emphasis on individualism? Its gross domestic product?

The authors suggest a different variable is doing most of the work here: relational mobility, or the ease with which people in a given society can develop new relationships. The study found that relational mobility was a strong predictor of the tendency to support sacrificing one person, even after controlling for religiosity, individualism, and GDP.
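To make the "controlling for" step concrete, here's a minimal sketch of the kind of country-level regression that can separate relational mobility from the other candidate explanations. The column names, synthetic data, and use of ordinary least squares are assumptions for illustration only; the paper's actual analysis is more involved.

```python
# Hedged sketch: a country-level regression illustrating what "controlling for"
# means. Column names and data are hypothetical, not the study's actual dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries = 42

# Fabricated country-level predictors (roughly standardized for comparability).
df = pd.DataFrame({
    "relational_mobility": rng.normal(size=n_countries),
    "religiosity": rng.normal(size=n_countries),
    "individualism": rng.normal(size=n_countries),
    "log_gdp": rng.normal(size=n_countries),
})

# Simulate an outcome in which relational mobility is the main driver.
df["sacrifice_endorsement"] = (
    0.6 * df["relational_mobility"]
    + 0.1 * df["individualism"]
    + rng.normal(scale=0.3, size=n_countries)
)

# Regressing on all predictors at once: the coefficient on relational_mobility
# is its association with endorsement while holding the other variables constant.
model = smf.ols(
    "sacrifice_endorsement ~ relational_mobility + religiosity"
    " + individualism + log_gdp",
    data=df,
).fit()
print(model.summary())
```

In a setup like this, a large, significant coefficient on relational mobility alongside small coefficients on religiosity, individualism, and GDP is the pattern the authors report: the other variables don't explain away the effect.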

If you live in a society with high relational mobility, like the US, you’ve got lots of options for finding new friends, so it’s not such a big deal if your current friends ditch you. But if you live somewhere with low relational mobility, you have fewer chances to develop new friends, so you’re going to be extra careful to avoid alienating your current ones.

“People in low relational mobility societies may be less likely to express and even hold attitudes that send a negative social signal. Endorsing sacrifice in the trolley problem is just such an attitude,” the study says, adding that the pressure of living in these societies might make certain ideas “morally unthinkable.”

The study shows that our beliefs about what’s moral are, at least to some degree, products of our cultural context. But, intriguingly, the study also shows that there are some universals in human morality.

“This is something philosophers have disagreed on, with some saying ethics are universal and some saying it’s subjective,” co-author Edmond Awad of the University of Exeter told me. “It turns out there’s evidence to support both views.”

Using trolley problems to find out what all cultures agree on — and where they diverge

We often talk about the trolley problem as if it’s one thing, but there are actually multiple versions of the thought experiment. The researchers tested three versions — dubbed Switch, Loop, and Footbridge — which helped them to identify both cultural universals and variations in moral decisions.

In the Switch version, a trolley is about to kill five workers but can be redirected to a different track where it’ll kill only one worker.

In the Loop version, the trolley can be redirected to a side track that later rejoins the main track. On the side track, it will kill one worker whose body will stop the trolley before it can kill the five on the main track.

In the Footbridge version, a large man can be pushed in front of the trolley. He’ll die, but his body will stop the trolley from killing the five workers on the track.

[Figure: Participants made decisions in three scenario variants: Switch, Loop, and Footbridge. Source: PNAS]

It turns out that people across the board, regardless of their cultural context, give the same response when they’re asked to rank the moral acceptability of acting in each case. They say Switch is most acceptable, then Loop, then Footbridge.

That’s probably because in Switch, the death of the worker is an unfortunate side effect of the action that saves the five, whereas in Footbridge, the death of the large man is not a side effect but a means to an end — and it requires the use of personal force against him.

Regardless of the exact reasoning behind it, it seems this ranking pattern is a cultural universal in moral psychology (although it’s possible some not-yet-studied culture might turn out to have a different view). According to the study, this suggests we can chalk it up to “basic cognitive processes.”

But where cultures do show variation is in how strongly they endorse or reject each sacrifice. You can believe it’s more moral to act in a Switch scenario than in a Footbridge scenario, but still be very against acting even in Switch, as participants in China and Japan demonstrate.

Religious norms there may be playing a role. “Trolley problems result from trying to apply abstract rules to practical reasoning and require us to distance ourselves from all the potential victims,” said Philip Ivanhoe, director of the Sungkyun Institute for Confucian Studies and East Asian Philosophy, who was not involved in the study. “Both Buddhism and Confucianism take kindness or compassion as primary virtues, and no matter what one does in a trolley problem, one cannot be kind.”

But the authors of the study suggest that low relational mobility may be playing a greater role, as it causes people to “experience greater pressure against holding opinions that mark them as untrustworthy.” They cite the findings of another psychologist, Molly Crockett at Yale University, who has shown that we’re much more inclined to trust — and therefore want to befriend, date, or marry — people who reject sacrifices for the greater good.

“When it comes to sacrificial dilemmas,” Crockett told me, “we trust people a lot more if they say it’s not okay to sacrifice one person to save many others.”

Crockett has demonstrated this to be true in Western societies with high relational mobility; the study’s authors say they are extending her work by testing whether the effect also holds in Eastern societies with low relational mobility.

The study has important limitations, but also important implications

Despite its impressively large dataset, this study has a number of significant limitations. The participants were all volunteers in an online experiment: MIT’s Moral Machine website, which was initially designed to collect responses on the moral acceptability of decisions made by self-driving cars, but which also offered a “classic mode” allowing researchers to collect other sorts of responses.

“Our sample is skewed in terms of age, gender, and education: We estimate that a third of our participants were young, college-educated men,” the authors note. They then acknowledge another problem: “We focused our analysis on relational mobility because of its theoretical interest, but one limitation of this strategy is that relational mobility has not been estimated yet in all of the countries represented in our dataset.” (Although we know from questionnaires how much relational mobility people feel they have in a given country, in some countries not enough people have filled out the questionnaires to ensure the data is robust.)

On the plus side, the researchers are making their huge dataset public, so others will be able to use it and add to it.

In the meantime, the study has important implications for how we understand our moral decisions. They don’t arise out of some universal, ahistorical, hermetically sealed realm of pure reason; rather, they’re shaped by cultural norms.

The study may also have implications for how we program machines to make decisions in the age of artificial intelligence. Take self-driving cars, for example. “Sacrificial dilemmas provide a useful tool to study and understand how the public wants driverless cars to distribute unavoidable risk on the road,” Awad said.

Should policymakers take into account how moral preferences differ across countries when regulating future programming? Will we want different rules for machines in different countries? These are still very much open questions.

