Moral machines: Stop discussing thought experiments

by Isabel Schünemann

Autonomous machines like robots and self-driving cars should serve human beings. But what should they do in situations where they can’t serve everyone? To find an answer to that question, we should stop discussing moral thought experiments. Instead, we need to start collecting data and creating dialogue.

A self-driving car faces an unavoidable crash. It has only two options: it can either drive straight and kill an innocent pedestrian, or swerve and crash into a wall, killing its passenger. What should it do? If you haven’t come across a similar story before and are unsure how to respond, don’t worry. There is no straightforward answer to what the car should do. The dilemma is inspired by the trolley problem, a famous philosophical conundrum, and it is probably the most discussed scenario depicting the challenges of developing autonomous machines today.

With the rapid advances in artificial intelligence, self-driving vehicles, robots and other intelligent machines will soon frequently face moral choices.[1] There are good reasons, though, why dilemmas like the one described above shouldn’t stop us from developing autonomous machines. Self-driving cars are expected to be much safer than human drivers and, overall, the scenario, although realistic, seems to be extremely rare. But what we can learn from discussing these extreme cases is what the automation of moral choices will confront us with, how we should approach developing solutions, and how we shouldn’t.

Turning an intuitive reaction into intent

The trolley problem represents so-called distributive moral dilemmas: situations characterized by the question of how an actor should decide when immoral behavior is to some extent inevitable. While much of the debate on the ethics of algorithms has focused on eliminating biases and discrimination, and thus on ‘getting right’ what we already know we want to achieve, we don’t yet know the right solutions for distributive moral dilemmas. In these moments, human moral judgement makes considerable allowance for human nature, a comfort we don’t extend to machines. A human driver confronted with the trolley problem is not expected to make a well-reasoned decision. A driver who didn’t speed or take any other illegal action to cause the situation will face no condemnation for an intuitive reaction in that moment. Machines, however, make calculated decisions in a split second. And their development demands directives on the outcomes of moral decisions upfront, effectively turning an intuitive human reaction into deliberate intent. Because we have never been confronted with intentional immoral behavior by humans in these dilemmas, we have never established desirable outcomes for machines.

Moral theories are not enough

It’s not as if humans haven’t thought about what would be right and wrong in these situations. Moral philosophers have given distributive dilemmas like the trolley problem a great deal of thought. And over the last two decades, machine ethicists have most often turned to these discussions in their quest to create pathways for the development of moral machines.[2] But moral philosophy never produced unanimously agreed-upon answers to these dilemmas. Hypothetical events like the trolley problem were initially designed only as thought experiments to discuss different approaches to morality, not as real-life challenges to be solved. Should we maximize net social benefit, as utilitarian moral theory preaches, and steer the trolley in the direction that saves as many lives as possible? Or should we take a deontological approach and avoid any proactive action that would lead to harming someone?
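Neither theory can simply be dropped into software, but a toy sketch makes the contrast concrete. The following is a deliberately crude, hypothetical illustration in Python; the option names and fields are invented here, and no real system would reduce ethics to a rule like this.

```python
# Hypothetical, highly simplified sketch of the two decision rules named above,
# applied to a two-option dilemma. The data and field names are invented for
# illustration only.

def utilitarian_choice(options):
    """Pick the option that minimizes the total number of deaths."""
    return min(options, key=lambda o: o["deaths"])

def deontological_choice(options):
    """Refuse any proactive intervention, even if acting would save more lives."""
    return next(o for o in options if not o["proactive"])

trolley = [
    {"name": "stay on course", "deaths": 5, "proactive": False},
    {"name": "divert onto side track", "deaths": 1, "proactive": True},
]

print(utilitarian_choice(trolley)["name"])    # divert onto side track
print(deontological_choice(trolley)["name"])  # stay on course
```

The point of the sketch is only that the same facts yield opposite choices depending on which theory the upfront directive encodes.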

The lack of consensus on the right choice in a moral dilemma is largely due to humans being notoriously inconsistent in their moral judgement. For example, many people generally agree with the idea of utilitarianism, that is, to maximize overall social benefit. In the trolley problem they would want to save as many lives as possible, even if that means taking action and steering the trolley away from a larger group on the rails and toward a smaller group of bystanders.[3] But taking action and, for example, killing a healthy person in a hospital to donate his or her organs to save ten sick people goes against most people’s intuition, even though it would save as many lives as possible. The practical implications of human moral judgement are thus limited: studies found that most people explicitly want an autonomous vehicle in an unavoidable crash to save as many lives as possible, even if that means sacrificing its own occupant. However, the same people also said that they wouldn’t want to buy a self-driving car that may eventually sacrifice them to save others.[4] It is because of these contradictions that no general consensus on moral principles ever evolved.[5] Humans simply don’t adhere consistently to moral theories. Thus, the chances of deriving general guidance from human moral behavior to develop machines capable of dealing with dilemmas are slim.

The need for more data and dialogue

But even if it were possible to find out what would theoretically be the ‘right’ decision in a moral dilemma, it wouldn’t necessarily bring us closer to the development of moral machines. Suppose we eventually followed utilitarianism: how would we define social benefit and identify the actions that maximize it? Whose interests, well-being and lives would have which value? And what impact would these choices have on how we interact with moral machines?

Researchers at the MIT Media Lab recently used crowdsourcing to gather data on some of these questions. Millions of individuals from 233 countries and territories gave over 40 million answers to various scenarios of the trolley problem on an online platform.[6] The scenarios allowed choices on nine different attributes of the potential casualties, such as gender, social status, fitness, or the overall number of lives lost. One of the three preferences that received considerably stronger approval than the rest was sparing younger over older humans. But what would public life look like if we followed this result? If a self-driving car is programmed to spare the young rather than the elderly in an unavoidable crash, senior citizens would probably withdraw from traffic altogether.
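To make the idea of aggregating such crowdsourced judgements concrete, here is a minimal, hypothetical Python sketch that tallies which character respondents chose to spare in scenarios varying a single attribute. The attribute names and toy data are invented, and this simple counting stands in for the study’s far more careful statistical analysis.

```python
# Minimal, hypothetical sketch of aggregating crowdsourced dilemma responses.
# Attributes, levels, and data below are invented for illustration; this
# simple tally is not the methodology used in the actual study.
from collections import defaultdict

# Each response records a scenario that varied one attribute (e.g. age)
# and which of the two character groups the respondent chose to spare.
responses = [
    {"attribute": "age", "spared": "young", "sacrificed": "old"},
    {"attribute": "age", "spared": "young", "sacrificed": "old"},
    {"attribute": "age", "spared": "old", "sacrificed": "young"},
    {"attribute": "fitness", "spared": "fit", "sacrificed": "unfit"},
]

def spare_rates(responses):
    """For each attribute, return the share of scenarios in which each level was spared."""
    counts = defaultdict(lambda: defaultdict(int))
    for r in responses:
        counts[r["attribute"]][r["spared"]] += 1
    return {
        attribute: {level: n / sum(levels.values())
                    for level, n in levels.items()}
        for attribute, levels in counts.items()
    }

print(spare_rates(responses))
# -> {'age': {'young': 0.666..., 'old': 0.333...}, 'fitness': {'fit': 1.0}}
```

Even such a crude tally shows why the hard question is not computing the aggregate preference but deciding what, if anything, it should license a machine to do.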

The example illustrates that the true challenge of developing moral machines isn’t finding the right answer to a single moral dilemma. It is anticipating and managing the systemic impact of automating moral choices: imagine the incentives we would create if self-driving cars treated cyclists without a helmet with more caution than those who protect themselves. Or what traffic would look like if everyone knew that self-driving cars invariably stop for pedestrians on the road. Who would still wear a helmet on a bike, or even care about jaywalking anymore?

Thought experiments cannot determine how humans will react to the widespread presence of autonomous machines in our everyday lives. To go ahead with the development of truly autonomous systems, we thus need to invest more time and effort into analyzing the impact of their choices. For this, we especially need studies and simulations that explore how humans would change their own behavior when interacting with these machines. And we need to establish processes of public participation to develop a common, desirable future with them. Instead of continuing the old debate about moral machine behavior alone, we should look forward and discuss a desirable future in which humans and moral machines coexist.

  • [1] Kraemer, F., van Overveld, K., & Peterson, M. (2011). Is there an ethics of algorithms? Ethics and Information Technology, 13(3), 251–260.

  • [2] Lin, P. (2017). Autopia: The Robot Car of Tomorrow May Just Be Programmed to Hit You. Free Inquiry, 37(3), 40; Millar, J. (2016). An Ethics Evaluation Tool for Automating Ethical Decision-Making in Robots and Self-Driving Cars. Applied Artificial Intelligence, 30(8), 787–809.

  • [3] Awad, E., et al. (2018). The Moral Machine Experiment. Nature, 563(7729), 59–64.

  • [4] Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576.

  • [5] Kymlicka, W. (1993). Moral Philosophy and Public Policy: The Case of NRTs. Bioethics, 7(1), 1–26.

  • [6] Awad, E., et al. (2018). The Moral Machine Experiment. Nature, 563(7729), 59–64.

Isabel Schünemann

Isabel Schünemann is a tech lover and innovation idealist. She is a McCloy-Fellow at Harvard University and the Harvard Kennedy School and a research assistant at the Berkman Klein Center for Internet & Society.
