AI and the Trolley Problem

One of the delights of SF is that it often raises interesting philosophical questions; as Adam discovered, this is even true of the short story form…

In the Trolley Problem it is proposed that you are standing by a lever next to tram tracks. The lever controls a switch through which you can change the path of the tram (if this were set in Adelaide it would be less of a dilemma, as here it is impossible for a tram to turn right). You see a tram coming towards you and realise that if it continues along its current path five people will be killed. You can save them, but only by pulling the lever and sending the tram down the alternative track. However, there’s another person on that route as well, so if you act to save the five people by pulling the lever, you will in turn kill a person who would not otherwise have died. Should you pull the lever?

Pat Cadigan’s aptly named short story, AI and the Trolley Problem, explores this dilemma through an AI (Felipe) who chooses to pull the lever and a philosopher (Helen Matthias) who is trying to understand why. It is an enjoyable exploration, but just as the original Trolley Problem was intended to generate discourse rather than point people towards a solution, Cadigan’s story left me more interested in the nature of her exploration than in the story in which it was framed.

To give some background to the issue, one of the biggest disputes in philosophical ethics is that between the deontologists and the utilitarians. Utilitarians represent the “end justifies the means” camp – the best action in any given situation is the one that brings about the most good. A true utilitarian wouldn’t care about what you do, as in this framework actions are not right or wrong in themselves, and equally they wouldn’t particularly care about your reasons. What matters is the result. Kill one to save 500? Of course. Steal a car to save a life? Naturally. Deontologists, on the other hand, care about the action itself. Perhaps deontology is best represented through the laws of a legal system or (if you are thus inclined) religious commandments. If it is wrong to kill, it is always wrong to kill. If it is wrong to steal, then stealing is out, irrespective of what happens as a result.

The Trolley Problem can be seen from different perspectives in light of this distinction. If we tackle it from a deontological perspective, the question is not whether it is right to kill one person to save five, as that is a results-based concern. Instead there is a different set of issues, and two in particular stand out. The first is about choosing to act vs choosing not to act. Are both choices equally culpable? If you choose to kill one person in order to save five, you have clearly made a choice. But are you also making a choice if you do not act? Is it more ethically wrong to choose to kill one person than to allow five to die through inaction?

The second concern is what is called the Principle of Double Effect. Sometimes an action that we perform for the right reasons can, as a side effect, lead to negative consequences that we did not desire. From a deontological perspective, however, this does not necessarily mean that the action was wrong. We see this applied in bioethics, and in particular in euthanasia – if you give someone high doses of barbiturates to reduce their pain, you may, as a side effect, shorten their life. This does not mean that you were trying to bring about their death, as their death may not have been the intended outcome of the action you performed. (Indeed, the principle goes all the way back to Thomas Aquinas in the 13th century CE, who argued that it was wrong to kill, but that it was acceptable to kill an assailant to save your own life if their death was a side effect, rather than the intended outcome, of your self-defense.) With the Trolley Problem it can be argued that pulling the lever to save five is a good thing, as the intention was to save five lives. The fact that someone then died is a side effect of your action, not the intended result: after all, it wasn’t pulling the lever that killed the person on the tracks, it was the tram.

If we instead take a utilitarian perspective, the Trolley Problem faces different issues. In its current form there is unlikely to be a problem, as it should be relatively easy to weigh up the two possible outcomes: one life vs five. All else being equal, it might seem that it is always better to save five lives than to save one. Where we get a dilemma is when we extend this to other scenarios, such as those proposed by the moral philosopher Judith Jarvis Thomson. For example, what if you are not standing by a switch, but are instead on a bridge overlooking the coming disaster? The only way to stop the tram is to push the person next to you onto the tracks, and this will cause the tram to stop in time to save the other five. While the result is the same (one life vs five), many of us would have a lot more trouble accepting this second scenario than we would the first. A third scenario is to kill one healthy person in order to transplant their organs into five sick people, and this is seen as even less acceptable than the second. Intuitively many of us feel that these scenarios are different: while we might agree to pull the lever in the first situation, we are less likely to push someone off the bridge or to sacrifice one healthy person in order to take their organs. Which in turn suggests that focusing only on the outcome may not match our intuitive beliefs about what is right or wrong.

In AI and the Trolley Problem, Cadigan reformulates the problem around the use of a military drone. A drone has been sent by the US military to target terrorists, but in doing so there is a high possibility that it will kill non-combatants. The AI in Cadigan’s story decides that fewer lives would be lost if the drone were re-targeted to hit its US operators. Thus it has a Trolley Problem-like choice: allow the drone to continue down its path and potentially kill many people, or turn it around to target the operators, sacrificing their lives to save a greater number. The AI chooses to target the operators, and Cadigan’s philosopher is tasked with trying to understand the AI’s reasoning and whether or not it would do so again.

In the discussion between the AI and the philosopher, one aspect that I found intriguing was that the deontological approach was never used. Instead we have an application of utilitarianism. The Principle of Double Effect could have been applied, but that possibility is knocked out quickly when the AI states that it intended to kill the operators in order to prevent a replacement drone being sent: “If I had had access to that drone, I could have rendered it unusable, but then the authorities would have found another. The only choice was to keep the train from leaving the station at all.” Alternatively, the philosopher and the AI could have tackled this by asking whether or not intervening to kill someone can ever be right, but they don’t.

Instead the problem is couched in terms of utilitarian equations: “The deaths were unfortunate, but there were fewer casualties than there would have been if the drone had achieved its target and completed its mission.” It is a valid point, but as is almost always the case when we try to apply utilitarian approaches to the complexity of real-world problems, the equation is never that simple. The first issue is the distinction between certainties and possibilities. In the Trolley Problem, we know with absolute certainty that either five people will die or one person will die. In Cadigan’s situation, the AI believes that there was a “ninety-percent possibility that at least a dozen noncombatants would be seriously injured or killed, and many more would suffer extreme adversity”. It is certainly a significant concern, but once we start weighing possibilities against certainties, the equation becomes much harder to work out.
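
To make the arithmetic concrete, here is a toy expected-value comparison in Python. The numbers and the expected_deaths helper are purely illustrative assumptions of mine – they come neither from the story nor from any real system – and they assume every life is weighted equally, which is itself a contestable utilitarian premise.

    # A toy sketch of the "utilitarian equation": weighing a (near) certain,
    # small number of deaths against a probable, larger number.
    # All figures are invented for illustration only.

    def expected_deaths(outcomes):
        """Expected number of deaths, given (probability, deaths) pairs."""
        return sum(p * deaths for p, deaths in outcomes)

    # Option A: retarget the drone at its operators -- deaths treated as certain.
    retarget = expected_deaths([(1.0, 3)])            # e.g. 3 operators

    # Option B: let the strike proceed -- deaths probable rather than certain.
    proceed = expected_deaths([(0.9, 12), (0.1, 0)])  # "ninety-percent" chance of ~12

    print(f"retarget: {retarget}, proceed: {proceed}")  # retarget: 3.0, proceed: 10.8

The arithmetic favours retargeting, but only if you trust the probability and casualty estimates, and only if the lives on both sides count the same.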

But it is a second issue that Cadigan focuses on. Utilitarian equations are difficult because it is almost impossible to take into account all of the possible factors. As the philosopher in the story tells the AI, “If the terrorists aren’t stopped—and it looks like they won’t be—they’ll be responsible for a much greater loss of life. The physical and psychological harm will be even more considerable.” Actions can have complex consequences, but without some wondrous ability to see into the future, how are we going to evaluate this? The philosopher might be right, and those terrorists will kill in the future, but how can we be sure? This is one of the biggest problems with basic utilitarianism: it requires that we predict the future.

The equation is also where Cadigan departs from the original formulation of the Trolley Problem, as her version requires us to evaluate lives of (seemingly) unequal worth. In the Trolley Problem as presented by Philippa Foot, both the five who are going to die if you do not act and the person who will die if you do are innocents. As far as we know they have committed no crimes, and none of them is more or less deserving of life than the others. There is a notable version where this changes, though. In “the fat villain” scenario, you can save five lives by pushing one evil villain onto the track. In Cadigan’s account, the AI chooses to kill soldiers—specifically those who sent and are controlling the drone—rather than true “innocents”. And those at the other end may include terrorists, but Cadigan isn’t specifically defending their lives. Instead the AI discusses the non-combatants (presumably the true innocents in the equation) as the real cost of the operation. Cadigan is asking whether it is permissible to kill the people controlling the drone, who intend to kill others, in order to save the lives of innocent non-combatants while also taking the lives of terrorists. It is a much more complex and value-laden equation than the one proposed by Foot.

Which makes me curious: how would the AI in the story have acted if it didn’t have the option of killing the soldiers? Suppose instead that the AI could only take control of the drone at the end of its flight, and that the only way to save the non-combatants and the terrorists was to send the drone into a bus, killing six innocent people to save 12 plus the terrorists. Or, to make it more emotionally confronting, make it a school bus: five children and one bus driver in return for the terrorists and 12 non-combatants? The AI’s decision would tell us a lot about how it thinks.

Cadigan’s story raises some interesting questions about how an AI could handle ethics, mixing the rules-based approach that we associate with programming with the sort of mathematical utilitarian equations that we can imagine a computer making. It is a highly relevant problem today, as we are facing exactly this issue with driverless cars: if a car needs to make a decision about protecting its passengers or other people, which should it do, and how should it make this choice? It isn’t just an intellectual exercise, but a highly practical one. Nevertheless, now it is time for a confession. When I read the story I was taken by how the Trolley Problem was explored, but as Roman has subsequently pointed out to me, I missed the point – Pat Cadigan was really trying to make us think about the morality of drone combat. Sadly, I think it says a lot about me that I probably missed the main point in order to focus on the questions it raised, but I guess I spent way too much time studying philosophy, and nowhere near enough time engaging with real life.
