A fatal car accident involving a Tesla has renewed concerns regarding the safety of self-driving and autopilot cars. (Just to be clear – Tesla’s autopilot is meant to assist drivers, not to replace them.) Additionally, new research published in Science investigates people’s attitudes regarding “decisions” self-driving cars could make in emergency situations. Sounds like the perfect opportunity to talk about trolley dilemmas and artificial intelligence (AI).
There are two classic trolley dilemmas. They attempt to get at our moral considerations, so try to set aside your understanding of any legal implications.

In the first scenario, you are aboard an out-of-control trolley speeding downhill toward five oblivious people standing on the tracks. The brakes have failed and the trolley conductor is incapacitated. However, as a frequent passenger of the trolley, you know there is a side track and which lever to pull to make the switch. Unfortunately, there is one more oblivious person standing on that track. What should you do? Most folks take a utilitarian approach and argue that pulling the lever, at the unfortunate and unintended cost of one life, is the way to go.

In the second version, our five oblivious trolley targets remain, but there is no side track and you are not a passenger. Instead, you are an observer standing on a walkway overlooking the tracks. While there is no lever to pull, there is a rather large person standing close to the edge of the walkway right above the tracks. Understanding trolleys, mass, and force pretty well (and setting aside the fact that the walkway appears to be poorly designed), you know that you could push the large person onto the tracks and stop the trolley before it hits the five. Of course, there is no way the large person survives.

The utilitarian calculation is identical (five alive; one dead), but if you are like most people, pushing the large person feels significantly different from pulling the lever. Many people point to the intimacy of pushing a human to his end, versus pulling a lever, to justify acting in one case and not the other. From a moral perspective, this is a distinction without a difference, as it places your concerns and feelings at the center of the matter while the lives of six people hang in the balance. Shame on you, you selfish jerk! There are moral ways to justify the pulling and not the pushing. Most are post-hoc rationalizations that fall short of explaining the underlying moral dumbfounding brought out by these trolleys, but let us move on.
Self-driving cars are turning hypothetical trolleys into real-world coding decisions. For example, a self-driving car may have to choose between hitting a couple of absent-minded pedestrians wandering into traffic or swerving to the right and striking a cyclist. The Science study found most people favoring utilitarian outcomes (save as many people as possible) in these situations. More interestingly, self-driving cars add a complication to the trolley dilemma: the lives of the onboard passengers. Most respondents still favored outcomes based on potential lives saved, but involving themselves and/or family members made them less resolute. The coding complications do not end there.
- Should the number and/or age of the passengers be factored into the code?
- Should owners of self-driving cars be able to adjust a possible accident avoidance menu or should these be legal preconditions like wearing a seat belt?
- Should someone driving in autopilot mode (not fully self-driving) be able to assume full control in an emergency or should programming take control?
- Consider analogies to other forms of transportation. The passengers on Captain Sully’s plane had no say in how those fateful moments played out. Are passengers in a self-driving car like passengers on a plane?
- Who is legally responsible for the safety of the passengers in a self-driving car? The person in the “driver’s” seat? The owner of the car? The car maker?
- Will self-driving cars be allowed on the road without a driver?
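To see why these questions are genuinely hard to answer in code, consider a deliberately toy sketch of the utilitarian rule most respondents favored: given the available maneuvers, pick the one that minimizes lives at risk. Everything here (the class, its fields, the numbers) is a hypothetical illustration, not how any real autonomous-driving system works:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    # Hypothetical fields: who is put at risk by each possible action.
    name: str
    pedestrians_at_risk: int
    cyclists_at_risk: int
    passengers_at_risk: int

def lives_at_risk(m: Maneuver) -> int:
    # The simplest utilitarian calculus: every life counts equally,
    # whether inside or outside the car.
    return m.pedestrians_at_risk + m.cyclists_at_risk + m.passengers_at_risk

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Pick the maneuver that endangers the fewest people.
    return min(options, key=lives_at_risk)

options = [
    Maneuver("brake straight", pedestrians_at_risk=2, cyclists_at_risk=0, passengers_at_risk=0),
    Maneuver("swerve right", pedestrians_at_risk=0, cyclists_at_risk=1, passengers_at_risk=0),
    Maneuver("swerve into barrier", pedestrians_at_risk=0, cyclists_at_risk=0, passengers_at_risk=1),
]

print(choose_maneuver(options).name)
```

Notice that the questions above map directly onto choices buried in this sketch: weighting passengers the same as pedestrians is one line of code, and an owner-adjustable "accident avoidance menu" would simply change the coefficients in `lives_at_risk`. That a moral debate collapses into a handful of editable constants is exactly what makes the coding contested. Note too that "swerve right" and "swerve into barrier" tie at one life each, and the tiebreak here is just list order, which is no moral answer at all.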
Beyond self-driving cars, whether AI reduces the need for human workers or merely alters the work people do is open for debate. The effect of AI on work and personalized transportation speaks to concerns about control and how we maintain our humanity. Of course, AI is not the first advance to challenge the idea of human uniqueness. In 1543, Copernicus challenged humanity by yanking us out of the center and onto a nondescript edge of the known universe. In 1859, Darwin pulled us onto one of many branches of the tree of life. Several generations later, it seems that humanity has survived these and other challenges to our uniqueness. Might AI be something else entirely? Copernicus and Darwin present like traditional trolley dilemmas: interesting ideas worth discussing, to be sure, but whether we pull a hypothetical lever or decide that evolution is true, our lives are largely unaffected. On the other hand, the highways and byways of AI force humanity into the passenger seat. We will have to decide where, in all of this code, to place our humanity.