Morality of Autonomous Vehicles

John Stuart Mill and Immanuel Kant on Automobile Ethics

Geon Woo Lee
Jul 1, 2021

Despite hopes of an improved safety record, automobile collisions remain a perpetual problem, and autonomous vehicles are no exception. They are still moving objects posing intrinsic physical risk to themselves and their surroundings; that is simply a matter of physics.[1]

Some may suggest that autonomous vehicles can simply stop or alert the driver to take control of the car, thereby eliminating collisions. However, braking and relinquishing control will not be enough.[2] The car might not have enough distance to come to a complete stop before the collision. Moreover, the ultimate goal of autonomous vehicles should be to free humans from the burden of driving, and even of monitoring the car. Humans should be able to work or rest, including sleeping, while traveling inside an autonomous vehicle.

In addition to the intrinsic risk of moving objects, no technology has ever proved to be perfect. Thus, autonomous vehicles are vulnerable to hardware failures, software bugs, perceptual errors, and reasoning errors.[3] Hardware failures, such as a brake system failure, may result from a lack of maintenance or faulty equipment. These problems usually develop gradually, depending on the equipment’s lifespan, and are somewhat predictable; they are therefore easier to identify and fix. Software bugs may result from hacking or discrepancies in the code; these problems are often unpredictable and could be disastrous.[4] Perceptual errors occur when the vehicle’s sensory system misclassifies an object and misunderstands the surrounding environment. In fact, Google’s self-driving technology (later Waymo) failed in 2014 to detect a small squirrel.[5] Presumably, the sensory system may also miss squirrel-sized objects like potholes, rocks, or other hazards, leading to an incomplete understanding of the environment. Even if the sensory system identifies an object correctly, the car still needs to make a judgment, which may lead to a reasoning error. For example, if an autonomous vehicle identifies a pedestrian on the sidewalk, the car needs to judge whether the person is about to step onto the road. A mistake in determining the pedestrian’s intent may lead to a fatal crash. For all of these reasons, autonomous vehicles, no matter the level of technology, will still cause accidents, injuries, and deaths.

Fatalities caused by human drivers and autonomous cars have one important distinction. Given the same precarious situation, human drivers have the benefit of the doubt that their actions are instinctive reactions rather than programmed decisions.[6] If a conscious human driver detects unexpected danger and has to decide whether to swerve left, hitting a large SUV, or swerve right, hitting a motorcyclist, whatever the driver decides is understood as an inadvertent, panicked reaction to avoid immediate danger. However, should an autonomous vehicle be in the same situation, the decision to swerve left or right is a programmed one. It may not swerve at all, endangering its own passengers. If an autonomous vehicle causes deaths (or injuries) to any of the people involved, the situation looks like premeditated homicide, decided years, not seconds, beforehand by the programmers.

That is an ethical decision. The programmed decisions of autonomous vehicles in life-threatening situations determine the victim. The decision to swerve left, endangering the passengers of a relatively safe SUV, or to swerve right, most likely killing the motorcyclist, or to not swerve at all, endangering the car’s own occupants, is based on the car’s code. Programmers writing that code should be aware that their work reflects an ethical framework. Although reducing human morality to algorithms is a notoriously difficult process, these discussions need to be held as early as possible in the technology’s development in order to forge a way forward.[7]

Ethical Frameworks

Ethical discussion about autonomous vehicles is relatively new. Patrick Lin, a philosophy professor at California Polytechnic State University, has been an early advocate, writing about the topic as early as 2013.[8] However, industry experts did not begin to acknowledge the necessity of moral discussions until September 2016. The “Autonomous Vehicles and Ethics” workshop, featuring a keynote speech by Lin, brought researchers and engineers, most of whom had known each other since the DARPA challenges, to Stanford and prompted debates about the issue. Though three years have passed since the workshop, much remains largely unchanged, and consensus seems impossible. To guide the discussions, experts have unearthed principles from classical philosophy to justify their reasoning.

John Stuart Mill’s utilitarianism and Immanuel Kant’s deontology are particularly useful. Mill and Kant each offer a constructive way to think about machine ethics, along with fallacies that result from their respective frameworks.

John Stuart Mill’s Utilitarianism

As Mill writes in Utilitarianism in 1861, “Actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.”[9] In the simplest terms, the right action should maximize utility, defined as happiness, to the human being. However, one can easily argue happiness is a relative term, so Mill explains further, “Of two pleasures, if there be one which all or almost all who have experience of both give a decided preference, irrespective of any feeling of moral obligation to prefer it, that is the more desirable pleasure.”[10] So, if one is deciding between two choices, the correct choice should generate more utility than the other. Mill extrapolates his framework to a larger, societal scale and writes that the utilitarian end should desire “a good to the aggregate of all persons.”[11] In short, the goal of Mill’s utilitarianism is to maximize utility (happiness) to the greatest number of people.

In the context of autonomous vehicles, utilitarianism is remarkably quantifiable: minimize the number of fatalities and the severity of injuries in an accident. The quantifiable nature of this framework, calculating the expected casualties of a situation, has proven exceptionally popular. In 2016, when a survey asked how an autonomous vehicle should behave in a dangerous situation, participants expressed a strong preference for cars that minimize the number of casualties, even at the expense of the passenger’s safety (Fig. 1).[12]
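To make this concrete, here is a minimal sketch of what a utilitarian decision rule could look like in code. It is purely illustrative: the maneuver names, harm probabilities, and the weighting of fatalities against injuries are assumptions invented for this example, not values from any real system or from the literature cited here.

```python
# A minimal, hypothetical sketch of a utilitarian decision rule.
# All maneuvers, probabilities, and weights below are illustrative assumptions.

FATALITY_WEIGHT = 10.0  # assumed: one expected fatality weighs as much as ten injuries
INJURY_WEIGHT = 1.0

def expected_harm(outcome):
    """Score an outcome by expected casualties, weighting fatalities over injuries."""
    return (FATALITY_WEIGHT * outcome["expected_fatalities"]
            + INJURY_WEIGHT * outcome["expected_injuries"])

def choose_maneuver(candidates):
    """Pick the maneuver with the lowest expected harm (the utilitarian choice)."""
    return min(candidates, key=lambda c: expected_harm(c["outcome"]))

# Illustrative candidates, loosely modeled on the swerve-left/right dilemma above.
candidates = [
    {"name": "swerve_left_into_suv",
     "outcome": {"expected_fatalities": 0.05, "expected_injuries": 1.5}},
    {"name": "swerve_right_into_motorcyclist",
     "outcome": {"expected_fatalities": 0.8, "expected_injuries": 0.2}},
    {"name": "brake_in_lane",
     "outcome": {"expected_fatalities": 0.3, "expected_injuries": 1.0}},
]

print(choose_maneuver(candidates)["name"])  # -> swerve_left_into_suv
```

The sketch shows where the real difficulty hides: once numbers are assigned, the “ethics” collapses into picking a minimum, and every contested moral judgment lives in how the probabilities and weights were chosen.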

Immanuel Kant and Deontological Ethics

Immanuel Kant fundamentally opposes utilitarianism. Kant focuses on the intentions behind an action rather than on its consequences, as utilitarianism does. Hence, Kantian ethics offers another perspective from which to think about machine ethics. In his Groundwork of the Metaphysics of Morals, published in 1785, he writes, “human being’s moral capacity would not be virtue if not produced by the strength of his resolution in conflict with powerful opposing inclinations.”[13] In layman’s terms, the right action should be decided by right determinations as opposed to deceitful inclinations. He then explains that the moral action should “consider each maxim as though it were to belong to an entire system of maxim making up a prospective system of moral laws for all rational beings.”[14] The intentions of individual choices aggregate into a larger framework of universal morality. Kantian ethics is deontological in that it is founded on the idea that doing what is right means doing one’s duty.[15] Deontological Kantian ethics thus formulates a strict system of laws that determines the right action.

This deontological perspective is useful for autonomous vehicles because it can determine actions based on a set of rules that are computationally tractable. Following this framework, constraints can be placed on the machine’s behavior or “a set of rules that it cannot violate.”[16] In short, an autonomous vehicle would follow a strict set of rules, coded by the programmers, that determines its behavior.
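A minimal sketch of what such hard constraints might look like follows, assuming a rule is simply a predicate that a candidate maneuver must satisfy; the rule names and maneuver attributes are hypothetical illustrations, not an actual vehicle’s rule set.

```python
# A hypothetical sketch of a deontological (rule-constrained) action filter.
# Rules act as hard constraints: a maneuver that violates any rule is never chosen,
# no matter how favorable its expected outcome might be.

RULES = [
    ("never_target_pedestrians", lambda m: not m["hits_pedestrian"]),
    ("never_leave_the_roadway",  lambda m: not m["leaves_roadway"]),
    ("obey_traffic_signals",     lambda m: not m["runs_red_light"]),
]

def permitted(maneuver):
    """A maneuver is permitted only if it violates none of the hard rules."""
    return all(check(maneuver) for _, check in RULES)

def choose_maneuver(candidates):
    """Return the first permitted maneuver; fall back to emergency braking if none is."""
    allowed = [m for m in candidates if permitted(m)]
    return allowed[0] if allowed else {"name": "emergency_brake"}

candidates = [
    {"name": "swerve_onto_sidewalk", "hits_pedestrian": True,
     "leaves_roadway": True, "runs_red_light": False},
    {"name": "turn_right_at_intersection", "hits_pedestrian": False,
     "leaves_roadway": False, "runs_red_light": False},
]

print(choose_maneuver(candidates)["name"])  # -> turn_right_at_intersection
```

Notice that the filter never weighs outcomes against each other; it only asks whether a rule is broken. That is precisely why, as the scenarios below illustrate, a rule-following car can behave lawfully while still clearing the way for a far worse harm.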

Utilitarianism seeks to minimize the number of casualties; Kantian ethics acts on a system of invariable laws. Both systems are useful in the debate over machine ethics, and both are prone to a number of fallacies that are best shown through hypothetical scenarios.

Scenario 1: Rear-end collision

Your robotic car is stopped at an intersection and waits patiently for the children who are crossing in front of you. Your car detects a truck coming up behind you, about to cause a rear-end collision with you. The crash will likely damage your car to some degree and perhaps cause minor injury to you, such as whiplash, but certainly not death. To avoid this harm, your car is programmed to dash out of the way, if it can do so safely. In this case, your car can easily turn right at the intersection and avoid the rear-end collision. It follows this programming, but in doing so, it clears a path for the truck to continue through the intersection, killing a couple of children and seriously injuring others.[17]

The author, Patrick Lin, assumes that the truck’s brakes are not functioning or that the truck is moving too fast to stop in time. Here, the driverless car follows Kantian ethics: its behavior reflects a set of coded rules designed to ensure the safety of the passenger. In doing so, however, it clears the way for the truck to crash into the children, severely endangering their lives. Is this the right ethical decision? The car essentially removes itself from the situation, allowing the truck to kill the children. Should the car then bear some responsibility for their deaths? The answer to that question is deeply troubling.

If not, consider the alternative: the car stays put, ensuring the safety of the children but placing the passenger at risk. This decision follows utilitarian logic. The passenger might suffer minor injuries, yet no one, least of all the children, is likely to die, maximizing utility by minimizing harm for all. However, who would be willing to ride in, let alone buy, an autonomous vehicle that places its passenger at risk for the sake of pedestrians? The same 2016 survey that found a preference for utilitarian cars has an answer:

Although people tend to agree that everyone would be better off if autonomous vehicles were utilitarian, these same people have a personal incentive to ride in autonomous vehicles that will protect them at all costs. Accordingly, if both self-protective and utilitarian autonomous vehicles were allowed on the market, few people would be willing to ride in utilitarian cars, even though they would prefer others to do so.[18]

A paradox exists. This simple example of a rear-end collision already reveals fallacies of both utilitarian and Kantian ethics.

Scenario 2: Does age matter?

Imagine in some distant future, your autonomous car encounters this terrible choice: it must either swerve left and strike an eight-year old girl, or swerve right and strike an 80-year old grandmother. Given the car’s velocity, either victim would surely be killed on impact. If you do not swerve, both victims will be struck and killed.[19]

In this scenario, Patrick Lin does not specify why both victims might be killed if the car does not move. However, there are still important revelations to be made. Some may argue that striking the grandmother is the lesser evil: she has lived a full life with abundant experiences, while the young girl has an entire life ahead of her. Yet most readers would agree that both choices are morally wrong, because the value of a life should not be based on age. Still, refusing to decide, by not swerving, seems much worse. The utilitarian motive prevents the car from killing both; it is better that one die than two. The Kantian decision would be to follow a set of rules that arbitrarily selects the victim. However, a split-second choice between lives, made by dispassionate code written years before the incident by programmers, is as distasteful as the utilitarian decision.

Scenario 3: Helmet

You are barreling down the highway [in the middle lane] with heavy traffic in your self-driving car. Suddenly, a large, heavy object falls off the truck in front of you. Your car cannot stop in time to avoid the collision, so it needs to make a decision: go straight and hit the object, swerve left into a motorcyclist wearing a helmet, or swerve right into a motorcyclist without a helmet.[20]

Here, Patrick Lin proposes an interesting case. He assumes that if the car goes straight, the passenger might be injured; if the car swerves left, the motorcyclist with the helmet might be injured; if the car swerves right, the motorcyclist without the helmet might die. Following utilitarian logic, and presuming that the car prioritizes the safety of its passengers, the correct decision is to hit the motorcyclist with the helmet, thus minimizing the severity of casualties by not killing anyone. However, this decision penalizes the responsible motorcyclist for ensuring his or her safety by wearing a helmet. Is this an ethical decision? If so, all motorcyclists would be better off not wearing helmets in a world of utilitarian autonomous vehicles.
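Under the same kind of naive harm scoring sketched earlier, and with invented numbers, the helmet paradox falls out mechanically: the helmeted rider’s lower probability of dying makes swerving toward him or her the “optimal” choice.

```python
# Illustrative harm estimates for the helmet scenario (all numbers are assumptions).
options = {
    "go_straight_hit_object":        {"p_fatality": 0.10, "p_serious_injury": 0.60},
    "swerve_left_helmeted_rider":    {"p_fatality": 0.05, "p_serious_injury": 0.70},
    "swerve_right_unhelmeted_rider": {"p_fatality": 0.60, "p_serious_injury": 0.35},
}

def harm(outcome, fatality_weight=10.0, injury_weight=1.0):
    """Lower is 'better' under a naive utilitarian rule."""
    return fatality_weight * outcome["p_fatality"] + injury_weight * outcome["p_serious_injury"]

best = min(options, key=lambda name: harm(options[name]))
print(best)  # -> swerve_left_helmeted_rider: the rider is penalized for wearing a helmet
```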

As the three scenarios show, reaching a consensus on the morality of autonomous vehicles is an extremely difficult task. Utilitarian ethics and Kantian ethics offer helpful frameworks for starting to think about these issues; however, both have critical flaws.

Liability Concerns

Conventionally, the responsibility for a car accident lies with the human driver. As argued so far, autonomous vehicles will crash, and their behavior reflects an ethical framework. If so, who should be responsible for the damages and fatalities they cause? The current debate has suggested three answers: the manufacturer, users of the autonomous vehicle with a duty to intervene, and users assuming intrinsic risk.

Holding the manufacturer responsible seems like the most obvious solution. After all, manufacturers are “ultimately responsible for the final product.”[21] Their engineers built the car, and their programmers decided the algorithms governing the car’s behavior. It only seems logical to impute blame to the companies that have the agency to control the car’s actions; present-day litigation attributing liability to the at-fault human driver in a car accident stems from the same logic. However, another concern arises. Scholars argue that if manufacturers were held responsible for accidents caused by their products, the liability burden might hinder future development.[22] If liability concerns outweigh the benefits of technological advances, manufacturers will shy away from risk-taking, and the whole idea of autonomous vehicles becomes moot: no manufacturer would be willing to develop them.

Another popular idea is to impute blame to the users of autonomous cars. Two notions back this idea: the duty to intervene and the assumption of intrinsic risk. Scholars coined the term “duty to intervene” to describe a situation in which human users of autonomous vehicles must stay conscious of the car’s actions and intervene when necessary.[23] If a failure to do so causes an accident, the user assumes liability for it. However, this notion defeats the purpose of an autonomous vehicle, which is to free people from the burden of driving so that they can engage in more productive activities. Moreover, some scholars suggest that human drivers need roughly 40 seconds to regain full situational awareness, far longer than the split-second window in which a decision must be made to avoid a fatality.[24] Hence, if people need to stay alert to the traffic situation most of the time, even without physically steering, what are the benefits of autonomous driving?

Assumption of intrinsic risk denotes the idea that users of autonomous vehicles do not need to pay attention, as they would under the duty to intervene, but instead assume the risk that naturally comes with riding in an autonomous vehicle.[25] Using a car poses inherent risk; hence, a person who rides in an autonomous vehicle assumes that risk, knowing and accepting that the car could cause accidents. However, a moral problem arises when an accident does happen. The innocent user must assume liability for something he or she did not directly cause. Though the user certainly holds a minority stake in the responsibility by choosing to ride, the majority cause of an autonomous vehicle crash lies with the automated system, not the user.

This article is an excerpt from a larger essay. Full-text available upon request.

[1] Thierry Fraichard, “Will the Driver Seat Ever Be Empty,” INRIA 8493 (March 2014), 3.

[2] Patrick Lin, “Why Ethics Matters for Autonomous Cars,” in Autonomous Driving: Technical, Legal and Social Aspects (Berlin: Springer Open, 2016), 71.

[3] Thierry Fraichard and James Kuffner, “Guaranteeing motion safety for robots,” in Autonomous Robots (Berlin: Springer, 2012), 173–175.

[4] Noah Goodall, “Machine Ethics and Automated Vehicles,” in Road Vehicle Automation (Berlin: Springer, 2014), 94.

[5] Alex Davies, “Avoiding Squirrels and Other Things Google’s Robot Car Can’t Do,” Wired, May 27, 2014, https://www.wired.com/2014/05/google-self-driving-car-can-cant/.

[6] Patrick Lin, “The Ethical Dilemma of Self-Driving Cars,” TEDed Animation film, December 8, 2015, https://ed.ted.com/lessons/the-ethical-dilemma-of-self-driving-cars-patrick-lin.

[7] Lin, “Why Ethics Matters for Autonomous Cars,” 69.

[8] Patrick Lin, “The Ethics of Saving Lives With Autonomous Cars is Far Murkier Than You Think,” Wired, July 30, 2013, https://www.wired.com/2013/07/the-surprising-ethics-of-robot-cars/.

[9] John Stuart Mill, On Liberty, Utilitarianism and Other Essays, ed. Mark Philp and Frederick Rosen (Oxford: Oxford University Press, 2015), 120.

[10] Ibid., 122.

[11] Ibid., 155.

[12] Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan, “The Social Dilemma of Autonomous Vehicles,” Science 352, no. 6293 (June 2016): 1574.

[13] Immanuel Kant, Groundwork of the Metaphysics of Morals, ed. Mary Gregor and Jens Timmermann (Cambridge: Cambridge University Press, 2012), 221.

[14] Thomas Powers, “Deontological Machine Ethics,” Association for the Advancement of Artificial Intelligence Fall Symposium Technical Report (November 2005), 1.

[15] Ryan Tonkens, “A Challenge for Machine Ethics,” Minds & Machines 19, no. 3 (August 2009): 428.

[16] Goodall, 98.

[17] Lin, “Why Ethics Matters for Autonomous Cars,” 77.

[18] Bonnefon, Shariff, and Rahwan, 1575.

[19] Lin, “Why Ethics Matters for Autonomous Cars,” 69–70.

[20] Lin, “The Ethical Dilemma of Self-Driving Cars.”

[21] Gary Marchant and Rachel Lindor, “The Coming Collision Between Autonomous Vehicles and the Liability System,” Santa Clara Law Review 52, no. 4 (September 2012): 1329.

[22] Ibid., 1334.

[23] Alexander Hevelke and Julian Nida-Rümelin, “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis,” Science and Engineering Ethics 21 (June 2014): 623.

[24] Lin, “Why Ethics Matters for Autonomous Cars,” 71.

[25] Hevelke and Nida-Rümelin, 626.
