First Ten Pages

After an extended intermission, here are the first ten-plus pages of my draft:

Cars that can drive themselves seem like the stuff of science fiction. They appear in all sorts of futuristic settings in films like Minority Report and I, Robot, easily navigating crowded urban roadways while their owners sit back to be entertained or begin their day's work. At first, these cars seem like a wondrous convenience of the future, allowing people to turn their commute into time to work or prepare for the coming day. The promise of self-driving cars seems immense, but, as the film I, Robot demonstrates, they come with their own dangers. In I, Robot, detective Del Spooner, played by Will Smith, is set upon by two commercial transports and their cargo of activated robots in an effort to silence his attempts to uncover the truth about the death of an eminent robotics scientist. This scene provides an excellent entry point for complicating the idea of self-driving cars, and of modern automation in general. The fear that these systems could be hacked, or could turn against humanity entirely, is ubiquitous in the annals of science fiction. Yet the possibility of a conflict between humans and their robotic servants seems a far-flung prospect when confronted with the state of contemporary robotics and automation. While there are factories filled with highly articulated and precisely programmed robotic welding arms and other complex machines, anthropomorphic robots still elude the grasp of contemporary science and engineering. Scientists and engineers are still struggling to design a humanoid robot that can traverse a set of stairs reliably, let alone simulate the complex range of human movements and behaviors. Software that can mimic or approximate human intelligence has even farther to go.

While humanoid robots may be a long way off, simpler automated systems already have an impact on the everyday lives of many people. Complex algorithms and programs are an integral part of the most mundane activities of daily life. When someone swipes a credit card, they are feeding information into a host of algorithms and systems that help credit card companies track consumer trends and attempt to prevent fraud. Most people never notice this web of information and algorithms because it rarely affects them directly; frequent travelers, however, know it all too well when a card is declined at a gas station or while paying for lunch. The ever increasing presence of algorithms and autonomous devices in people's daily lives is well illustrated by the development of autonomous, self-driving cars. Companies like Google are leading the way in this race with innovative designs and real-world tests of their products.[i] The response to these public tests and to the current state of self-driving car technology has been mixed. Many people enjoy the novelty of the idea of a self-driving car, and others welcome the possibility of a safer, more predictable driving environment; others, especially those at the National Highway Traffic Safety Administration (NHTSA), wish to take a more cautious approach.[ii] The concerns of the NHTSA must be taken very seriously if self-driving cars are to be part of the near future, since the NHTSA helps decide which vehicles are allowed on America's roads. Not only do self-driving cars raise a host of safety concerns, but they also pose serious questions regarding ethics and liability law.

Self-driving cars may be a few decades from full maturity, but the questions they raise need to be answered if they are to exist in contemporary society. The most pressing of these questions is what happens when a self-driving car gets into an accident in which harm is unavoidable. This question poses serious ethical and legal problems because the car is no longer a passive element; the driver may be ceding most, if not all, of their control of the vehicle to the algorithms in the car. For instance, imagine a self-driving car is turning a corner and cannot detect what is on the other side. As the car rounds the bend, it detects a group of pedestrians crossing the street so close to the car's projected trajectory that it cannot stop in time to avoid hitting them. At this point, the computer has to make a choice: it can apply the brakes and still hit the group of pedestrians, it can swerve into oncoming traffic in an attempt to avoid them, or it can swerve into the building complex that blocked its sensors from detecting the pedestrians in the first place. The dilemma facing the car's driving algorithms eerily matches a classic philosophical puzzle, the trolley problem, in which a bystander can divert a runaway trolley away from a group of people on the tracks, but only at the cost of harming someone else. This new trolley problem is complicated by the fact that the driver is not directly responsible for the actions of the machine, because they have little control over what the machine actually does. As much as programmers may hate to admit it, these situations must be anticipated and accounted for if these vehicles are to be considered safe. Some scholars have already chimed in on this topic…..
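
To make the dilemma concrete, here is a minimal, purely illustrative sketch (not any manufacturer's actual control code) of how a utilitarian "minimize expected harm" rule of the kind discussed later in this paper might look in software. The maneuver names and harm estimates are hypothetical placeholders.

```python
# Illustrative sketch only: a utilitarian harm-minimization rule for an
# unavoidable-crash scenario. All numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_injuries: float   # estimated harm to people outside the car
    occupant_risk: float       # estimated harm to the car's own passengers

def utilitarian_choice(options):
    """Pick the maneuver that minimizes total expected harm,
    counting occupants and bystanders equally."""
    return min(options, key=lambda m: m.expected_injuries + m.occupant_risk)

# The three options from the scenario above, with made-up harm estimates.
options = [
    Maneuver("brake and hit the pedestrians", expected_injuries=4.0, occupant_risk=0.1),
    Maneuver("swerve into oncoming traffic", expected_injuries=2.0, occupant_risk=1.5),
    Maneuver("swerve into the building", expected_injuries=0.0, occupant_risk=1.0),
]

print(utilitarian_choice(options).name)  # -> "swerve into the building"
```

Even this toy example makes the ethical stakes visible: someone has to decide how occupants are weighed against bystanders, which is exactly the question the rest of this paper takes up.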

(Question and thesis) This research paper's goal is to bring a philosophical, ethics-driven perspective to the issue of what happens when a self-driving car has an accident in which harm is unavoidable, viewing the problem through the frameworks of machine ethics and utilitarian ethics. The paper's core research questions are how self-driving cars should be understood as moral entities and what ethical system should govern how a self-driving car behaves in an unavoidable accident. Moving from these questions, this paper will explore the current ethical and legal paradigm in which self-driving cars exist and, after careful consideration, attempt to show that self-driving cars are best governed by a utilitarian ethical framework.

The discussion of the legal and ethical implications of self-driving cars begins with a collaborative study entitled Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars? by Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan.[iii] Their article is an excellent starting point for a discussion of the current ethical and legal landscape of self-driving cars because it provides a broad overview of the issues involved. On the legal side, the authors note that self-driving cars will require a new set of regulations to clarify what should happen in the event of unavoidable harm, and that there is an ongoing debate over how current and future legal structures will handle the ambiguity of liability created by self-driving cars.[iv] As Bonnefon, Shariff, and Rahwan's legal discussion winds down, it shifts quickly to the ethical issues surrounding self-driving cars.[v] The core ethical issue facing self-driving cars is the allocation of harm in the event of an unavoidable crash. Bonnefon et al. contend that self-driving cars will need some form of morality programmed into them so that they can allocate the harm of an accident among the different parties involved.[vi] From the need for self-driving car algorithms to allocate harm, the article moves to different methods of harm allocation. Bonnefon et al. begin with the utilitarian principle that the greatest number of people should be protected in the event of unavoidable harm. A utilitarian allocation of harm means that a single pedestrian or driver may be sacrificed when a group of pedestrians is crossing the street and the car cannot stop in time using its brakes.[vii] After exploring this utilitarian avenue of harm allocation, Bonnefon et al. discuss, and advocate for, a more empirical and data-driven approach to answering the ethical questions raised by self-driving cars.[viii] In sum, their article is one of only a few to examine the issues surrounding self-driving cars from an ethics-based perspective. It contains a solid roadmap for further discussion of these issues, beginning with an inspection of the legal dimensions of self-driving cars and later moving toward an in-depth discussion of ethical theories and solutions to the problem of allocating harm in accidents.

Looking at the legal landscape of self-driving cars, there are two clear questions. The first concerns the effect of self-driving cars on liability law and insurance: who is liable for damages caused by a self-driving car? One paper inspecting this conundrum is Responsibility for Crashes of Autonomous Vehicles by Alexander Hevelke and Julian Nida-Rümelin.[ix] Their work uses a legal-ethics perspective to examine the problem of liability and to test several possible legal solutions. Initially, they test the idea that the driver should be responsible for intervening in the event of an accident.[x] Having the driver intervene means that the liability of driving would continue to fall on the driver and any legal proceedings would occur normally. While this seems to be a simple solution, Hevelke and Nida-Rümelin show that it is problematic because accidents are not always caused in an obvious fashion, and autonomous cars would be required to cede control to the operator at a time when there is little or nothing the operator could do.[xi] Their argument relies on the idea that liability implies a duty to take reasonable precautions and actions to minimize harm, and there is no way to guarantee that the driver would have enough time to take reasonable action after regaining control of the vehicle.[xii] Another liability solution that Hevelke and Nida-Rümelin inspect is the idea that the driver is responsible for an accident simply by virtue of using the vehicle that caused it.[xiii] This idea follows the reasoning that every time a person uses a car, there is a risk that the car could get into an accident.[xiv] When someone uses a vehicle, they accept the risk that they will cause harm to themselves or others in a crash.[xv] Hevelke and Nida-Rümelin then construct a dichotomy between two plausible scenarios in which this thinking would be applied. In the first scenario, the owner of the car assumes the risk of owning the car itself.[xvi] Each person who owned a self-driving car would then have an equal share in the responsibility for any accidents involving self-driving cars, which could lead to a system where owners of self-driving cars each paid a tax or a mandatory insurance fee.[xvii] In the second scenario, the operator of the car at the time of the accident would be held liable.[xviii] Hevelke and Nida-Rümelin argue that this scenario relies largely on the luck, or lack thereof, of the person using the vehicle.[xix] They liken the situation to accidents caused by common driving mistakes like speeding or getting distracted; while many people speed or drive distracted every day, few of them get into accidents or get caught by police.[xx] The legal system therefore punishes the unlucky, whose ordinary bad driving habits happen to have caused an accident.[xxi] Given these two scenarios of driver-based liability, Hevelke and Nida-Rümelin argue that driver-intervention and vehicle-owner-based forms of liability are both plausible options for allocating liability in self-driving car accidents.

The second issue that self-driving cars face is government regulation and safety standards. The discussion of how to define and regulate self-driving cars is currently unfolding through the National Highway Traffic Safety Administration (NHTSA) and state regulators. The NHTSA proposes a cautious set of rules in its Preliminary Statement of Policy Concerning Automated Vehicles, ranging from limiting current use to technical testing to requiring a simple and timely way for a driver to regain control of the vehicle.[xxii] This policy statement is likely to guide a significant portion of future legislation regulating the design and testing of self-driving cars. Currently, the California Department of Motor Vehicles is spearheading this discussion by creating a preliminary set of regulations for the widespread use of self-driving cars.[xxiii] Their regulatory scheme covers four main points: the cars must be certified by the manufacturer and a testing organization, they must be operated by licensed drivers, deployment is governed by a three-year permit system, and personal information and cybersecurity must be protected.[xxiv] This set of regulations echoes the core sentiment of the NHTSA's preliminary policy statement by focusing on safety requirements and a limited scope of operation. In the end, the current regulatory discussion of self-driving cars is dominated by the need to improve safety while fostering the development of self-driving car technology.

Shifting gears to the philosophical discussion of self-driving cars, this paper will consider how harm should be allocated in the event of an unavoidable crash by using two philosophical systems, machine ethics and utilitarian ethics. Machine ethics is a discipline that focuses on computational systems and their moral implications.[xxv] Its central questions concern how programs and machines are defined as moral entities, ranging from amoral tools to fully moral agents, and how philosophers should ascribe ethical responsibility to these differing moral entities.[xxvi] Computers problematize the basic moral structure of free will by creating systems in which basic decision making can be constrained or completely circumvented by automation.[xxvii] Moving beyond the moral implications of automated systems, machine ethics also looks toward the future by attempting to construct systems for the programming and “education” of artificial intelligences (AIs) that may have the capacity to become autonomous moral agents.[xxviii] An autonomous moral agent (AMA) is a computer or program that can assess its actions in a given situation and calculate their effects on sentient beings in order to make a morally relevant decision.[xxix] While this aspect will not feature heavily in the discussion of the ethics of self-driving vehicles, the main proponents of AMAs, Wallach and Allen, formulate important arguments for the imposition of ethical codes on machines and for the classification of ethical machines.[xxx] Machine ethics is critical to any discussion of self-driving vehicles since self-driving cars are, by definition, taking control from human beings and putting it in the hands of computer algorithms, which is directly within the purview of this type of ethical philosophy. Thus, machine ethics plays an integral role by providing a theoretical framework through which self-driving cars can become part of the landscape of ethical philosophy.

While machine ethics integrates self-driving cars into the realm of philosophical discussion, it does not provide strong normative claims to guide a discussion of how self-driving cars should be programmed. To that end, utilitarianism, as a normative theory, will be applied to self-driving cars to construct a normative framework for their programming and construction. Utilitarianism is a normative ethical theory, meaning it attempts to create a system that distinguishes right from wrong on certain grounds. John Stuart Mill outlined the basic elements of utilitarianism in his 1861 book Utilitarianism.[xxxi] The core of Mill's utilitarian theory is that humans should strive to generate the greatest amount of happiness possible.[xxxii] This "greatest happiness principle" is derived from an empirical argument: because all people desire happiness, happiness must be a goal for all people.[xxxiii] Since happiness is a universal human goal, we are ethically obligated to work toward making the greatest number of people as happy as possible.[xxxiv] Mill's line of reasoning has generated contentious discussion, and many scholars have attempted to refute or discredit his argument.[xxxv] Even in Utilitarianism, Mill spends time defending his theory from detractors. In response to the critique that utilitarianism promotes an obsession with purely physical pleasure, Mill makes an important distinction between carnal, lower pleasures (things like eating and sex) and higher pleasures, like listening to music or admiring art, which are to be sought and preferred over the lower pleasures.[xxxvi] This defense is important because it helps establish utilitarianism as a philosophy that can be refined and focused on specific issues while maintaining its core elements. Another important utilitarian thinker is Peter Singer. His book Practical Ethics expands on utilitarian thought by asking how human equality should be understood in light of utilitarian ethical theory.[xxxvii] For Singer, equality is a multi-faceted term whose implications depend on which element of equality one is looking at.[xxxviii] For instance, equal consideration of interests implies that each person is equally entitled to care, and that those whose need is greater, or a greater number of people with the same need, take precedence over someone with a less pressing need.[xxxix] Equal consideration of interests is radically different from basic equal treatment, which implies giving everyone the same treatment regardless of their situation.[xl] As a tangible example of these two conceptions of equality, imagine two people who are hungry: one has not eaten for almost three days, the other is hungry because it is lunchtime. If there is only enough food for one person, who should be given the food? An equal treatment philosophy would suggest dividing the food in half, or perhaps flipping a coin to see who gets it. Equal consideration of the two persons' interests would recognize that the person who has not eaten in three days should get the food, because their need is far more pressing than that of someone who is merely hungry for lunch.
The difference between equal treatment and equal consideration of interests is an important distinction in contemporary ethics because it essentially allows for a cost-benefit analysis of moral decisions and calls on people who are significantly well off, especially people in the United States, to assist those who struggle to meet their basic needs. This treatment-versus-consideration distinction also factors heavily into this paper's utilitarian ethical argument, as sketched below.
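
For readers who find it helpful to see the distinction spelled out mechanically, here is a small, purely illustrative sketch of the food example above; the "need" scores are hypothetical stand-ins for however urgency might actually be judged.

```python
# Illustrative sketch of equal treatment vs. equal consideration of interests,
# using the food example above. The numeric "need" scores are hypothetical.
import random

people = {"has not eaten in three days": 10, "hungry for lunch": 2}

def equal_treatment(people):
    """Ignore need entirely: pick a recipient at random (a coin flip)."""
    return random.choice(list(people))

def equal_consideration(people):
    """Weigh everyone's interests equally, then give the food to
    whoever has the most pressing need."""
    return max(people, key=people.get)

print(equal_treatment(people))      # either person, purely by chance
print(equal_consideration(people))  # -> "has not eaten in three days"
```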

Defining Self-Driving Cars

Moving from a general overview of self-driving cars, our focus turns to the first part of the questions at hand: namely, how should self-driving cars be defined in terms of their moral capacity? This question is critical to answering the more pressing questions because defining a self-driving car's capacities, or lack thereof, will be a key determinant of what ethical theory such cars are capable of following. This matters because most ethical theories require a rational agent in order to hold someone morally responsible for their actions. For instance, most moral philosophers would not hold a small child or a severely developmentally disabled person responsible for small moral infractions. Similarly, people do not put cars on trial for killing someone in a car accident. This question of moral responsibility balloons into a question of definition, as many moral philosophers take stands on how and when people become morally responsible for their actions.

For self-driving cars, this question becomes all the more potent, as machine ethics prescribes different moral rules and capacities based on how machines are classified. Separating machines by their ethical capacities requires a different set of rules than most moral philosophers would use to identify human moral capacities and development. Wallach and Allen outline an excellent system for determining the ethical capacities of machines and programs in their book Moral Machines: Teaching Robots Right from Wrong.[xli] Their system relies on a two-part scale that measures machines on their ethical sensitivity, that is, their capacity to detect ethical dilemmas and apply ethical theories, and on their autonomy.[xlii] They see machines as falling into four main categories: amorality, operational morality, functional morality, and full moral agency.[xliii] Amoral machines lack any real ethical sensitivity or autonomy in their operation; a can opener, for instance, has neither. Next, operational morality occurs when machines are designed with a moral purpose in mind; a firearm with a built-in safety lock is a device with operational morality. Functional morality takes operational morality one step further. Machines with functional morality have sufficiently high autonomy or ethical sensitivity that they are beyond the realm of operational morality. For instance, a plane's autopilot has a significant amount of control over the operation of the plane, yet there are specific guidelines and limits to its operation which help secure the safety and comfort of the passengers inside.[xliv] Thus, the autopilot has functional morality because its high degree of autonomy moves it beyond the realm of operational morality. Another way to think of the difference between operational and functional morality is to contrast parental guidance software with MedEthEx, an ethical guidance program for medical providers.[xlv] Comparing these two software systems, the parental guidance software exhibits operational morality by blocking or restricting the websites a child can visit, with no regard for the actual content of the site or the reasoning behind why the site was blocked. This simple form of moral programming stands in contrast to the complexities of MedEthEx, which is designed to help physicians navigate difficult questions of medical ethics.[xlvi] In this example, the contrasting value is ethical sensitivity: MedEthEx exhibits functional morality because it can guide a physician through ethical gray areas, like when to perform an abortion or which medical procedure to recommend. The parental control software, by contrast, is concerned only with a simple set of parameters, essentially "good" and "bad" websites, regardless of a site's content or the reason someone is visiting it. Theoretically, a parent could use a parental control program to block sites they dislike for personal or political reasons; the software and the child have no way of knowing why the content was blocked, merely that the website was on the list of unwanted websites. This lack of ethical sensitivity places parental control software squarely in the realm of operational morality, while MedEthEx's capacity for ethical sensitivity places it at the level of functional morality.
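
As a rough, purely illustrative way to picture Wallach and Allen's two-dimensional scale, the sketch below maps hypothetical autonomy and ethical-sensitivity scores onto their four categories. The numeric scores and thresholds are invented for illustration; Wallach and Allen describe the scale qualitatively, not numerically.

```python
# Illustrative sketch of Wallach and Allen's autonomy / ethical-sensitivity
# scale. The 0-10 scores and thresholds are invented for illustration only.

def moral_category(autonomy: float, ethical_sensitivity: float) -> str:
    """Place a machine on a rough version of the four-category scale."""
    if autonomy >= 9 and ethical_sensitivity >= 9:
        return "full moral agency (AMA)"
    if autonomy >= 5 or ethical_sensitivity >= 5:
        return "functional morality"
    if autonomy > 0 or ethical_sensitivity > 0:
        return "operational morality"
    return "amorality"

examples = {
    "can opener": (0, 0),                    # no autonomy, no sensitivity
    "firearm with safety lock": (0, 2),      # designed with a moral purpose
    "airplane autopilot": (7, 1),            # high autonomy, little sensitivity
    "MedEthEx": (1, 7),                      # low autonomy, high sensitivity
    "hypothetical self-driving car": (8, 4),
}

for name, (autonomy, sensitivity) in examples.items():
    print(f"{name}: {moral_category(autonomy, sensitivity)}")
```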

Moving beyond operational and functional morality, there is a final, and mostly theoretical, stage of machine morality: the fully autonomous moral agent (AMA).[xlvii] The autonomous moral agent is the holy grail of machine ethics and computer science. Full AMAs would be capable of making moral decisions without help or input from any human; they would be fully moralized AIs with the capacity to defend, and take responsibility for, their actions.[xlviii] Most, if not all, of the AMAs familiar today are products of fiction. Robots like Sonny from the movie I, Robot or Bender Bending Rodriguez, the chain-smoking, serially felonious robot from Futurama, are examples of AMAs.[xlix] While their actions may not be seen as positive or permissible, they are solely morally responsible for those actions and are in charge of making decisions for themselves. Although AMAs of the caliber seen in science fiction are years away, much of the work in the field of machine ethics is focused on how to build and program AMAs. Taking a step backward from AMAs, self-driving cars are most likely to find themselves in the realm of either operational or functional morality.

[i] “Google Self-Driving Car Project.” Google Self-Driving Car Project. Accessed March 20, 2016. http://www.google.com/selfdrivingcar/.

[ii] United States. Department of Transportation. National Highway Traffic Safety Administration. Preliminary Statement of Policy Concerning Automated Vehicles.

[iii] Bonnefon, Jean-François, Azim Shariff, and Iyad Rahwan. “Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?” arXiv preprint arXiv:1510.03346 (2015).

[iv] Ibid.

[v] Ibid.

[vi] Ibid.

[vii] Ibid.

[viii] Ibid.

[ix] Hevelke, Alexander, and Julian Nida-Rümelin. “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis.” Science and Engineering Ethics 21, no. 3 (2014): 619-30.

[x] Ibid.

[xi] Ibid.

[xii] Ibid.

[xiii] Ibid.

[xiv] Ibid.

[xv] Ibid.

[xvi] Ibid.

[xvii] Ibid.

[xviii] Ibid.

[xix] Ibid.

[xx] Ibid.

[xxi] Ibid.

[xxii] United States. Department of Transportation. National Highway Traffic Safety Administration. Preliminary Statement of Policy Concerning Automated Vehicles.

[xxiii] “DMV Releases Draft Requirements for Public Deployment of Autonomous Vehicles.” California Department of Motor Vehicles. Accessed March 30, 2016. http://dmv.ca.gov/portal/dmv/detail/pubs/newsrel/newsrel15/2015_63.

[xxiv] Ibid.

[xxv] Noorman, Merel. “Computing and Moral Responsibility.” Stanford University. 2012. Accessed March 30, 2016. http://plato.stanford.edu/entries/computing-responsibility/#RetConMorRes.

[xxvi] Wallach, Wendell, and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press, 2009.

[xxvii] Noorman, Merel. “Computing and Moral Responsibility.” Stanford University. 2012. Accessed March 30, 2016. http://plato.stanford.edu/entries/computing-responsibility/#RetConMorRes.

[xxviii]Wallach, Wendell, and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press, 2009.

[xxix] Ibid.

[xxx] Ibid.

[xxxi] Wilson, Fred. “John Stuart Mill.” Stanford University. 2002. Accessed April 01, 2016. http://plato.stanford.edu/entries/mill/#MorUti.

[xxxii] Mill, John Stuart, and George Sher. Utilitarianism. Indianapolis: Hackett Pub., 2001.

[xxxiii] Ibid.

[xxxiv] Ibid.

[xxxv] Wilson, Fred. “John Stuart Mill.” Stanford University. 2002. Accessed April 01, 2016. http://plato.stanford.edu/entries/mill/#MorUti.

[xxxvi] Mill, John Stuart, and George Sher. Utilitarianism. Indianapolis: Hackett Pub., 2001.

[xxxvii] Singer, Peter. Practical Ethics. Cambridge: Cambridge University Press, 1979.

[xxxviii] Ibid.

[xxxix] Ibid.

[xl] Ibid.

[xli] Wallach, Wendell, and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press, 2009.

[xlii] Ibid.

[xliii] Ibid.

[xliv] Ibid.

[xlv] Anderson, Michael, Susan Leigh Anderson, and Chris Armen. “MedEthEx: A Prototype Medical Ethics Advisor.” American Association for Artificial Intelligence, 2006, 1759-765. Accessed April 5, 2016.

[xlvi] Ibid.

[xlvii] Wallach, Wendell, and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press, 2009.

[xlviii] Ibid.

[xlix] I, Robot. Performed by Will Smith. Beverly Hills, CA: 20th Century Fox, 2004; Futurama. Beverly Hills, CA: Twentieth Century Fox Home Entertainment, 2003.