Research Statement Draft #1

Autonomous vehicles, also known as self-driving cars, present a unique ethical dilemma generated by the advance of human scientific knowledge and technological prowess. Specifically, autonomous vehicles must be prepared for scenarios in which harm, whether to the vehicle, the driver, or pedestrians, is inevitable; these can be considered no-win situations. Whereas a human driver would normally have personal agency in and responsibility for their choices in a no-win situation, a self-driving vehicle is governed purely by algorithms designed to act in certain ways depending on the inputs the vehicle receives. Autonomous vehicles thus become a rethinking of the classical trolley problem, which asks whom we should save in a hypothetical situation where we have the power to divert a runaway trolley that will otherwise harm one of two groups of people, with varying consequences. I plan to examine the ethical problems of self-driving cars through the lenses of utilitarianism and applied machine ethics. I will ask questions such as: Should autonomous vehicles be programmed to behave in a particular way in no-win situations? If so, who should choose the optimal settings? Should these settings be imposed or mandated in some way? What about ceding control to the driver at the last possible second?

Looking beyond these questions, I will survey the current literature on utilitarian ethics as well as the literature on machine ethics. I also want to learn as much as I can about the technical capacities of autonomous vehicles and about any existing ethical literature concerning them. Finally, I will draw comparisons between autonomous vehicles and autonomous weapons platforms: both are emerging technologies with a significant capacity to harm, and both are governed entirely by decision algorithms.