Research Statement Draft #1

Autonomous vehicles, also known as smart cars, present a unique ethical dilemma generated by the advance of human scientific knowledge and technological prowess. Specifically, autonomous vehicles must be prepared for scenarios in which harm, whether to the vehicle, the driver, or pedestrians, is inevitable; these can be considered no-win situations. Normally, a driver would have personal agency in and responsibility for their choices in a no-win situation, but a self-driving vehicle is governed purely by algorithms designed to act in certain ways depending on the inputs the vehicle receives. Autonomous vehicles thus prompt a rethinking of the classical trolley problem, which asks whom we should save in a hypothetical situation where we have the power to change the course of a runaway trolley that will harm one of two groups of people, through various means and with various consequences. I plan to examine the ethical problems of self-driving cars through the lenses of utilitarianism and applied machine ethics. I will be asking questions like: should autonomous vehicles be programmed to behave in a certain way in no-win situations? If so, who should choose the optimal settings? Should these settings be imposed or mandated in some way? What about ceding control to the driver at the last possible second?

Looking beyond these questions, I will be reviewing the current literature on utilitarian ethics as well as the literature on machine ethics. I also want to learn as much as I can about the technical capacities of autonomous vehicles and any current ethical literature concerning them. I will also draw comparisons between autonomous vehicles and autonomous weapons platforms, in that both are emerging technologies with a significant capacity to harm that are governed entirely by decision algorithms.

12 thoughts on “Research Statement Draft #1”

  1. I think this is a really great start. I would maybe just slightly elaborate on what some of the terms mean. Quick definitions of things like utilitarianism and applied machine ethics would help expand on what your paper is about.

  2. The topic is very intriguing and I think you have many great points regarding the ethics of autonomous vehicles. One thing is that I am not exactly clear on what your theoretical framework is. Is utilitarian ethics your theoretical framework?

    • And, as a follow up, why would you select utilitarianism over the various competing ethical systems? Utilitarianism being about “maximizing the good” will require a working definition of “good”. I can’t wait to see the systems you use to answer those.

      • Agree with both comments. Is Utilitarianism your theoretical framework or is it an aspect of your analysis? Same kind of need for clarity as noted above.

  3. As someone who knows absolutely nothing about this topic, this is super interesting Theo! I think this is a great start. A couple of suggestions: first, maybe elaborate a little bit more on your theoretical framework and what that will mean as a lens for your research. Secondly, make sure that readers who don’t know much about the topic are able to understand some of the vocabulary you are using. Can’t wait to see how your research progresses, great job!

  4. I think this is a very unique topic, one which I don’t know a lot about. I’m not really sure what some of the terms are and which theoretical framework you are using, so I think it might help if you defined some of the terms so the readers will get a better understanding of what your paper is about.

  5. Theo, this is a great start and it hits on all the key aspects of your research undertaking. But, your colleagues are correct to ask you for more clarification about the terms, including more substantive definitions. Same for the theoretical framework that is implied more than sketched out here.
    I would also recommend some discussion of the “Why”–why this is important to consider, understand. Where is this going to end up in our society? Are there policy proposals that are engaging with these questions? Finally, wondering where some of this is being imagined? Does it show up in speculative fiction? Film? Television? Popular culture may be an asset for your considerations. (Saw a perspective on these questions recently on an episode of “Elementary.” Fairly mainstream, but with some useful diversions from your stream of thought.)

    • Thanks for the feedback. I’ve been toying around with discussing Asimov’s three laws of robotics, especially in regards to the problem of autonomous weapons platforms. I would definitely be open to incorporating fictional examples and discussions into my work.

  6. Theo, I think this is going to be a fascinating paper. I agree with the above comments about being clear on the framework and defining the terms. Furthermore, as Austin pointed out, why utilitarian ethics? I think the comparison to autonomous military technology will be an interesting one.

  7. Theo,

    Are you still considering incorporating information about opt-in versus opt-out programs? How could/would settings be changed or taken into consideration if someone or a group of people are at fault and they are the individuals saved?

  8. I am also a little confused by some of the terms. Are you going to compare two different systems as a methodology, then? If so, would there be a different paradigm for different variations in research and development, and in production schedules? Just curious, because I used to work with military contracts and it was a confusing industry.
