As technology advances at an astounding rate, people rarely stop to ask whether we should, rather than whether we can. Often, innovation happens for its own sake, without a clear reason or just cause. But as technology grows smarter and more entwined with everyday life, questions about the ethics of technology are becoming important topics of discussion. One of the main points of contention falls on the issue of autonomous cars. The issue is simple yet nuanced: should we trust cars to drive themselves?
The main argument for autonomous cars is that, statistically, they are safer than human drivers. When the first fatal crash involving Tesla's Autopilot system occurred, Tesla was quick to note that, on average, there is a fatality every 94 million miles of human driving, while the first Autopilot-related death came after some 130 million miles on the system [3]. And this is with a relatively young autopilot system. As the technology for autonomous cars continues to improve and advance, the likelihood of fatal crashes should plummet. From a purely utilitarian standpoint, this would greatly increase the net happiness in the world: the lives that could be saved by worldwide adoption of autonomous vehicles would certainly be significant.
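The comparison behind Tesla's claim can be made concrete with a quick back-of-the-envelope calculation. This is only a sketch of the arithmetic using the figures cited above, not a rigorous statistical comparison (a single fatality is far too small a sample to draw firm conclusions from):

```python
# Rough fatality-rate comparison using the figures cited in the text.
# These are the numbers Tesla cited in 2016, not a controlled study.

human_miles_per_fatality = 94e6       # U.S. average: one fatality per ~94 million miles
autopilot_miles_per_fatality = 130e6  # miles driven on Autopilot before the first fatality

human_rate = 1 / human_miles_per_fatality         # fatalities per mile
autopilot_rate = 1 / autopilot_miles_per_fatality

print(f"Human fatality rate:     {human_rate:.3e} per mile")
print(f"Autopilot fatality rate: {autopilot_rate:.3e} per mile")
print(f"Autopilot rate is {autopilot_rate / human_rate:.0%} of the human rate")
```

On these figures the Autopilot rate works out to roughly 72% of the human rate, which is the gap the utilitarian argument rests on.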
But safety shouldn’t be the only factor in deciding whether autonomous cars are ethical. Even if machines are less prone to error than humans, is it ethical at all to leave human lives in the hands of unfeeling machines, even when doing so is statistically safer? After all, autonomous vehicles are designed to follow strict sets of rules in the form of an algorithm, but there will always be situations where rules should be broken for the greater good. For example, an autonomous vehicle should never break the speed limit by law. But what if the people inside the car need to speed because of a medical emergency?
Secondly, how should engineers program cars to minimize risk? A commonly cited issue is the comparison of the trolley problem to autonomous cars [1]. The trolley problem asks: if a runaway trolley will kill five people, would you pull a lever to divert it onto a track where it would kill one person instead? Knowing nothing about the people who would die, many people would decide to pull the lever, since saving five lives at the expense of one seems like the right choice. But a variation of the problem asks whether you would stop the trolley from killing the five by pushing a man onto the track. Even though the question is structurally similar, the answers become more varied: for many people, the difference between being directly involved in the act of killing and merely flipping a switch is drastic. How would someone account for this in a vehicle's design? If a car were put into such a scenario, how should its programming handle the situation? Should cars prioritize the safety of their passengers over others? These hard questions must be answered by the engineers programming the car, which takes the decision away from the passenger and places it in the cold hands of the vehicle.
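The point that some policy must be chosen in advance by the programmer can be illustrated with a deliberately simplified sketch. Everything here is hypothetical: the `Outcome` class, the `choose_action` function, and the harm counts are invented for illustration and do not reflect any real vehicle's software.

```python
# Hypothetical sketch: a trolley-style choice encoded as an explicit policy.
# The programmer, not the passenger, decides which policy the car follows.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    passengers_harmed: int
    bystanders_harmed: int

def choose_action(outcomes, prioritize_passengers=False):
    """Pick the outcome a (hypothetical) policy prefers.

    A utilitarian policy minimizes total harm; a passenger-first policy
    minimizes harm to occupants, breaking ties by total harm.
    """
    if prioritize_passengers:
        key = lambda o: (o.passengers_harmed,
                         o.passengers_harmed + o.bystanders_harmed)
    else:
        key = lambda o: o.passengers_harmed + o.bystanders_harmed
    return min(outcomes, key=key)

# A trolley-style scenario: staying harms five pedestrians, swerving harms the passenger.
options = [
    Outcome("stay in lane", passengers_harmed=0, bystanders_harmed=5),
    Outcome("swerve", passengers_harmed=1, bystanders_harmed=0),
]

print(choose_action(options).description)                              # "swerve"
print(choose_action(options, prioritize_passengers=True).description)  # "stay in lane"
```

The two `print` lines show the crux of the dilemma: the same scenario yields opposite actions depending on a single parameter that someone had to set before the car ever left the factory.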
The answer isn’t as black and white as most people would like. Because cars are so prevalent in all parts of society, this issue affects almost everyone, regardless of race or social standing. Of the three ethical tests, a utilitarian test seems the most apt. A virtue test doesn’t apply well, since the issue affects everybody in different ways and there is no particular virtue tied to the invention of autonomous cars. A justice test wouldn’t be helpful either, as the arguments do not pertain to the distribution of burdens and benefits.

Even under a utilitarian test, the issue remains opaque. Autonomous cars would certainly reduce the number of accidents, which increases overall happiness, and engineers could program the cars simply to try to save the most people; in the trolley scenario, the car would save the five at the expense of the one. But there are also utilitarian costs. People value freedom highly, and the freedom to speed and to drive in a not-so-safe manner is still one that many people prize; taking it away would certainly be a loss in happiness. It really comes down to what we value more as a society. We must ask ourselves whether it is worth giving up freedoms and personal decisions in exchange for increased general safety. But a simple compromise follows from this conclusion: someone who values general safety more can get an autonomous car, while a person who values freedom and personal decision-making more can drive a normal car. While the integration of self-driving cars with human drivers isn’t ideal, this lets both parties decide which they value more.
- [1] Lin, Patrick. “The Ethics of Autonomous Cars.” The Atlantic, Atlantic Media Company, 8 Oct. 2013, www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/.
- [2] Stanford University. “Stanford Professors Discuss Ethics Involving Driverless Cars.” Stanford News, Stanford, 1 Sept. 2017, news.stanford.edu/2017/05/22/stanford-scholars-researchers-discuss-key-ethical-questions-self-driving-cars-present/.
- [3] Yadron, Danny, and Dan Tynan. “Tesla Driver Dies in First Fatal Crash While Using Autopilot Mode.” The Guardian, Guardian News and Media, 30 June 2016, www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk.