The Ethics Of Self-Driving Cars:
Can We Be Rational With New Technology?

March 20th, 2018

By Michael Solomon, 10x Management Co-Founder 

Two days ago, a self-driving Uber SUV struck and killed a female pedestrian in Arizona. It is the first known pedestrian fatality involving a self-driving vehicle. The accident is tragic and raises an important question: who’s to blame? When a human driver strikes a pedestrian, blame can usually be placed on one party or the other, or on both. We’re used to those situations. But who’s to blame here?

  • Is it self-driving technology that’s just not ready for real-world tests?
  • Is it Uber for testing autonomous vehicles before they’re ready?
  • Is it the safety driver who was in the car and meant to be monitoring the vehicle?
  • Is it the pedestrian who was crossing the street outside a crosswalk and perhaps not paying close attention?
  • Is it the state of Arizona for allowing autonomous vehicle tests?

If we’re blaming self-driving technology for the death of this pedestrian, then that seems like an arbitrary place to stop assigning blame:

  • Why not blame Karl Benz, who is largely credited with inventing the automobile in Germany in 1885?
  • Why not blame Henry Ford, who facilitated mass production of cars with the assembly line?
  • Why not blame American inventor William Bertelsen, who envisioned and championed self-driving vehicles in the 1960s?
  • Why not blame me for allowing my government to pass this legislation?
  • Why not blame the pedestrian’s parents for raising someone who would cross the street outside a crosswalk or without sufficient vigilance to stay safe?

I don’t mean to be insensitive, but technological progress happens all the time, and we’d be foolish to halt that progress because accidents happen. It concerns me that Uber has stopped all autonomous vehicle testing because one pedestrian was killed. What if the autonomous vehicles already on the roads were saving lives, precisely because they are, in fact, safer than human drivers?

We need to have an honest conversation about technology and the ethical ramifications of its integration into society. If self-driving cars were to kill 100 pedestrians per year, while human drivers kill roughly six thousand pedestrians per year in the United States (that part is fact), then clearly self-driving cars would be a net positive for humanity, even though they are not perfect.

We can’t play the blame game. If we did, then we would truly ALL be responsible for this death, because we all take part in and contribute to technological progress. If we lived in a world where everyone took 100% responsibility for their actions, we would live in a much more empowered world.

I understand that it feels like a grey zone; people don’t know how to react to an autonomous machine killing someone. Unlike a human driver, an autonomous vehicle feels no guilt and has no concept that anything bad happened. When humans kill people, we hold trials. When dogs kill people, we put them down. We have recourse; we know what to do. But this is new for us, and we don’t know how to react.

My advice in this situation is that we need to be rational. If self-driving cars kill 100 pedestrians per year but save five thousand pedestrians per year, then clearly they are beneficial for us. Let’s get away from our emotions, and away from our fears of autonomous machines. It’s simple logic: self-driving cars will save many more people than they will kill. They will not be perfect, but they don’t need to be perfect. They just have to be better than human drivers for us to adopt them into society. One pedestrian was killed by an autonomous vehicle on Sunday. Roughly sixteen pedestrians were killed by human drivers today, as they are every day: six thousand per year works out to about sixteen a day.

If you like this article, you might enjoy reading Why Getting A Head Start On Automation Is Necessary