Richard's blog

Self-Driving Cars and Ethics

In recent months we have seen articles about self-driving cars and ethics. The question the research asks is whom the car should kill: a group of people, the driver of the car, or others. The question seems perfectly logical if you have never driven a car, because then you might believe that accidents are simply unavoidable.

So the researchers posed these kinds of ethical dilemmas to several hundred workers on Amazon’s Mechanical Turk to find out what they thought. The participants were given scenarios in which one or more pedestrians could be saved if a car were to swerve into a barrier, killing its occupant, or into a pedestrian.


When you learn to drive in Switzerland you need to spend a number of hours being made aware of the dangers of the road. During these lessons you learn about where to look, about how to gauge people’s age, attention and intention. You are trained to look at their eyes and their body language, to anticipate their actions.

Road awareness also extends to road conditions. Is it raining, is it snowing, are you in the middle of a drought? These questions matter because stopping distances, the risk of skidding and other dangers increase in poor conditions.

We also find that there are zones where drivers are expected to be particularly attentive and may be required to drive more slowly. Schools, pedestrian zones and other such areas expect drivers to slow down and be able to stop instantly if the need arises.

In light of everything that we are expected to consider as human drivers it makes sense to provide autonomous cars with the same knowledge and intelligence. If there has been a drought for six months and rain is falling then the car’s software should anticipate the longer stopping distances and the chance of skidding. The car should slow down.
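The condition-aware slowdown described above can be sketched in a few lines of code. This is a minimal illustration only: the function names, friction coefficients and the idea of holding stopping distance constant are my own assumptions, not part of any real driving stack.

```python
# Hypothetical sketch: adapting target speed to road conditions so that
# stopping distance does not grow beyond the dry-road baseline.
# Friction values are illustrative assumptions, not measured constants.

def stopping_distance(speed_ms: float, friction: float) -> float:
    """Braking distance in metres: v^2 / (2 * mu * g)."""
    g = 9.81
    return speed_ms ** 2 / (2 * friction * g)

def adjusted_speed(base_speed_ms: float, raining: bool, after_drought: bool) -> float:
    """Reduce target speed when conditions lengthen the stopping distance."""
    dry_friction = 0.7            # typical dry asphalt (assumption)
    friction = dry_friction
    if raining:
        friction = 0.4            # wet asphalt (assumption)
        if after_drought:
            friction = 0.25       # oil/dust film on first rain after a dry spell
    # Keep stopping distance equal to the dry baseline:
    # v'^2 / (2*mu'*g) = v^2 / (2*mu*g)  =>  v' = v * sqrt(mu' / mu)
    return base_speed_ms * (friction / dry_friction) ** 0.5
```

On this model, first rain after a long drought forces a larger slowdown than ordinary rain, which matches the intuition in the paragraph above.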

When they decide on the ethics of self-driving cars they should spend time writing code that is friendly and safe to cyclists using the roads. The software should slow down and wait for a safe place to overtake, and it should overtake at a safe distance. The same is true of cars near horses and other animals: you don’t want self-driving cars to scare animals.
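The overtaking rule above amounts to a simple decision: wait unless both the clearance to the cyclist and the clear road ahead are sufficient. The thresholds and function name below are hypothetical, chosen only to illustrate the shape of the logic.

```python
# Hypothetical sketch of the "slow down and wait, then pass wide" rule.
# Threshold values are illustrative assumptions, not regulatory figures.

MIN_CLEARANCE_M = 1.5   # side clearance to give a cyclist when passing
MIN_CLEAR_ROAD_M = 80.0 # clear road needed to complete the manoeuvre

def overtake_decision(clear_road_m: float, lateral_clearance_m: float) -> str:
    """Decide whether it is safe to overtake a cyclist."""
    if clear_road_m < MIN_CLEAR_ROAD_M or lateral_clearance_m < MIN_CLEARANCE_M:
        return "slow down and wait"
    return "overtake at a safe distance"
```

The point is not the numbers but the default: when either condition fails, the car waits rather than squeezing past.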

As a driver I am expected to anticipate all dangers and threats to life. It is for me to decide to drive at or below the speed limit. It is for me to assess the risk of someone crossing the road, of someone turning left at a roundabout, and more. If I were to make a mistake that cost someone their life, it would be my fault according to the insurers, because I had failed to adapt to the situation. Self-driving cars should be programmed to anticipate all problems and issues, and be held to the same standard as human drivers.

These problems cannot be ignored, say the team: “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”

Self-driving cars should under no circumstances cause death. They should never face the decision. If they do face it, then the programmers have not spent enough time consulting with drivers, emergency services and road safety professionals. Death should be eliminated from the equation.
