How should a robot car react if it has to decide who survives in an accident? A new algorithm should help.
The big promise of autonomous driving is that it will mean fewer accidents on the roads. An estimated 90 percent of all accidents are due to human error: excessive speed, alcohol and drug consumption or simply carelessness.
Industry and research are therefore working on increasingly sophisticated sensors and algorithms in order to take the steering wheel out of the hands of the human risk factor. But does morality then fall by the wayside? Or can ethical behavior be programmed into a machine?
Programmed morality
Technology ethicist Franziska Poszler and software engineer Maximilian Geisslinger from the Technical University of Munich are convinced that programmed ethics are essential for the social acceptance of autonomous vehicles: the software in our cars must be able to react ethically in unpredictable situations.
The big question is: How should a car behave if an accident can no longer be avoided? Should it protect the occupants or other road users? Old or young people? Researchers at the Massachusetts Institute of Technology (MIT) posed thousands of such questions in the “Moral Machine” experiment.
The result: There are clear cultural differences. In Western countries, children are more likely to be protected, while in China and Japan, older people are more likely to be protected. The question of whether one person can be sacrificed for the benefit of others is also answered differently depending on the culture.
Discrimination against groups of people is prohibited
The experiment raised awareness of the problem of ethical programming for the first time, Poszler emphasizes, but it was not decisive for the programming of their ethics algorithm. Instead, the Munich researchers drew on the ethics recommendations the EU Commission published in 2020, which prohibit discrimination based on age, gender or physical condition.
Their algorithm is built on these recommendations and makes more balanced decisions than previous algorithms, which often had to make either/or choices. “Our focus is always on the question: How do we fairly distribute the risks in road traffic? This is a further development of the question of who should be sacrificed,” emphasizes Maximilian Geisslinger.
Fair distribution of risk
The new algorithm continuously assesses the risk to all vehicles and people on the road. For example, if an autonomous vehicle wants to overtake a bicycle and a truck is approaching in the oncoming lane, the ethics algorithm calculates the potential danger for various driving maneuvers.
Based on this, the algorithm makes an ethical decision, for example slowing down to let the truck pass before overtaking, or keeping a greater distance from the cyclist. The aim is to avoid dilemma situations in which a decision has to be made about life and death.
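To make the idea of risk distribution concrete, here is a minimal sketch in Python. It is not the researchers’ actual implementation; it simply assumes that the risk to each road user is the probability of a collision multiplied by the expected harm, that the vehicle prefers the maneuver which keeps the worst-off road user’s risk lowest (a maximin-style fairness rule), and that maneuvers whose maximum risk exceeds a threshold are rejected. The scenario names, numbers, and the threshold are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    collision_prob: dict  # collision probability per road user (0..1)
    harm: dict            # expected harm per road user (0..1)

def risk(m: Maneuver, user: str) -> float:
    """Risk = collision probability times expected harm for one road user."""
    return m.collision_prob[user] * m.harm[user]

def choose_maneuver(maneuvers: list, threshold: float = 0.1) -> Maneuver:
    """Reject maneuvers whose worst individual risk exceeds the threshold,
    then pick the one that minimizes the maximum risk across all road users."""
    acceptable = [m for m in maneuvers
                  if max(risk(m, u) for u in m.collision_prob) <= threshold]
    candidates = acceptable or maneuvers  # fall back to the least-bad option
    return min(candidates, key=lambda m: max(risk(m, u) for u in m.collision_prob))

# Hypothetical overtaking scenario: wait behind the cyclist, overtake with a
# wide margin, or overtake narrowly despite the oncoming truck.
options = [
    Maneuver("wait",
             {"occupants": 0.01, "cyclist": 0.02, "truck": 0.01},
             {"occupants": 0.2,  "cyclist": 0.5,  "truck": 0.1}),
    Maneuver("overtake_wide",
             {"occupants": 0.05, "cyclist": 0.03, "truck": 0.06},
             {"occupants": 0.4,  "cyclist": 0.6,  "truck": 0.3}),
    Maneuver("overtake_narrow",
             {"occupants": 0.10, "cyclist": 0.20, "truck": 0.15},
             {"occupants": 0.5,  "cyclist": 0.8,  "truck": 0.4}),
]
print(choose_maneuver(options).name)  # -> "wait"
```

In this toy example, overtaking narrowly is ruled out because the cyclist’s risk exceeds the threshold, and waiting wins because it leaves every road user with the smallest worst-case risk. The real system would, of course, derive these probabilities and harm estimates from sensor data and vehicle dynamics rather than from fixed numbers.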
The researchers are currently validating the automated morality in simulations before it is tested on the road next year with the autonomous research vehicle EDGAR. The algorithm, together with the findings of the study, will then be made available as open source.