"The protection of people has priority over all other considerations of utility" - this is one of the main rules of the Ethics Commission "Automated and Connected Driving" set up by the German government. Automated and connected technology should avoid accidents as much as possible, i.e. prevent critical situations from arising in the first place.
How much of the human is in AI? A commentary by Prof. Dr. Oliver Mayer.
But how does such a system behave in a so-called dilemma situation, i.e. when an automated vehicle faces the "decision" between two evils that cannot be weighed against each other? Whom should the AI primarily protect in a conflict situation: the vehicle's occupants or external road users?
And what happens if AI systems one day become more intelligent than humans and develop motives of their own? "Then they would probably make decisions that favor their peers and possibly harm humans," says computer scientist Prof. Dr. Fred Hamker, who researches the functioning of the brain at Chemnitz University of Technology with the aim of developing novel, intelligent, cognitive systems. The scientist rightly sees danger in such a scenario. Until fully automated driving goes into large-scale production, there are still some exciting questions to be answered!