88 PART | II ITS users
brings and the social influence of this technology, have a major impact on their
intention to adopt it.
7.3 Ethical issues related to highly automated vehicles
The impressive performance of artificial intelligence and machine learning in
several tasks, such as computer vision and state-space search, where they
match or surpass human performance, has inflated expectations for machines'
ability to make decisions. At the same time, it has raised concerns about
cases where moral decisions must be made and about the ethical behavior of
machines. For the moment, when self-driving vehicles detect a possible danger
or a risky condition, they rely on the driver to resolve it. But even a driver
who has to decide between swerving to avoid a pedestrian, thereby endangering
himself and the passengers, and keeping the passengers safe while putting the
pedestrian at risk, can hardly be confident about
his decision. In a fully automated vehicle, such moral decisions must be made
automatically, which requires humans to agree on a universal moral code. This
is an almost impossible task, one for which more than 2 million people have
been surveyed in an attempt to reach a valid consensus.
Although self-driving cars seem to be safer than human-driven cars in normal
conditions, the difficulty still lies in unavoidable accidents: precisely the
accidents that a self-driving car still relies on the human to handle. In such
stressful cases, where even the driver does not have enough time to respond,
the fully autonomous vehicle has to take the lead and react reasonably. Even
then, once all crash-avoidance alternatives have been examined and the AV has
to choose among two or more crash outcomes, the autonomous vehicle has to make
moral decisions for which it has not yet been prepared. Although machine
ethics already provides a moral basis for AVs (e.g., self-driving cars can
break the law in order to save a life), several complicated scenarios remain
to be resolved.
The survey of Awad et al. (2018), covering more than 130 countries (with at
least 100 respondents each), revealed a large variety of moral principles
across countries. The survey questioned drivers' decisions in 13 hard moral
dilemmas that always result in someone's death. The participants had to choose
between the casualties in each case: for example, between a young and an
elderly person, a single person and a group of people, a child and a pregnant
woman, and so on. Despite the rarity of such dilemmas in a driver's life, the
aim of the study was to understand the variety of ethical and moral
backgrounds and to highlight the difficulty of setting up a universal moral
code for machines to follow. A clustering of the responses revealed three main
groups: one that included mostly North Americans and Europeans of Christian
religious origin, one that included East Asian populations, mostly Islamic or
Confucian, and a last group consisting mostly of South Americans, French, and
people from French colonies. It is indicative that
in the scenario that sets the dilemma of choosing between the pedestrians or the