User acceptance and ethics of ITS Chapter | 7
passengers, people from the first group were more hesitant to put the pedestrian at risk, even if he had stepped into traffic illegally. Another difference was that the first group was more inclined to save a younger life instead of an older one, compared to the second group. People from countries with stronger institutions (e.g., Finland and Japan) tend to sacrifice jaywalkers rather than their passengers or other drivers more often than people in countries with weaker institutions (e.g., Pakistan or Nigeria). Finally, the scenarios that examined the correlation between decisions and a country's economic inequality revealed that, in countries with a significant economic gap, the decision inclines more toward sacrificing a poor person in order to save a rich one. Although not all of these findings can serve as input to a machine learning algorithm that makes moral decisions in the fully autonomous vehicle case, they are very interesting because they reveal the ethical diversity around the world, which will certainly affect the acceptance of ITS.
The problem behind such life-critical decisions is that fully autonomous vehicles rely on the algorithms they run and, in turn, on the moral system that their developers programmed into them. In an attempt to begin the discussion around the moral decisions that fully automated vehicles have to make (as the National Highway Traffic Safety Administration imposed for Level-5 autonomous vehicles), Mercedes-Benz announced in 2016 that its algorithm would prioritize passenger safety, but soon retracted this statement, revealing the gap that still exists in the discussion of AVs' moral and ethical decisions (Shariff, Rahwan, & Bonnefon, 2016). A moral system that is not properly justified can only raise more dilemmas, which decreases trust in autonomous vehicles.
Since the decisions of an AV depend on the code or the machine learning models that it employs, it is important to open their logic to the public in order to increase trust. It is also important to define an accountability policy that determines who is responsible for a bad decision that results in a deadly crash and to what degree the optimization of human casualties is acceptable. All these questions are subject to further research, which should also examine the possibility of personalizing driving decisions so that they match the ethical and moral perceptions of the driver, who will be accountable for any decision of the automated driving system.
References
Acheampong, R.A., Thomoupolos, N., Marten, K., Beyazıt, E., Cugurullo, F., & Dusparic, I. (2018). Literature review on the social challenges of autonomous transport. STSM Report for COST Action CA16222 "Wider Impacts and Scenario Evaluation of Autonomous and Connected Transport (WISE-ACT)".
Adell, E., Várhelyi, A., & Nilsson, L. (2018). Modelling acceptance of driver assistance systems: Application of the unified theory of acceptance and use of technology. In Driver Acceptance of New Technology (pp. 23–34). Boca Raton, FL: CRC Press.
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., et al. (2018). The moral machine
experiment. Nature, 563 (7729), 59.