design principles to serve and benefit humans and society as a whole, and to
enhance human capabilities through the technology advances achieved toward
highly automated and autonomous systems (HumanE AI Vision, Society 5.0
Vision). Compliance checking for decision systems must therefore follow
fundamental principles: technical requirements (safety, security, privacy,
reliability, sustainability) and human-oriented AI capabilities grounded in a
deep understanding of complex sociotechnical systems and ethical
considerations.
Decision systems based on third-generation AI are not recommended in
functional safety standards at the time of writing. Particular architectures
that restrict AI configurations to make an AI-based system safer and more
predictable (no continuous machine learning, bias-free training data, guarded/
monitored AI components to block unintended behavior, static neural networks
validated as “black box” elements, adapted safety concepts for AI and ML) are
the subject of ongoing research. “Big Data” collected over time by (IoT)
devices will form an essential part of AI to guide and validate decision-based
systems. Explainability and accountability of machine learning methods are
prerequisites for building a validation and verification environment for
compliance testing of decision systems according to the fundamental principles
mentioned earlier.
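The guarded/monitored architecture mentioned above can be illustrated with a
minimal sketch: a static, pre-validated model is wrapped by a runtime monitor
that checks inputs and outputs against predefined bounds and falls back to a
safe default when a check fails. The Python example below is purely
illustrative and not taken from any standard; the class names, thresholds, and
fallback behavior are assumptions introduced for this sketch.

```python
"""Illustrative sketch of a 'guarded/monitored AI component' (hypothetical
names and thresholds): a runtime monitor wraps a frozen model and blocks
outputs that violate predefined safety bounds."""

from dataclasses import dataclass
from typing import Callable, Sequence, Tuple


@dataclass
class GuardConfig:
    speed_limit_mps: float = 33.0   # assumed operational design domain limit
    min_confidence: float = 0.9     # assumed minimum acceptable confidence


class GuardedDecisionComponent:
    """Wraps a static, pre-validated model; never learns online."""

    def __init__(self,
                 model: Callable[[Sequence[float]], Tuple[float, float]],
                 config: GuardConfig,
                 safe_fallback: float = 0.0):
        self.model = model                  # treated as a validated "black box"
        self.config = config
        self.safe_fallback = safe_fallback  # e.g., minimal-risk maneuver request

    def decide(self, features: Sequence[float]) -> float:
        # Input plausibility check: block obviously invalid sensor data (NaN).
        if any(x != x for x in features):
            return self.safe_fallback

        target_speed, confidence = self.model(features)

        # Output guard: reject low-confidence or out-of-bounds commands.
        if confidence < self.config.min_confidence:
            return self.safe_fallback
        if not (0.0 <= target_speed <= self.config.speed_limit_mps):
            return self.safe_fallback

        return target_speed


if __name__ == "__main__":
    # Toy stand-in for a trained, frozen neural network.
    def toy_model(features: Sequence[float]) -> Tuple[float, float]:
        return sum(features) / len(features), 0.95

    guard = GuardedDecisionComponent(toy_model, GuardConfig())
    print(guard.decide([10.0, 12.0, 11.0]))     # accepted: within bounds
    print(guard.decide([100.0, 120.0, 110.0]))  # blocked: exceeds speed limit
```

The design choice reflected here is that the learned component is never
trusted on its own: every decision passes through deterministic checks whose
behavior can be validated independently of the model.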
Within the dedicated AI standardization group ISO/IEC JTC1 SC42, there is
ongoing work on AI and decision making at a general level, which should be
taken into account and gives important indications of what to consider when
checking decision systems for compliance against high-level goals. This work
primarily covers safety, security, and privacy issues of applied AI and
decision making. Several organizations have set up ethical principles for
future decision systems controlling highly automated/autonomous systems in
human environments in a collaborative or noncollaborative manner. In
particular, the German Ethics Report and the EC Ethics Guidelines set out
generic principles to follow, and a decision system must conform to the “Key
Guidance for Ensuring Ethical Purpose”.
ITS seeks to carefully track, support, and influence as far as possible the
definition and standardization of AI regulations, with a clear focus on
avoiding centralized and dramatically expanded regulation. ITS requires
standards (covering safety, cybersecurity, reliability, availability, and
maintainability) that can be practically applied, provide guidance, and enable
type approval. The AI-related regulations that exist so far are additions to
hardware and software products and thus rely on existing legislative
frameworks, which lack a concrete and detailed plan for handling AI in ITS.
For example, whereas the Food and Drug Administration provides a concrete
framework for drug regulation, the US National Highway Traffic Safety
Administration, as part of the Department of Transportation, issues general
guidance about the operation of autonomous vehicles without defining every
detail. It thus remains up to national authorities to provide implementation
details case by case, an approach intended mainly not to inhibit growth.
Strict safety guidelines and consumers’ and drivers’ privacy are driving the
development of AI solutions in ITS. The need for privacy and transparency at
the same time, as it emerges from the need of insurance companies