It is this sort of scenario that’s been at the
heart of many science fiction narratives over
the years, from the robot overlords of the
Matrix trilogy to the quiet, calm, and psychotic
computer aboard the spacecraft Discovery One in
2001: A Space Odyssey. But the fact is that this
once disturbingly dystopian vision is actually
becoming reality, as genuine AI begins to creep
into everyday life.
Already, from Boeing’s troubles to the deadly
self-driving Uber crash in Arizona last year,
we are seeing flaws in these systems that cost
lives. Moreover, as traditional programming
gives way to AI, the human input involved
diminishes.
AI effectively continues learning, applying
the logic of its programming to scenarios it
was never explicitly designed to handle. But
would even that have saved the Boeing flights?
Would an AI recognize that its own systems
might be at fault, and hand control to a human
it believes is not in control of the aircraft?
INCREASING COMPLEXITY AT A COST
As technology progresses, this question
will continue to crop up in different sectors,
scenarios, and environments. Already,
comparisons have been drawn between the
Uber crash, the Boeing disasters, and the
malfunctions that led to the Deepwater
Horizon oil spill in 2010. Essentially, in
every instance,
something unexpected occurred within
a control system, and neither human nor
machine knew how to react.