Forbes Asia — May 2017

TECHNOLOGY | MANAGING CATASTROPHES

After Fukushima
How a disaster taught a great deal about risk management in cybersecurity.

BY WILLIAM H. SAITO

On March 11, 2011, a massive earthquake ripped across the northeastern coast of Japan. Terrible as it was, the tsunami that followed led to an even more terrifying event: the crippling of the Fukushima Dai-ichi Nuclear Power Plant. The release of radiation, meltdowns and evacuations that followed prompted a massive investigation into what went wrong.
I was appointed chief technology officer for the Fukushima Nuclear Accident Independent Investigation Commission (NAIIC), an ad hoc body reporting to the national legislature. The job gave me a unique opportunity to examine this catastrophe in detail and see its multiple causes. Looking at the long chain of errors and misjudgments behind it made me rethink security and risk management. How could the risks have been so widely discounted and so ineptly managed? What steps were taken after the accident to limit the damage, and what systems were established to prevent a similar situation?
After the disaster, my perspective had changed, and I began to view the whole field of IT security through the broader lens of risk management. I realized that talking about security from a purely technical perspective misses the big picture. My primary task had been to design a new, secure communication system and collaboration platform for the NAIIC team of close to 150 personnel. My background in entrepreneurship and cybersecurity would certainly be tested.
The NAIIC was Japan’s very first independent accident investigation authority established by the legislature. It was met with a lot of skepticism, and I had to push bureaucrats beyond their comfort zone to create a new entity and new ways of doing things. One of my biggest goals was maintaining the cybersecurity of an investigation staffed by outsiders who had suddenly been thrown into the work, where IT literacy was all over the map and there was no time for education or security training. To speed the workflow, I had to allow people to use familiar tools and applications such as printing, emailing and file sharing. I also had to assume that we would be criticized by media and other interests, and that people would naturally be careless and misplace USB sticks, lose laptops and forget cellphones. While investigating a disaster, I was trying to create a resilient system from scratch on a very limited budget. Ultimately, we were able to create a very secure working environment that used everything from digital certificates to encrypted laptops, while maintaining a high level of efficiency. Even more importantly, when data-loss incidents occurred, people quickly reported them, which helped create an atmosphere of trust that strengthened our security.
One moral of this story is that it’s possible to do security on the cheap without sacrificing usability. But implementing IT security is not enough; it misses the critical component of risk management. Real security lies in maximizing our field of view and expanding our thinking.
Not being a nuclear safety or risk management expert, I also tried to contribute to the NAIIC by studying historically significant disasters as varied as the sinking of the Titanic, the Challenger explosion, the nuclear crises at Three Mile Island and Chernobyl, and many others. In the Titanic sinking, for instance, a raft of failures apart from the collision with an iceberg (inferior construction methods, equipment shortages, management and regulatory errors) has been blamed for the loss of over 1,500 lives. But what I realized was that all these catastrophes had one factor in common: all came with tell-tale signs. Managers had tried to achieve a false level of “perfection,” in the process losing valuable time and a thorough grasp of the big picture. In each case the relevant engineers saw the potential for problems and warned their superiors, who in turn dismissed the warnings due to normalcy bias.
Normalcy bias has been described in studies of disaster psychology as an unwillingness to recognize the urgency of a crisis or to acknowledge that a crisis could happen. It can manifest in survival situations as well as in planning for the worst. In the September 11, 2001 terrorist attacks in the U.S., 90% of survivors did not immediately evacuate the World Trade Center after it was struck by planes, instead choosing to save their work, shut down their computers or go to the washroom, according to a three-year academic study.
In the case of Fukushima, warnings had been issued and ignored, and a catastrophe ensued. The commission concluded that the plant operator, the government and regulatory bodies “all failed to correctly develop the most basic safety requirements” for such a disaster. What Fukushima taught me, and what I continue to remind myself, is that when it comes to cybersecurity, Japan does not have a monopoly on this type of culture. People will always take shortcuts. That’s why we must take to heart the a priori precepts of risk management: (a) people make mistakes, (b) machines eventually break down and (c) accidents inevitably happen.
With that in mind, here are some lessons we all must learn as we grapple with how to prevent cyberattacks and manage risk in security:

We can’t spend forever trying to make things perfect. We waste a lot of time and miss the forest for the trees. Murphy’s Law is a con-

