Tolerant Networking (DTN), which is designed to work seamlessly through delays and losses of connectivity. DTN also incorporates cryptography and key management, a sign that space agencies are taking steps to build security into the design of space communication protocols from the start.
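The defining behavior of DTN is store-and-forward custody: rather than dropping data when a link is down (as conventional TCP/IP connections would), a node holds on to "bundles" and forwards them once connectivity returns. The sketch below illustrates that idea only; the class and method names are illustrative assumptions, not part of any real DTN implementation such as the Bundle Protocol.

```python
from collections import deque

class DtnNode:
    """Minimal store-and-forward sketch: the node takes custody of
    bundles while no link is available and forwards them when
    connectivity is restored. Names are hypothetical, for illustration."""

    def __init__(self, name):
        self.name = name
        self.queue = deque()   # bundles awaiting a usable link
        self.link_up = False   # simulated connectivity state

    def send(self, bundle):
        # A down link is the normal case in space, not an error:
        # store the bundle and attempt delivery opportunistically.
        self.queue.append(bundle)
        return self.flush()

    def flush(self):
        # Forward every stored bundle while the link remains up.
        delivered = []
        while self.link_up and self.queue:
            delivered.append(self.queue.popleft())
        return delivered

node = DtnNode("relay-1")
node.send("telemetry:42")   # link down: bundle is stored, not lost
node.link_up = True         # connectivity restored
print(node.flush())         # the stored bundle is delivered
```

A real implementation would add per-bundle lifetimes, custody acknowledgments, and the cryptographic protections mentioned above, but the custody queue is the core idea.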
NASA isn’t the only player in the area of space communication. Elon Musk’s SpaceX plans to launch a network of low-Earth-orbit satellites to provide global Internet access, and to extend that network to communicate with satellites at Mars once its mission to send humans there comes to fruition.
It is easy to imagine how important communication is going to be to enable critical and
risky space missions. Rockets and satellites (and other objects relevant to the mission) are
“things” that are going to be available and accessible on space communication infrastructure.
As NASA and SpaceX move forward with deploying a greater number of satellites to facilitate
networks in space, their architecture will be a ripe target for many threat agents. Terrorists
and competing nation-states are likely to attempt to exploit vulnerabilities that may be present
in network protocols to steal intellectual property and to disrupt space missions. Such security
breaches could result in the loss of human lives or even the failure of humankind to populate
other planets. This is going to be an important area for security researchers to contribute to in
order to make sure we are building our space communication infrastructure securely from the
ground up.
The Dangers of Superintelligence
Irving John Good, a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing, is often quoted discussing the perils of machines achieving greater intelligence than humans:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
Nick Bostrom, author and professor at Oxford, defines superintelligence as “an intellect
that is much smarter than the best human brains in practically every field, including scientific
creativity, general wisdom and social skills.” Bostrom and other leading scientists are worried
that machines capable of superintelligence are going to be difficult to control and that they
may have the ability to take over the world and eliminate humankind.
Well-known intellectuals and leaders such as Bill Gates are worried about superintelligence too: