him, “What is the third?” He smiled and said, “I just hope the computers
decide to be nice to us.”
There is a lot of fear out there that AI will wipe away humanity. Some
suspect this might happen in a dramatic Terminator 2–type conflagration.
Others worry that some machine will kill us off by “accident,” that an AI
designed to innovate better ways to make toothpicks will somehow discover
that harvesting human bodies is the best way to do it.^16 Bill Gates, Stephen Hawking,
and Elon Musk are just a few of the leading thinkers and scientists who have
crapped their pants at how rapidly AI is developing and how underprepared
we are as a species for its repercussions.
But I think this fear is a bit silly. For one, how do you prepare for
something that is vastly more intelligent than you are? It’s like training a dog
to play chess against . . . well, Kasparov. No matter how much the dog thinks
and prepares, it’s not going to matter.
More important, the machines’ understanding of good and evil will likely
surpass our own. As I write this, five different genocides are taking place in
the world.^17 Seven hundred ninety-five million people are starving or
undernourished.^18 By the time you finish this chapter, more than a hundred
people, just in the United States, will be beaten, abused, or killed by a family
member, in their own home.^19
Are there potential dangers with AI? Sure. But morally speaking, we’re
throwing rocks inside a glass house here. What do we know about ethics and
the humane treatment of animals, the environment, and one another? That’s
right: pretty much nothing. When it comes to moral questions, humanity has
historically flunked the test, over and over again. Superintelligent machines
will likely come to understand life and death, creation and destruction, on a
much higher level than we ever could on our own. And the idea that they will
exterminate us for the simple fact that we aren’t as productive as we used to
be, or that sometimes we can be a nuisance, I think, is just projecting the
worst aspects of our own psychology onto something we don’t understand
and never will.
Or, here’s an idea: What if technology advances to such a degree that it
renders individual human consciousness arbitrary? What if consciousness can
be replicated, expanded, and contracted at will? What if removing all these
clunky, inefficient biological prisons we call “bodies,” or all these clunky,
inefficient psychological prisons we call “individual identities,” results in far
more ethical and prosperous outcomes? What if the machines realize we’d be
much happier being freed from our cognitive prisons and having our
perception of our own identities expanded to include all perceivable reality?