won’t persuade a lot of people. It’s easier to fool people than to
convince them they’ve been fooled.
We tend to think of social media as neutral—they’re just serving us
stuff. We are autonomous, thinking individuals and can discern truth
from falsehood. We can choose what to believe or not. We can choose
how to interact. But research shows that what we click is driven by
deeply subconscious processes. Physiologist Benjamin Libet used EEG
to show that activity in the brain’s motor cortex can be detected 300
milliseconds before a person feels they have decided to move.^38 We
click on impulse rather than forethought. We are driven by deep
subconscious needs for belonging, approval, and safety. Facebook
exploits those needs and gets us to spend more time on the platform
(its core success metric is time on site) by giving us plenty of Likes. It
sends notifications that interrupt your work or your home life with the
urgent news that someone has liked your photo. When you share an article
that fits your and your friends’ political views, you do it expecting
Likes. The more passionate the article, the more responses you’ll get.
Tristan Harris, former Google design ethicist and expert in how
technology hijacks our psychological vulnerabilities, compares social
media notifications to slot machines.^39 Both deliver variable
rewards: you’re curious: will it be two Likes or two hundred? You
click the app icon and wait for the wheels to turn—a second, two,
three. The anticipation only makes the reward sweeter: you
have nineteen Likes. Will there be more in an hour? You’ll have to check to
find out. And while you’re there, here are the fake news stories that
bots have been littering the information space with. Feel free to share
them with your friends, even if you haven’t read them—you know
you’ll get your tribe’s approval by sharing more of what they already
believe.
The firm is being careful not to inject humans (gasp!) or any real
judgment into the process. It claims that’s an effort to preserve
impartiality—the same reason it gave when it fired the entire Trends
editorial team. To involve humans would supposedly introduce implicit
and explicit biases. But AI has biases of its own. It is programmed, by
humans, to select the most clickable content. Its priorities are clicks,
numbers, time on site. AI cannot distinguish fake news; at best it can
flag a story as suspect based on its origin. Only human fact checkers