map the site of the march. Hours of research went into transforming that data into a vision of the mall from five decades ago, checking the period accuracy of every building, bus or streetlight set to be digitized. Activists who participated in the real march were consulted, as were historians, to help re-create the feeling of being there, and archived audio recordings from that day fleshed out the virtual environment.
And then there was the “I Have a Dream” speech.
Generally, to control digital doppelgängers, an actor
dons a motion-capture suit along with a head-mounted
camera pointed at the face. Where hundreds of dots
were once necessary to chart facial movements, today’s
real-time face tracking uses computer vision to map a
person’s face—in this case, that of motivational speaker
Stephon Ferguson, who regularly performs orations
of King’s speeches. The digital re-creation of the civil
rights leader requires of its audience the same thing
Ferguson’s rendition does: a suspension of disbelief and
an understanding that, while you may not be seeing the
person whose words you’re hearing, this is perhaps the
closest you’ll ever get to the feeling of listening to King
speak to you.
Even so, it took seven animators nearly three months to perfect King’s movements during the segment of his speech that is included in the experience, working with character modelers to capture his likeness as well as his mannerisms, including his facial tics and saccades (unconscious, involuntary eye movements).
“You cannot have a rubbery Dr. King
delivering this speech as though he was
in Call of Duty,” says The March’s lead
producer, Ari Palitz of V.A.L.I.S. “It
needed to look like Dr. King.”
Digital life after death has raised ethical questions before, especially when figures have been used in ways that seemed out of keeping with their real inspirations. King isn’t the first person to be digitally reanimated, and he won’t be the last, so these questions will only become more common, says Jeremy Bailenson, founder of Stanford’s Virtual Human Interaction Lab. “What to do with one’s digital footprint over time has got to be a part of the conversation about one’s estate,” he says. “It is your estate; it is your digital legacy.”
So for The March, though some creative license was taken (the timeline of the day is compressed, for example), every gesture King made had to be based on the truth. Only then would the result be, in its own way, true.
In Los Angeles last December, I put on the headset to see a partially completed version of the entire experience, including a one-on-one with the virtual King, represented as a solitary figure on the steps of the Lincoln Memorial. I gazed at his face in motion, and noticed a mole on his left cheek. It was inconspicuous, the black pinpoint accenting his face. I stepped forward.
When I approached the podium, I was
met with a surprise—Dr. King looking
right at me. His eyes were piercing, his
face a mixture of confidence, austerity
and half a million polygons optimized
for viewing in a VR headset. He appeared
frozen in time, and I found myself
without words. Meeting his gaze was
more challenging than I’d assumed it
would be.
It was then I realized how my view of him had been, for my whole life, flattened. I’d experienced his presence in two dimensions, on grainy film or via big-budget reenactments. How striking to see him, arms outstretched, voice booming in my ears, in three dimensions, all in living color. “This is awesome,” I eked out. He didn’t hear me.
IN HIS WORDS: Ferguson matches his motions to King’s real delivery while reciting the “I Have a Dream” speech at Digital Domain
MARCH ON: Find out more at time.com/the-march