it’s a CPU-centric workload and the GPU’s not
running, we can bias the power over to the
CPU, and for a GPU, we can obviously bias
the power over to the GPU.”
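In rough terms, that biasing works like the minimal C sketch below. This is our own illustration, not Intel's code: the shared budget, the utilization fields, and the proportional split are all assumptions made just to show the idea of leaning a fixed power budget toward whichever chip is busy.

/* Illustrative only -- not Intel's power-sharing implementation.
 * A fixed package budget is split in proportion to how busy the
 * CPU and the discrete GPU each are at the moment. */
typedef struct {
    double total_budget_w;   /* shared power budget in watts (assumed) */
    double cpu_load;         /* CPU utilization, 0.0 - 1.0 */
    double gpu_load;         /* GPU utilization, 0.0 - 1.0 */
} power_state;

static void bias_power(const power_state *s, double *cpu_w, double *gpu_w)
{
    double load = s->cpu_load + s->gpu_load;
    if (load == 0.0) {                      /* idle: split the budget evenly */
        *cpu_w = *gpu_w = s->total_budget_w / 2.0;
        return;
    }
    /* Give each die a share of the budget proportional to its load,
     * so a CPU-only workload gets nearly the whole budget, and a
     * GPU-heavy one shifts it the other way. */
    *cpu_w = s->total_budget_w * (s->cpu_load / load);
    *gpu_w = s->total_budget_w * (s->gpu_load / load);
}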
We’ve seen this before with Intel’s
competitors, specifically the intriguing
SmartShift technology (go.pcworld.com/shif)
that AMD debuted earlier this year.
Unfortunately, conservative planning by PC
makers (go.pcworld.com/sm21) confined the
technology to a single Dell notebook, the
Dell G5 15 SE (go.pcworld.com/g5se).
Intel’s Deep Link works in a similar manner.
Here, Intel’s software framework is aware that
there’s an Iris Xe Max alongside the Tiger Lake
CPU. It leverages the combined AI instructions (VNNI on the CPU, DP4a on the Xe Max GPU) and the multiple media encoders on each piece of silicon, putting them all to work at the same time.
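To make the AI part concrete, here's a small, purely illustrative C sketch of the packed 8-bit dot product that instructions like VNNI (on the CPU) and DP4a (on the GPU) accelerate for inference work. The function name, the byte packing, and the test values are our own, not Intel's.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: the kind of operation a DP4a-style instruction
 * performs in one step -- multiply four packed 8-bit values from each
 * operand and accumulate the sum into a 32-bit integer. */
static int32_t dp4a(uint32_t packed_a, uint32_t packed_b, int32_t acc)
{
    for (int i = 0; i < 4; ++i) {
        int8_t a = (int8_t)((packed_a >> (8 * i)) & 0xFF);
        int8_t b = (int8_t)((packed_b >> (8 * i)) & 0xFF);
        acc += (int32_t)a * (int32_t)b;
    }
    return acc;
}

int main(void)
{
    /* bytes (1,2,3,4) dotted with (1,1,1,1) = 10 */
    printf("%d\n", dp4a(0x04030201u, 0x01010101u, 0));
    return 0;
}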
This can play out in several ways. In one
demonstration, for example, Intel used Topaz Labs' Gigapixel upscaler to add detail to a 1.4MP image, upscaling it to the equivalent of 23MP. Intel pitted a 10th-gen Ice Lake/GeForce MX350 notebook using TensorFlow against a Tiger Lake/Iris Xe Max system and found that its system finished upscaling multiple photos seven times faster. (Intel
executives said they didn’t have a GeForce
MX450 system [go.pcworld.com/m450] to
test.) In a second test, Intel compared a Core i9-10980K system with an RTX 2080 GPU against a Core i7-1165G7 with Iris Xe Max graphics, transcoding 10 one-minute clips from 4K/AVC to 1080p/60 HEVC using HandBrake. Intel's system finished 1.78 times faster.
INDUSTRY-LEADING ENCODING GETS BETTER