
COULD WETWARE WIN?

NEURONS VERSUS TRANSISTORS: BUT WHICH IS BETTER?


Is there something special about the human mind that gives it an inherent advantage over any conventional, binary computer? In simple numerical terms, brains are still far superior. Composed of something like 100 billion neurons, the brain has a complexity beyond any existing computer chip, even before you acknowledge that individual neurons are far more complex than binary transistors.
It’s thought each neuron can be connected to as many as
10,000 other neurons, the net result of which is up to 1,000
trillion synaptic connections. Try simulating that on a mining
cluster of RTX 3090 GPUs. Of course, the brain’s detailed
functionality isn’t something that can be easily reduced to
numbers. But it’s still remarkable to think about all that
power and connectivity, and still consider that the brain’s
power consumption tops out at about 20 watts. How do you
like them apples, Intel mobile CPU?
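
To put that arithmetic in one place, here's a minimal back-of-envelope sketch in Python using the article's round figures; the constants are rough estimates, not measurements.

```python
# Back-of-envelope numbers from the paragraph above (rough estimates).
NEURONS = 100e9                  # ~100 billion neurons
SYNAPSES_PER_NEURON = 10_000     # up to 10,000 connections each
BRAIN_WATTS = 20                 # approximate brain power budget

synapses = NEURONS * SYNAPSES_PER_NEURON
print(f"Synaptic connections: {synapses:.0e}")                # ~1e+15 (1,000 trillion)
print(f"Connections per watt: {synapses / BRAIN_WATTS:.1e}")  # ~5e+13
```

That last figure is the one to marvel at: tens of trillions of connections serviced per watt.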
While we can debate the true computing power of biological brains, there's little doubt we can learn a lot from them in terms of efficiency. As for other lessons, biology obviously informs the disciplines of machine learning and AI. That alone may be enough to create new software paradigms that accelerate effective computing power in a way that keeps the implied impact of Moore's Law, an exponential growth in computing power, alive.
A parallel pursuit in this field is biological computers, or
computers built from living cells. Yes, it’s been happening
for years. In 2016, researchers from MIT created a machine based on living cells that could perform simple computational operations, as well as store and recall data. It's early days for this area of research, and there's no telling where it will lead.
lead. But it’s not impossible that biological computers could
step in just when more conventional efforts involving silicon
or carbon nanotubes reach their physical limits.

As for the more exotic possibilities, one of the most exciting involves carbon nanotubes, or CNTs for short. Composed of one-atom-thick sheets of graphene rolled into tubes, they're excellent semiconductors and have the potential to scale feature densities beyond silicon. One of the most significant advantages of CNT-based field-effect transistors, or CNFETs, is efficiency: they are believed to be fully an order of magnitude more efficient than their silicon-based equivalents.
That’s a huge advantage, given that power consumption,
leakage, and heat are becoming ever-greater problems in
existing chips. It also opens the door to much more complex
3D chips than are possible with silicon semiconductors. While
stacking multiple layers of silicon transistors is technically
possible, doing that within reasonable power and thermal limits
is a different matter altogether.
By way of example, Intel’s latest Alder Lake CPUs have been
measured spiking to over 300 watts. Imagine multiple layers of
transistors with that kind of thirst. However, if each layer only
consumed one-tenth of that power, then you’ve got something
interesting. There’s a snag, of course. CNT semiconductors are
incredibly hard to manufacture. However, in 2019 engineers at
MIT managed to create a 16-bit processor composed of 14,000
CNT transistors capable of executing actual code. The feature size was relatively large, with a gate length of 130nm, leaving it a fair way behind the density of existing silicon semiconductors.
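
As a rough illustration of why the efficiency gap matters for stacking, here's a hypothetical Python sketch that simply multiplies per-layer power by layer count, using the article's round numbers; real stacked chips would share power delivery and cooling in more complicated ways.

```python
# Hypothetical power budget for stacked 3D chips, using the article's
# round numbers: a silicon layer spiking to ~300 W (Alder Lake-style)
# versus a CNFET layer at roughly one-tenth of that.
SILICON_LAYER_W = 300
CNFET_LAYER_W = SILICON_LAYER_W / 10   # "an order of magnitude more efficient"

for layers in (1, 2, 4, 8):
    print(f"{layers} layer(s): silicon ~{layers * SILICON_LAYER_W:.0f} W, "
          f"CNFET ~{layers * CNFET_LAYER_W:.0f} W")
```

Eight silicon layers at full tilt would want around 2,400 watts; eight CNFET layers, a far more plausible 240 watts.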
It’s still early days for CNT technology. Only six years ago,
the best anyone had managed was 178 CNFETs in a single chip.
Extrapolate that scaling and it won’t be long before Moore’s
Law’s two-year density doubling cadence looks pedestrian in
comparison. It’s optimistic to think that Moore’s Law isn’t just
alive, but set to accelerate. But the possibility speaks to the fact
that it’s all to play for. You could say there’s Moore to come.
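
To see how aggressive that extrapolation is, here's a small Python sketch; it assumes the 178-CNFET chip dates to roughly 2016 and MIT's 14,000-transistor processor to 2019, which are the article's two data points.

```python
import math

# Implied doubling cadence of CNFET integration between the article's
# two data points: 178 CNFETs (~2016) and 14,000 (2019).
early, later = 178, 14_000
years = 2019 - 2016

doublings = math.log2(later / early)            # ~6.3 doublings
months_per_doubling = years * 12 / doublings    # ~5.7 months
print(f"{doublings:.1f} doublings in {years} years")
print(f"One doubling every ~{months_per_doubling:.1f} months "
      f"(Moore's Law cadence: ~24 months)")
```

Two data points make for a shaky trendline, but even a fraction of that pace would outstrip silicon's cadence.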

AMD could start shipping Ryzen 7000 series CPUs later this year. © TSMC
