PC PowerPlay - Issue 275, 2019


offering more features (collision detection, cloth
simulation, etc.) combined with more
efficient processing.
Around 2004, a Swiss company called
NovodeX built an engine that attracted the eye
of semiconductor manufacturer Ageia. NovodeX
had come up with a way of using a dedicated
chip to process physics calculations, and in 2006,
Ageia decided the world was ready for a product
built around that chip.
They called it a Physics Processing Unit, or
PPU, and as the name suggests it would offload
the whole of the physics processing from the
CPU. It was the first time the idea of doing this in
hardware had ever been seriously discussed.
Ageia partnered with ASUS, BFG, and ELSA
(remember those last two?) to build and
distribute PhysX boards. ASUS sent PCPP one for
review when they first arrived. We duly plugged it
in and the results were... imperceptible?
That’s because having a PPU only matters
to games that have been developed to take
advantage of the relevant physics SDK. Put it this
way: NVIDIA has a hard row to hoe getting us
to buy RTX cards while very few ray-tracing
enabled games are even out yet, sure. But at
least an RTX 2070 still works as a normal
video card!

The physical PhysX card did nothing to
actually improve the performance of games
that didn’t also use the PhysX engine. Worse,
because of the way the early version of PhysX
was written, it didn’t do that much to improve
games that did use the engine, either.
PhysX cards sat on shelves, ignored by
gamers. It looked like physics acceleration was
about to die, almost in the same year it was
born, but then Ageia, PhysX, and the future of
hardware-accelerated physics were saved by
possibly the last people you’d expect - NVIDIA.

CORE ENABLERS
Ageia was confident in its development of the
PPU because the two big GPU companies -
NVIDIA and ATI at the time - hadn’t announced
anything about hardware physics support. It
was a niche ripe for exploitation. Ageia knew
that hardware physics was going to be huge,
and they were going to own it. They were
half-right.
Because what Ageia presumably didn’t
know was that both NVIDIA and ATI were
deep into the process of creating a whole new
kind of thing - the General Purpose GPU.
The GPGPU concept allowed GPUs to
do things other than merely sling graphics.
Instead of focusing on adding support for
certain visual effects, the focus shifted to
doing certain kinds of data processing... by
treating data as if it were graphics.
This shift in thinking is a big part of what led
to a renaissance in supercomputing - and the
creation of hugely powerful machines made
up of thousands of individual cards that were,
essentially, just slightly fancier GPUs.
The GPGPU idea also made it possible to
run physics calculations, in hardware, on the
video card. So just like that, the entire concept
of the PPU was made obsolete. You didn’t
need one then, and the future of GPGPUs
meant you would never need one, because
your video card would be able to do the PPU’s job.
Counter-intuitively, this was a stroke of luck
for Ageia. Rather than having to stress about
near-zero sales figures and suffer through the
ignominy of bankruptcy, in 2008 they were
instead bought up by NVIDIA.
The PhysX card was axed, of course, but
the PhysX engine lives on. NVIDIA’s GPUs can
run all the physics calculations in hardware -
indeed, if you’re an NVIDIA user today, you’ve
probably seen the PhysX drivers listed when
you do your semi-regular system check for
dodgy spyware.
Flip ahead to 2019, and hardware physics is
just another part of making a game. There are
a bunch of different engines and APIs, some of
which only do a specific thing (like cloth). The
PhysX SDK itself is up to version 4.0 and - in
keeping with NVIDIA’s general push toward
free software running on proprietary hardware
- it’s open source.
ANTHONY FORDHAM


It was Valve, with Half-Life 2’s gravity gun, that
really showed what a physics engine could do.

Yeah, the game wasn’t much, but its
physics were groundbreaking.