MaximumPC 2003 12




And Now Our Benchmarks

Here’s a look at the real-world, cross-platform apps that we were able to find for testing. Some of the software is rather arcane, and some needs updating to take advantage of modern CPUs and chipsets. Still, all but one app is cross-platform, so this group offers the best chance of apples-to-apples-to-apples comparisons. See page 54 for analysis.


MATHEMATICA 5.0
This app can perform all the ridiculously complex math calculations that intimidate 99.99 percent of the U.S. population. For benchmarking, we used Stefan Steinhaus’ newest notebook script. Among other actions, Steinhaus’ script creates a 1,000 x 1,000 Toeplitz matrix, computes 260 Euler numbers of the polynomial n, and calculates Pi to 3 million digits. All told, the test pushes a grab bag of CPU- and cache-intensive floating-point and integer tasks. From what we can gather, the app makes little to zero use of Hyper-Threading or a second CPU. The developer has a G5-optimized version in the works, but it wasn’t ready at press time. http://www.wolfram.com
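We don’t have Steinhaus’ actual notebook to print here, but the flavor of its Pi-to-millions-of-digits test can be sketched in plain Python. The sketch below is our own illustration, not Steinhaus’ code: it computes Pi with Machin’s formula using pure integer arithmetic (the kind of cache- and ALU-bound work the notebook exercises) and times it with a wall-clock stopwatch, the same way every benchmark in this roundup is scored.

```python
import time

def arccot(x: int, unity: int) -> int:
    """arctan(1/x) scaled by `unity`, via the alternating Taylor series."""
    total = term = unity // x
    x2 = x * x
    n, sign = 3, -1
    while term:
        term //= x2                 # next odd power of 1/x
        total += sign * (term // n)
        n += 2
        sign = -sign
    return total

def pi_digits(digits: int) -> str:
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    guard = 10                      # extra digits to absorb rounding drift
    unity = 10 ** (digits + guard)
    pi = 4 * (4 * arccot(5, unity) - arccot(239, unity))
    s = str(pi // 10 ** guard)      # drop the guard digits
    return s[0] + "." + s[1:]

if __name__ == "__main__":
    start = time.perf_counter()
    p = pi_digits(1000)             # the magazine test runs to 3 million
    elapsed = time.perf_counter() - start
    print(p[:22])
    print(f"1,000 digits in {elapsed:.3f} s")
```

Cranking the digit count up is what turns this from a toy into a benchmark: the big-integer multiplies grow with precision, so CPU speed, cache size, and memory latency all show up in the elapsed time.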

SETI@HOME
SETI@Home—the world’s first mainstream distributed computing project—uses spare CPU cycles on home users’ computers to help search for extraterrestrial life. For our test, we used the command-line version of the app on both platforms. Because each unit the program analyzes is different, we used the benchmarking unit developed by Team Lamb Chop of Arstechnica.com. To run the test, we simply plugged in the unit, and noted how long it took to finish. The SETI@Home coding team has long been criticized for not adding optimizations for the P4 or Athlon XP CPUs, but the team has maintained that it isn’t interested in speed. Instead, it wants accuracy, which the current code provides. For the most part, the app is floating point-dependent, and seems to thrive on faster FSB speeds and low-latency RAM access. http://setiathome.ssl.berkeley.edu
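The stopwatch part of this test is dead simple, and the same harness works for any command-line benchmark. Here’s a minimal sketch; the command shown is a stand-in (the real invocation is the SETI@Home client binary pointed at Team Lamb Chop’s benchmark work unit, per their instructions), so don’t read the arguments as the client’s actual flags.

```python
import subprocess
import sys
import time

def time_command(cmd: list[str]) -> float:
    """Run a command to completion and return wall-clock seconds elapsed."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)  # raises if the process exits nonzero
    return time.perf_counter() - start

if __name__ == "__main__":
    # Stand-in command; substitute the real client + benchmark work unit.
    elapsed = time_command([sys.executable, "-c", "pass"])
    print(f"finished in {elapsed:.2f} s")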

UNREAL TOURNAMENT 2003
We used the retail versions of the game for both OS X and Windows XP Pro, and patched them with the latest available updates. The game, which uses the latest version of the Unreal engine, comes with a set of predefined benchmarks that test “fly-by” modes (where scenes whiz along at dizzying speeds), as well as botmatches. Unfortunately, while comparing Mac test runs with PC runs, we discovered that the benchmarks don’t always run the same content—save for one specific test, the “asbestos” fly-by sequence. Thus we ran this sequence at 800x600 with the audio disabled on all three machines, and recorded average frames per second. Running the test at a low res of 800x600 helps reduce any bottleneck imposed by the video subsystem, and shifts more of the performance burden onto a system’s CPU, RAM, and mobo chipset. http://www.unreal.com
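The engine reports the average frames-per-second figure itself, but it’s worth being clear about the arithmetic: average fps is total frames divided by total time, not the mean of each frame’s instantaneous rate. A quick sketch (our own illustration, with made-up frame times) shows why the distinction matters:

```python
def average_fps(frame_times_s: list[float]) -> float:
    """Average frames per second over a run: frame count / total seconds."""
    return len(frame_times_s) / sum(frame_times_s)

if __name__ == "__main__":
    # Three 10 ms frames plus one 70 ms hitch. Averaging the instantaneous
    # rates (100, 100, 100, ~14 fps) would report ~79 fps; the true average
    # is 4 frames over 0.1 s.
    times = [0.010, 0.010, 0.010, 0.070]
    print(f"{average_fps(times):.1f} fps")  # prints "40.0 fps"
```

The frames-over-time average weights slow frames by how long they actually lasted, which is why a single stutter drags the benchmark number down more than a naive mean would suggest.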

JEDI OUTCAST
This first-person shooter runs a heavily modified Quake III engine. To generate an average frames-per-second score, we use our own custom demo running at 640x480 with audio disabled. We’ve always found that this test is almost entirely CPU-reliant. In fact, it’s mostly immune to videocard hardware upgrades unless you enable antialiasing and extreme visual quality settings. The game itself is OpenGL, and has historically favored nVidia graphics. http://www.lucasarts.com

QUAKE III
This game first came out when P-III 500s and TNT2 cards were considered swift, but it’s still regarded as an excellent CPU benchmark. We ran the patched 1.32 version, and used the built-in Four demo at 640x480 with visual quality set to normal and the audio engine disabled. Again, the low res helps expose CPU performance. As one CPU vendor told us, “Carmack is a human compiler.” In other words, his engine code is so tight, the game always runs faster when you throw more silicon at it. Increases in CPU speed, cache size, and bus speed all pay tangible dividends. The game is also a voracious system-bandwidth hog, and, on the PC side, traditionally favored the P4 until the Athlon 64 FX arrived. http://www.idsoftware.com

INDESIGN 2.0
Adobe’s page layout program is taking the publishing industry by storm. For our tests, we took an actual 13-page feature created in InDesign for our November issue, and converted it into an Acrobat PDF file. This is how our art department prepares files for press, so it doesn’t get any more real-world than this! The PDFing process taps all computer subsystems. http://www.adobe.com

PHOTOSHOP 7.0.1
We used two different action scripts to test performance with the world’s most respected image editing app. The first script was created by MacAddict, and uses several very popular filters and operations, such as Gaussian blur, dust and scratches, unsharp mask, and a file conversion from RGB to CMYK. It must be noted that while all of the script’s filters and operations are popular among creative professionals, a good many of them take advantage of the G5’s unique AltiVec instructions and multithreaded CPU architecture. For this very reason, we created our own second script that uses just about every filter in the stock Photoshop 7.0 install (three filters we would’ve liked to have included forced our script to stop). By employing every filter, we erase the possibility of platform favoritism entirely, though we concede that some filters simply aren’t used very often (or at least shouldn’t be used very often). For the MacAddict script, we used a 50MB PSD file, and for our script, we used a 2MB JPEG. For all tests, we set Photoshop to use the maximum amount of RAM. Photoshop stresses the entire computing pipeline from
(Continued on page 52)