Why do the points in Figure 2-9 not appear on a straight line? For what value of x does the line break? The multiplication operation (*) appears to be overloaded. It does different things depending upon whether the numbers being multiplied are floating-point numbers, or integers that each fit into a single word of the machine, or integers that are so large that they must each be stored in several words of the machine, or some combination of these.
The first break in the plot occurs for x = {30, 31}, which cannot easily be explained. The remaining plateaus have a more conventional explanation, since they occur at the values (32, 64, 96, 128), which are multiples of the 32-bit word size on the computer on which the trials were executed (namely, one, two, three, or four 32-bit words). As the numbers require more and more words of storage, the time needed to perform multiplication also increases.
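As a rough illustration only (this is not the trial harness behind Figure 2-9, and the helper name time_multiply is ours), the following Python sketch times a single large-integer multiplication as each operand grows by one 32-bit word; on a typical machine the measured cost steps up at roughly these word boundaries.

import time

def time_multiply(bits, trials=100000):
    """Average the cost of multiplying two integers that each occupy `bits` bits."""
    a = (1 << bits) - 1              # largest integer representable in `bits` bits
    b = (1 << bits) - 3
    start = time.perf_counter()
    for _ in range(trials):
        _ = a * b                    # the overloaded multiplication operation
    return (time.perf_counter() - start) / trials

for words in (1, 2, 3, 4):           # operands of one to four 32-bit words
    bits = 32 * words
    print("%4d bits: %.3e seconds per multiplication" % (bits, time_multiply(bits)))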
A benchmark operation is an operation so essential to an algorithm that counting its executions offers a good prediction of the execution time of a program. The benchmark operation of TwoToTheN is *, the multiplication operation.
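To make the idea concrete, here is a Python sketch of a TwoToTheN-style routine (the book's actual implementation is not reproduced here; this version computes 2**n by repeated multiplication purely for illustration) instrumented to count its benchmark operation. It is the count of multiplications, not the loop bookkeeping, that tracks the running time.

def two_to_the_n(n):
    """Compute 2**n by repeated multiplication, counting the benchmark operation."""
    result = 1
    multiplications = 0
    for _ in range(n):
        result = result * 2          # the benchmark operation: one multiplication
        multiplications += 1
    return result, multiplications

value, count = two_to_the_n(128)
print(count)                         # 128 multiplications for n = 128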
One Final Point
We have simplified the presentation of the "Big O" notation in this book. For example, when discussing the behavior of the ADDITION algorithm, which is linear with respect to its input size n, we argued that there exists some constant c > 0 such that t(n) ≤ c*n for all n > n0; recall that t(n) represents the actual running time of ADDITION. By this reasoning, we claim the performance of ADDITION is O(n).
The careful reader will note that we could just as easily have used a function f(n) = c*2^n that grows more rapidly than c*n. Indeed, although it is technically accurate to claim that ADDITION is O(2^n), such a statement provides very little information (it would be like saying that you need no more than one week to perform a five-minute task). To explain why, consider the notation Ω(g(n)), which declares that g(n) is a lower bound for the actual running time, that is, g(n) ≤ t(n). One can often compute both the upper (O) and lower (Ω) bound for the execution time of an algorithm, in which case the appropriate notation to use is Θ(f(n)), which asserts that f(n) is asymptotically both an upper bound (as is the case for O(f(n))) and a lower bound for t(n).
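Stated precisely, these are the standard asymptotic definitions, written here for the running time t(n):

\begin{align*}
t(n) \in O(f(n))      &\iff \exists\, c > 0,\ n_0 \text{ such that } t(n) \le c \cdot f(n) \text{ for all } n > n_0 \\
t(n) \in \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 \text{ such that } t(n) \ge c \cdot g(n) \text{ for all } n > n_0 \\
t(n) \in \Theta(f(n)) &\iff t(n) \in O(f(n)) \text{ and } t(n) \in \Omega(f(n))
\end{align*}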
We chose the more informal (and widely accepted) use of O(f(n)) to simplify the presentations and analyses. We ensure that when discussing algorithmic behavior, there is no better f'(n) that can be used to classify the algorithms we have identified as O(f(n)).