Consider the problem of performing multiplication with arbitrarily large integers. Ordinary long multiplication of two 100-digit numbers takes 10,000 single-digit multiplications. Sometimes, it is far more reasonable to count operations on single-digit numbers as the key operations. Of course, when operations on single-digit numbers are taken as key operations, the size of the input is generally measured in terms of the number of digits.
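
To make the operation count concrete, here is a minimal sketch (our own, not the book's) of ordinary long multiplication on lists of digits, counting each single-digit multiplication as a key operation; the function name and the digit-list representation are our choices for illustration.

def long_multiply(a_digits, b_digits):
    """Multiply two digit lists (most significant digit first).
    Returns (product digits, number of single-digit multiplications)."""
    a = a_digits[::-1]                 # work least-significant-digit first
    b = b_digits[::-1]
    result = [0] * (len(a) + len(b))
    mults = 0
    for i, da in enumerate(a):
        for j, db in enumerate(b):
            result[i + j] += da * db
            mults += 1                 # one key operation per digit pair
    for k in range(len(result) - 1):   # propagate carries
        result[k + 1] += result[k] // 10
        result[k] %= 10
    while len(result) > 1 and result[-1] == 0:
        result.pop()                   # strip leading zeros
    return result[::-1], mults

# Two 100-digit numbers: 100 * 100 = 10,000 single-digit multiplications.
product, count = long_multiply([9] * 100, [9] * 100)
print(count)                           # 10000

As the count confirms, the number of key operations grows with the square of the number of digits.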
Some very important issues in modern cryptology hinge on the problem of factoring numbers into primes. Breaking some modern codes involves factoring very large numbers (say, 400-digit numbers) into primes, and that is presumed to take too long to be feasible. (If computers get substantially faster, one just increases the number of digits a bit.) The standard translation of "too long to be feasible" is that there is no polynomial-time algorithm for which the size of the input is measured as the number of digits and the key operations are operations on individual digits.
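
As an illustration of why digit count is the right input size here, consider naive trial division, a sketch of our own (real code-breaking attacks are far more sophisticated): it tries up to about the square root of n candidate divisors, which is roughly 10^(d/2) for a d-digit number, so the work grows exponentially in the number of digits.

def trial_division(n):
    """Return (smallest prime factor of n, number of trial divisions)."""
    divisions = 0
    d = 2
    while d * d <= n:
        divisions += 1
        if n % d == 0:
            return d, divisions
        d += 1
    return n, divisions                # no divisor found: n is prime

# For a 9-digit prime, roughly sqrt(10^9), or about 31,600, divisions are
# needed; every two additional digits multiply the work by about 10.
print(trial_division(999999937))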

Variant 3: Average Time Complexity
Often, worst-case complexity is not particularly important; what matters more is how long the algorithm takes to run on average. (For example, codes that are worst-case difficult to break but, on average, very easy to break might not be of much use.) We will not deal with average time complexity in this book. Average time complexity is harder to compute than worst-case complexity, and there are also problems in deciding how to define the idea. For example, many analyses assume that all data sets of size n are equally likely to be input, which may be an unrealistic assumption.
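
One classic case where that uniformity assumption is easy to state is linear search. This brief sketch (our example) assumes the target is equally likely to be at any of the n positions, giving an average of (n + 1)/2 comparisons rather than the worst-case n.

def comparisons_to_find(data, target):
    """Count the comparisons linear search makes before finding target."""
    for count, item in enumerate(data, start=1):
        if item == target:
            return count
    return len(data)                   # target absent: worst case

n = 1000
data = list(range(n))
average = sum(comparisons_to_find(data, t) for t in data) / n
print(average, (n + 1) / 2)            # 500.5 500.5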

Variant 4: Space Complexity
Another measure of the resources that a program requires is how much computer memory the program takes. In this case, one can study analogues of all the variants discussed previously. One usually measures the space needed by excluding the space needed to represent the input and output.
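
For instance (an example of ours, not the text's), these two palindrome tests return the same one-bit output from the same input, so they differ only in the excluded-input-and-output sense: one uses extra space proportional to the input, the other a constant amount.

def is_palindrome_copy(s):
    """O(n) extra space: builds a complete reversed copy of s."""
    return s == s[::-1]

def is_palindrome_in_place(s):
    """O(1) extra space: only two index variables beyond the input."""
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False
        i, j = i + 1, j - 1
    return True

print(is_palindrome_copy("racecar"), is_palindrome_in_place("racecar"))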

Variant 5: Parallel and Distributed Algorithms
Another area of algorithm research concerns parallel and networked computation. A computer may have many separate processors linked together, or several computers may be scattered across a network with huge network-traffic capacity, and the work can be divided among the various processors so that each executes its part simultaneously. If there are two processors, then the time needed to do a job can usually be decreased, though by no more than a factor of approximately two, and usually noticeably less. Some current research uses a very large number of processors, as many as the programmers can figure out how to use on the individual data items. So, for sorting 1,000 data items, the model might allow 1,000 separate processors.
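
A toy model (with our own invented costs: n log2 n steps to sort n items on one processor, and n extra steps to merge two sorted halves) shows why two processors give less than a factor-of-two speedup: splitting the work saves time, but combining the partial results costs extra.

from math import log2

def sequential_time(n):
    """Model cost of sorting n items on one processor: n * log2(n) steps."""
    return n * log2(n)

def two_processor_time(n):
    """Each processor sorts n/2 items in parallel; merging costs n steps."""
    return sequential_time(n / 2) + n

n = 1000
print(round(sequential_time(n) / two_processor_time(n), 2))   # about 1.82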
All the calculations presented in this book use a sequential model of computation. The time needed on machines using parallel models of computation will normally be substantially less. Analysis of such algorithms, though similar to what we have presented, must account for simultaneous computations. Currently, there are many architectures for parallel computers and many ways of measuring complexity on such machines, depending on such things as whether all processors share a common memory or each has its own.