
Finally, we briefly introduce another topic: Are there functions that are, in principle,
not computable by any computer program on any computer, no matter how big the machine and
no matter how long we wait for the program to finish? We will prove Turing's fundamental
result on the unsolvability of the halting problem.

5.1 Comparing Growth Rates of Functions


There are three ideas to keep in mind as we develop a formal setting for determining the
complexity of an algorithm:


  1. Since you can time the program on only a finite (usually small) number of input data
    sets, how can you (reasonably) estimate the time on other input sets? For example, can
    you (reasonably) extrapolate from the time you have taken to run your program on test
    data sets to the expected time on real (and, probably, much larger) data sets?

  2. Computer A may do one operation (say, swapping two integers) twice as fast as com-
    puter B. Computer B may do some other operation (say, comparing two real numbers)
    twice as fast as computer A. Can you (reasonably) compare the speeds of two algo-
    rithms without knowing on which machine each program will be run?


  3. Can you design your algorithm so that your program will "scale well," in other words,
    so that as the size of the data set increases, the time needed by the program will not
    increase too rapidly? You may have seen algorithms to search for a number in a sorted
    array (a special kind of list). Two standard methods for this purpose are called linear
    search and binary search; a sketch of both appears after this list. When these two
    algorithms are presented, people justify BinarySearch as being obviously faster. How
    can they (reasonably) say this?
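To make the comparison in item 3 concrete, here is a minimal sketch of both searches in
Python; the function names and the choice of a sorted Python list are illustrative
assumptions, not part of the text's later development.

    def linear_search(a, target):
        """Scan the sorted list a from left to right; return an index of target, or -1."""
        for i, value in enumerate(a):
            if value == target:
                return i
        return -1

    def binary_search(a, target):
        """Repeatedly halve the portion of the sorted list a that could contain target."""
        low, high = 0, len(a) - 1
        while low <= high:
            mid = (low + high) // 2
            if a[mid] == target:
                return mid
            elif a[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1

Linear search may have to examine every element, while binary search discards half of the
remaining candidates on each probe; the rest of this section develops the tools needed to
make that intuition precise.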

We would like to approximate how the time taken by a program increases as the size
of the input data set grows. Moreover, we would like to make an approximation that we
can compute by looking at a (detailed) algorithm alone, so that the program designer can
make reasonable choices about which algorithms to use without having to code an actual
algorithm.
Before we are ready to approximate the growth rate of algorithms, however, we need
to understand how the growth rate of functions can be compared. Once we understand
how functions can be compared, we will show how a function can be associated with the
running time of an algorithm. We then compare algorithms by comparing the functions that
describe their running times.
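As a preview of how such a function might be associated with an algorithm, the following
sketch tabulates the worst-case number of probes for the two searches described earlier.
The decision to count only element comparisons, and the helper names, are assumptions made
purely for illustration.

    def worst_case_linear(n):
        # Linear search may compare the target with every one of the n elements.
        return n

    def worst_case_binary(n):
        # Binary search performs one probe each time it halves the remaining range.
        count = 0
        while n > 0:
            count += 1
            n //= 2  # integer halving of the number of remaining candidates
        return count

    # Tabulate the two associated functions for a few input sizes n.
    for n in (10, 100, 1000, 10000):
        print(n, worst_case_linear(n), worst_case_binary(n))

The first function grows in direct proportion to n, while the second grows only as the
number of times n can be halved, which is the kind of difference the next subsection gives
us a measure for.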

5.1.1 A Measure for Comparing Growth Rates

There has been much discussion among computer scientists about how to measure the
growth rate of algorithms. In this area, two major simplifications are useful:


  1. If we have two algorithms, A1 and A2, where A1 always runs faster on "small" inputs
    and A2 always runs faster on "large" inputs, then we (at least under this measure)
    prefer A2: It is "almost always" faster. This measure ignores differences between the
    two functions on "small" inputs, and here, "small" is taken to mean "smaller than any
    fixed finite size." (This is a huge issue to ignore, but the approximation turns out
    to be highly useful.)
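As a small worked example of this first simplification, suppose (hypothetically) that A1
takes n*n steps and A2 takes 100n steps on an input of size n. A1 wins for every n below
100, yet under this measure we prefer A2, because A2 is faster for every n above 100, and
"smaller than 100" counts as "small." A quick sketch of the crossover:

    # Hypothetical step counts: A1 takes n*n steps, A2 takes 100*n steps.
    for n in (10, 50, 100, 200, 1000):
        a1, a2 = n * n, 100 * n
        faster = "A1" if a1 < a2 else "A2" if a2 < a1 else "tie"
        print(f"n={n}: A1={a1}, A2={a2}, faster: {faster}")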