Discrete Mathematics for Computer Science

Analysis of Algorithms


We have frequently used the word algorithm to mean the idea, or plan of attack, behind
a computer program. More precisely, an algorithm can be thought of as a sequence of
steps, in which each step is unambiguously defined and can be executed mechanically in
some finite amount of time. An algorithm may have some input data, and it must have
some output. Many definitions used in explaining algorithms also require that an algorithm
halts (or stops or terminates) after some finite number of steps. We will not require this.
Therefore, we can say that the principle behind a computer program is an algorithm even
when the computer program goes into an infinite loop on some inputs and does not output
anything. However, when we say that A is an algorithm to solve a particular problem, we
will mean that A halts on all inputs, or at least on all input data that make sense for the
problem. A more detailed theoretical study of algorithms requires a more formal definition,
such as the notion of a Turing machine, which is beyond the scope of our needs here.
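To make the halting distinction concrete, consider a minimal sketch in Python (our illustration, not taken from the text): Euclid's algorithm by repeated subtraction halts on every pair of positive integers, but it runs forever on input data that do not make sense for the problem.

    def gcd(a, b):
        # Euclid's algorithm by repeated subtraction. For positive
        # integers a and b, each pass strictly decreases a + b, so the
        # loop must halt. On an input that makes no sense for the
        # problem, such as b = 0, the loop body changes nothing and
        # the program never terminates.
        while a != b:
            if a > b:
                a = a - b
            else:
                b = b - a
        return a

    print(gcd(12, 8))   # halts and prints 4
    # gcd(5, 0) would loop forever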
Although experimental timing of programs is obviously important, it is quite inadequate
for dealing with questions such as how an algorithm's running time grows with the size of
its input. In this chapter, we introduce a mathematical study, called computational
complexity, that deals with these issues. This study is highly important wherever
complicated computer programs are needed, including areas such as the design of layouts
for computer chips and numerical analysis.
First, we study a general mathematical measure of (approximately) how fast functions
grow. For this, we will introduce some very standard vocabulary: asymptotic domination,
O( ), and Θ( ). The motivation for looking at how fast a function grows is as follows:
Define the amount of time taken by an algorithm to be a function of the input data. Say
that (i) A1 and A2 are two algorithms to solve the same problem, (ii) they are encoded
as programs P1 and P2, and (iii) for input data D, T1(D) and T2(D) are the amounts of
time taken by P1 and P2 on data set D. Now, let the number of elements in D increase: If
T1(D) grows much faster than T2(D) as D increases in size, then, except for small inputs
D, algorithm A2 is more efficient than algorithm A1. (The word small here is relative, as
we will see.)
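This crossover can be seen experimentally. The following sketch (Python; the two algorithms and the input sizes are our illustrative choices, not the text's) times a linear scan, standing in for A1, against binary search, standing in for A2, on sorted data: for tiny inputs the two are close, but as D grows, T1(D) quickly dwarfs T2(D).

    import timeit

    def linear_search(data, target):
        # A1: examine every element in turn; the work grows in
        # proportion to len(data).
        for x in data:
            if x == target:
                return True
        return False

    def binary_search(data, target):
        # A2: halve the sorted search range at each step; the work
        # grows roughly like log2(len(data)).
        lo, hi = 0, len(data) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if data[mid] == target:
                return True
            if data[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return False

    for n in (10, 1_000, 100_000):
        data = list(range(n))
        # Search for a value that is absent, forcing the worst case.
        t1 = timeit.timeit(lambda: linear_search(data, -1), number=100)
        t2 = timeit.timeit(lambda: binary_search(data, -1), number=100)
        print(f"n = {n:>6}: T1 = {t1:.5f}s, T2 = {t2:.5f}s")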
Next, we will use these ideas of growth rates in analyzing algorithms. From looking
at the algorithm alone, one can deduce a great deal of information about the time functions
above. This also means that as you write an algorithm, you can make some educated
choices about better ways to organize it before you ever start coding it. We will also
briefly indicate some variants of computational complexity, to show other measures of
algorithms that are of interest in computer science.
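For example (a small Python sketch of our own), the time function of a doubly nested loop can be read off its structure without running any timer: the outer loop runs n times, on pass i the inner loop runs n - 1 - i times, and so the algorithm performs exactly n(n - 1)/2 comparisons on every input of size n.

    def pairwise_comparisons(n):
        # Count the comparisons a doubly nested loop makes on an input
        # of size n. The count depends only on the loop structure:
        # the sum over i of (n - 1 - i), which equals n(n - 1)/2.
        count = 0
        for i in range(n):
            for j in range(i + 1, n):
                count += 1
        return count

    for n in (5, 10, 100):
        assert pairwise_comparisons(n) == n * (n - 1) // 2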

