


Figure A.2. Finding the minimum of a quadratic function $f$ depending on two
variables. The ellipses shown are the contour lines of the function $f$. The solid
lines represent the steps followed in the steepest descent method. In the conjugate
gradient method, the minimum is found in two steps, the second step being the
dotted line.

Next we turn to the minimisation of a function depending on more than one
variable. We assume that the gradient of the function is known. To get a feeling for
this problem and the solution methods, imagine that you try to reach the nearest
low point in a mountainous landscape. We start off at some point $\mathbf{x}_0$ and we
restrict ourselves to piecewise linear paths. An obvious approach is the method of
steepest descent. Starting at some point $\mathbf{x}_0$, we walk on a straight line through $\mathbf{x}_0$
along the negative gradient $\mathbf{g}_0$ at $\mathbf{x}_0$:


$\mathbf{g}_0 = -\nabla f(\mathbf{x}_0). \qquad (A.10)$

The points $\mathbf{x}$ on this line can be parametrised as

$\mathbf{x} = \mathbf{x}_0 + \lambda \mathbf{g}_0, \qquad \lambda > 0. \qquad (A.11)$

We minimise $f$ as a function of $\lambda$ using a one-dimensional minimisation method as
described at the beginning of this section. At the minimum, which we shall call $\mathbf{x}_1$,
we have

$\frac{\mathrm{d}}{\mathrm{d}\lambda} f(\mathbf{x}_0 + \lambda \mathbf{g}_0) = \mathbf{g}_0 \cdot \nabla f(\mathbf{x}_1) = 0. \qquad (A.12)$
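As an illustrative sketch (not the book's own code), the line minimisation of Eq. (A.12) can be carried out numerically. Here SciPy's scalar minimiser stands in for the one-dimensional methods described earlier; the quadratic test function and starting point are assumptions chosen for the example:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def line_minimise(f, x0, g0):
    """Minimise f along the ray x = x0 + lam*g0 (Eq. A.11) and
    return the minimising point x1, where g0 . grad f(x1) = 0."""
    res = minimize_scalar(lambda lam: f(x0 + lam * g0))
    return x0 + res.x * g0

# Hypothetical quadratic with elliptic contours, for illustration only.
f = lambda x: 0.5 * (3.0 * x[0]**2 + x[1]**2)
x0 = np.array([2.0, 1.0])
g0 = -np.array([3.0 * x0[0], x0[1]])   # negative gradient, Eq. (A.10)
x1 = line_minimise(f, x0, g0)
```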

From this point, we proceed along the negative gradient $\mathbf{g}_1$ at the point $\mathbf{x}_1$: $\mathbf{g}_1 =
-\nabla f(\mathbf{x}_1)$. Note that the condition (A.12) implies that $\mathbf{g}_0$ and $\mathbf{g}_1$ are perpendicular:
at each step we change direction by an angle of 90°.
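Putting the pieces together, a minimal Python sketch of the full steepest descent iteration might look as follows (again using SciPy for the line search; the quadratic function, its coefficient matrix and the starting point are assumptions, chosen to mimic the elliptic contours of Figure A.2). The printed dot products confirm that successive search directions are perpendicular:

```python
import numpy as np
from scipy.optimize import minimize_scalar

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])             # assumed coefficient matrix
f = lambda x: 0.5 * x @ A @ x          # quadratic with elliptic contours
grad = lambda x: A @ x

x = np.array([2.0, 1.0])               # assumed starting point x0
for step in range(20):
    g = -grad(x)                       # search direction, Eq. (A.10)
    if np.linalg.norm(g) < 1e-10:      # converged
        break
    lam = minimize_scalar(lambda t: f(x + t * g)).x   # line search, Eq. (A.12)
    x = x + lam * g
    # successive directions are perpendicular: g . g_new is ~0
    print(step, x, g @ -grad(x))
```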
Although the steepest descent method seems rather efficient, it is not the best
possible minimisation method. To see what can go wrong, consider the situation
shown in Figure A.2, for a function depending on two variables, $x$ and $y$. We start
off from the point $\mathbf{x}_0$ along the negative gradient, which in Figure A.2 lies along
