Computer Aided Engineering Design

OPTIMIZATION

Other bracketing methods include the parabolic interpolation method wherein, given three function
points, a parabola is interpolated through them and a candidate minimum is taken at the point on the
parabola where the slope is zero. Brent’s algorithm, an efficient method that combines parabolic
interpolation with golden section search, is in wide use. Bracketing methods are inherently convergent
in that the location of the root is guaranteed to lie within the specified interval. However, they are
capable of yielding only one root at a time even when more than one root lies within the initial
interval. The remaining issue is how to determine the initial interval. Ad hoc methods that may be
employed are: (a) graphical, wherein f(x) may be plotted and the bracket [xl, xu] chosen judiciously
about each root by visual inspection, and (b) incremental, wherein, commencing from a value xl^0, the
next value xu^0 = xl^1 is sought incrementally at which the function changes sign. Following this,
xu^1 = xl^2 is recorded upon another sign change, and so on. The graphical approach is non-automated,
though it intuitively provides better guesses for the brackets. The incremental method, on the other
hand, is non-intuitive and depends on the initial value xl^0 and the increment size used.
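The incremental approach can be sketched as follows; the function, interval and step size below are illustrative choices, not from the text:

```python
def incremental_brackets(f, a, b, dx):
    """Scan [a, b] in steps of dx and record every subinterval across
    which f changes sign, i.e. every subinterval bracketing a root."""
    brackets = []
    x = a
    fx = f(x)
    while x + dx <= b:
        fn = f(x + dx)
        if fx * fn < 0:            # sign change => a root lies in [x, x + dx]
            brackets.append((x, x + dx))
        x += dx
        fx = fn
    return brackets

import math
# cos(x) has roots near 1.571, 4.712 and 7.854 on [0, 10]
print(incremental_brackets(math.cos, 0.0, 10.0, 0.5))
# → [(1.5, 2.0), (4.5, 5.0), (7.5, 8.0)]
```

As the text notes, the outcome depends on the step size: a step larger than the spacing between two roots can skip a pair of sign changes entirely, and a root at which f touches zero without changing sign is never bracketed.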


12.2.2 Open Methods

In contrast to the bracketing methods that employ the bounds to capture a root, open methods employ
only the initial guess for the same, or two starting values that do not necessarily bracket the root.
Some open methods are discussed as follows:


(a) Single fixed-point iteration or method of successive substitution
We may rearrange the function f(x) = 0 as


x = h(x)

which can be converted to an iterative form as


xi+1 = h(xi)

Thus, as the name suggests, starting with x0, x1 = h(x0) may be computed. The next guess, x2, may be
evaluated as h(x1), and so on; the procedure continues up to n steps until xn and xn–1 are desirably
close. As trivial as the method seems, it may not converge in every case. Using the Taylor series
expansion up to two terms, we have


h(xi+1) = h(xi) + h′(xi) (xi+1 – xi)

or h(xi+1) – h(xi) = h′(xi) (xi+1 – xi)


Since xi+2 = h(xi+1) and xi+1 = h(xi), it follows that

⇒ | xi+2 – xi+1 | = | h′(xi) | | xi+1 – xi | (12.4)


Thus, for convergence, it is required that | xi+2 – xi+1 | < | xi+1 – xi |, which is possible only if | h′(xi) |
< 1 for all i. Also, Eq. (12.4) suggests that the error in each iteration is proportional to that in the
previous iteration, one reason why the method of successive substitution is linearly convergent. The
condition for convergence, that is, | h′(x) | < 1, may also be verified graphically. In Figure 12.6(a),
convergence is guaranteed from both left and right of the root as | h′(x) | < 1 holds in the neighborhood
of the root, while in Figure 12.6(b), this is not so.
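A minimal sketch of successive substitution, with an illustrative rearrangement chosen so that | h′(x) | < 1 near the root (the tolerance and iteration cap are assumed values, not from the text):

```python
import math

def fixed_point(h, x0, tol=1e-10, max_iter=200):
    """Iterate x_{i+1} = h(x_i) until consecutive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = h(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence: |h'(x)| may exceed 1 near the root")

# f(x) = x^2 - 2x - 3 = 0 rearranged as x = h(x) = sqrt(2x + 3);
# h'(x) = 1/sqrt(2x + 3), so |h'| = 1/3 < 1 at the root x = 3
root = fixed_point(lambda x: math.sqrt(2*x + 3), x0=4.0)
print(root)   # ≈ 3.0
```

The same equation rearranged the other way, x = (x² – 3)/2, has | h′(3) | = 3 > 1 and the iteration diverges from the same starting point, illustrating that the choice of rearrangement h, not the equation itself, decides convergence.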
It may be noted that if α = h(α), then

α = (1 – k)α + kα = (1 – k)α + kh(α)


for arbitrary k. Visualizing the above in the iterative form, we have


xi+1 = (1 – k)xi + kh(xi) (12.5)
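The relaxed iteration of Eq. (12.5) can be sketched as below. Writing g(x) = (1 – k)x + kh(x), we have g′(x) = (1 – k) + kh′(x), so a suitable k can bring | g′ | below 1 even where plain substitution fails; the function and value k = 0.5 are illustrative choices, not from the text:

```python
def relaxed_fixed_point(h, x0, k, tol=1e-10, max_iter=500):
    """Iterate x_{i+1} = (1 - k) x_i + k h(x_i)  (Eq. 12.5)."""
    x = x0
    for _ in range(max_iter):
        x_new = (1 - k)*x + k*h(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence for this choice of k")

# x = h(x) = 3/x has the fixed point sqrt(3), but h'(x) = -3/x^2 gives
# |h'| = 1 there: plain substitution oscillates between x0 and 3/x0.
# With k = 0.5, g'(sqrt(3)) = 0.5 - 0.5 = 0 and the iteration converges.
print(relaxed_fixed_point(lambda x: 3.0/x, x0=1.0, k=0.5))   # ≈ sqrt(3)
```

For this particular h and k = 0.5, the relaxed iterate is x/2 + 3/(2x), which is in fact the Newton iteration for x² = 3, so convergence is rapid.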