The univariate method is very simple and can be implemented easily. However, it does not converge rapidly to the optimum solution, as it tends to oscillate with steadily decreasing progress toward the optimum. Hence it is better to stop the computations at some point near the optimum rather than trying to find the precise optimum point. In theory, the univariate method can be applied to find the minimum of any function that possesses continuous derivatives. However, if the function has a steep valley, the method may not converge at all. For example, consider the contours of a function of two variables with a valley as shown in Fig. 6.5. If the univariate search starts at point $P$, the function value cannot be decreased in either the direction $\pm S_1$ or the direction $\pm S_2$. Thus the search comes to a halt, and one may be misled into taking the point $P$, which is certainly not the optimum point, as the optimum. This situation arises whenever the probe length $\varepsilon$ needed for detecting the proper direction ($\pm S_1$ or $\pm S_2$) happens to be smaller than the precision (the number of significant figures) retained in the computations.
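To make the procedure concrete, the following is a minimal Python sketch of the univariate (cyclic coordinate) search under the assumptions above; the function name univariate_search, the search bound of 10.0 on the step length, and the use of SciPy's minimize_scalar for the one-dimensional step are illustrative choices, not part of the method's statement in the text.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def univariate_search(f, x0, eps=0.01, max_cycles=50):
    """Cyclic univariate search (illustrative sketch).

    Probes each coordinate direction with step eps (step 3),
    then minimizes f along the chosen direction (step 4).
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_cycles):
        moved = False
        for i in range(n):
            s = np.zeros(n)
            s[i] = 1.0
            fx = f(x)
            # Step 3: use the probe length eps to pick +s or -s.
            if f(x + eps * s) < fx:
                d = s
            elif f(x - eps * s) < fx:
                d = -s
            else:
                continue  # neither probe decreases f along this axis
            # Step 4: 1-D minimization for the step length lambda*.
            # A bounded numerical search stands in for the
            # differential-calculus solution used in the text;
            # the upper bound 10.0 is an ad hoc assumption.
            lam = minimize_scalar(lambda t: f(x + t * d),
                                  bounds=(0.0, 10.0),
                                  method='bounded').x
            if f(x + lam * d) < fx:
                x = x + lam * d
                moved = True
        if not moved:
            break  # no coordinate direction improves f: stop
    return x

# The function of Example 6.4 below: minimum at (-1.0, 1.5), f* = -1.25
f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
print(univariate_search(f, [0.0, 0.0]))
```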
Example 6.4 Minimize $f(x_1, x_2) = x_1 - x_2 + 2x_1^2 + 2x_1x_2 + x_2^2$ with the starting point $(0, 0)$.
SOLUTION We will take the probe length ($\varepsilon$) as 0.01 to find the correct direction for decreasing the function value in step 3. Further, we will use the differential calculus method to find the optimum step length $\lambda_i^*$ along the direction $\pm S_i$ in step 4.
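Since the objective here is quadratic, this calculus step admits a closed form. As a brief aside (the Hessian $[A]$ below is computed from this example's $f$ and is added here for clarity), setting the derivative of $f(X_i + \lambda S_i)$ with respect to $\lambda$ to zero gives
$$\frac{d}{d\lambda}\, f(X_i + \lambda S_i) = S_i^T \nabla f(X_i) + \lambda\, S_i^T [A]\, S_i = 0 \;\Rightarrow\; \lambda_i^* = -\,\frac{S_i^T \nabla f(X_i)}{S_i^T [A]\, S_i}, \qquad [A] = \begin{bmatrix} 4 & 2 \\ 2 & 2 \end{bmatrix}.$$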
Iteration $i = 1$
Step 2: Choose the search direction $S_1$ as $S_1 = \begin{Bmatrix} 1 \\ 0 \end{Bmatrix}$.
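As a worked illustration of how steps 3 and 4 then proceed (this arithmetic follows directly from $f$ and $\varepsilon = 0.01$): with $f(0, 0) = 0$,
$$f(\varepsilon, 0) = 0.01 + 2(0.01)^2 = 0.0102 > f(0, 0), \qquad f(-\varepsilon, 0) = -0.01 + 2(0.01)^2 = -0.0098 < f(0, 0),$$
so $-S_1$ is the proper search direction. Minimizing $f(-\lambda, 0) = -\lambda + 2\lambda^2$ by setting $df/d\lambda = -1 + 4\lambda = 0$ gives $\lambda_1^* = 0.25$, so the new point is $X_2 = (-0.25, 0)$ with $f(X_2) = -0.125$.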
Figure 6.5 Failure of the univariate method on a steep valley.