252 Nonlinear Programming I: One-Dimensional Minimization Methods
Figure 5.3 Iterative process of optimization.
The iterative procedure indicated by Eq. (5.1) is valid for unconstrained as well as
constrained optimization problems. The procedure is represented graphically for a hypo-
thetical two-variable problem in Fig. 5.3. Equation (5.1) indicates that the efficiency
of an optimization method depends on the efficiency with which the quantities λ∗i and
Si are determined. The methods of finding the step length λ∗i are considered in this
chapter and the methods of finding Si are considered in Chapters 6 and 7.
If f(X) is the objective function to be minimized, the problem of determining λ∗i
reduces to finding the value λi = λ∗i that minimizes f(Xi+1) = f(Xi + λiSi) = f(λi)
for fixed values of Xi and Si. Since f becomes a function of the one variable λi only,
the methods of finding λ∗i in Eq. (5.1) are called one-dimensional minimization methods.
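To illustrate the reduction of the multivariable problem to a one-variable problem, the following sketch restricts a hypothetical two-variable objective f (not one from the text) to the line Xi + λSi for an assumed point Xi and direction Si, and then minimizes the resulting function of λ alone with a simple golden section search (one of the elimination methods classified in Table 5.1):

```python
import math

def f(x):
    # Hypothetical two-variable objective (an assumption for this sketch):
    # a quadratic with its minimum at (1, 2).
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def line_function(f, X, S):
    """Restrict f to the line X + lam*S, giving a function of lam alone."""
    return lambda lam: f([xj + lam * sj for xj, sj in zip(X, S)])

def golden_section(phi, a, b, tol=1e-6):
    """Minimize a unimodal function phi of one variable on [a, b]."""
    gr = (math.sqrt(5.0) - 1.0) / 2.0  # golden ratio factor, about 0.618
    c = b - gr * (b - a)
    d = a + gr * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return 0.5 * (a + b)

X_i = [0.0, 0.0]   # assumed current point
S_i = [1.0, 2.0]   # assumed search direction
phi = line_function(f, X_i, S_i)      # f as a function of lambda only
lam_star = golden_section(phi, 0.0, 2.0)
# For this quadratic, phi(lam) = 5(lam - 1)^2, so lam* = 1 along S_i.
```

Once λ∗i is found this way, the new point Xi+1 = Xi + λ∗iSi is fixed, and the process repeats with a new direction supplied by the methods of Chapters 6 and 7.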
Several methods are available for solving a one-dimensional minimization problem.
These can be classified as shown in Table 5.1.
We saw in Chapter 2 that the differential calculus method of optimization is an
analytical approach and is applicable to continuous, twice-differentiable functions. In
this method, calculation of the numerical value of the objective function is virtually the
last step of the process. The optimal value of the objective function is calculated after
determining the optimal values of the decision variables. In the numerical methods
of optimization, an opposite procedure is followed in that the values of the objective
function are first found at various combinations of the decision variables and conclu-
sions are then drawn regarding the optimal solution. The elimination methods can be
used for the minimization of even discontinuous functions. The quadratic and cubic