A comparison of several variable metric algorithms was made by Shanno and Phua [B.4]. Sargent and Sebastian presented numerical experience with unconstrained minimization algorithms [B.5]. On the basis of these studies, the following general conclusions can be drawn.
If the first and second derivatives of the objective function (f) can be evaluated easily (either in closed form or by a finite-difference scheme), and if the number of design variables is not large (n ≤ 50), Newton's method can be used effectively. For n greater than about 50, the storage and inversion of the Hessian matrix at each stage become quite tedious and the variable metric methods may prove more useful. As the problem size increases (beyond n = 100 or so), the conjugate gradient method becomes more powerful.
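To make this size trade-off concrete, a minimal pure-Newton iteration in Python might look as follows; the test function, names, and tolerances are illustrative choices of ours, not taken from the studies cited above.

    import numpy as np

    def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
        # Newton iteration: x_{k+1} = x_k - H(x_k)^{-1} g(x_k)
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            # Solving H s = -g costs O(n^3) per iteration; this is the
            # step that becomes burdensome once n grows beyond a few dozen.
            s = np.linalg.solve(hess(x), -g)
            x = x + s
        return x

    # Illustrative function: f(x) = (x1 - 1)^2 + 10 (x2 - x1^2)^2
    grad = lambda x: np.array([2*(x[0] - 1) - 40*x[0]*(x[1] - x[0]**2),
                               20*(x[1] - x[0]**2)])
    hess = lambda x: np.array([[2 - 40*(x[1] - 3*x[0]**2), -40*x[0]],
                               [-40*x[0], 20.0]])
    print(newton_minimize(grad, hess, [0.0, 0.0]))   # converges to (1, 1)

A production Newton code would also safeguard the step (e.g., with a line search) when the Hessian is not positive definite; the sketch above omits this for brevity.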
In many practical problems, the first derivatives of f can be computed more accurately than the second derivatives. In such cases, the BFGS and DFP methods become an obvious choice for the minimization. Of these two, the BFGS method is more stable and efficient. If the evaluation of the derivatives of f is extremely difficult, or if the function does not possess continuous derivatives, Powell's method can be used to solve the problem efficiently.
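As one hedged illustration of this choice, SciPy's minimize routine exposes both families: BFGS can exploit an analytic gradient, while Powell's method works from function values alone. The test function below is our own.

    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        return (x[0] - 1)**2 + 10*(x[1] - x[0]**2)**2

    def grad_f(x):
        return np.array([2*(x[0] - 1) - 40*x[0]*(x[1] - x[0]**2),
                         20*(x[1] - x[0]**2)])

    x0 = np.zeros(2)
    # BFGS uses the supplied first derivatives; no Hessian is required.
    res_bfgs = minimize(f, x0, jac=grad_f, method='BFGS')
    # Powell's method needs neither first nor second derivatives.
    res_powell = minimize(f, x0, method='Powell')
    print(res_bfgs.x, res_bfgs.nfev)      # both approach (1, 1); BFGS
    print(res_powell.x, res_powell.nfev)  # typically needs fewer evaluations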
With regard to the one-dimensional minimization required in all the unconstrained methods, the Newton and cubic interpolation methods are most efficient when the derivatives of f are available. Otherwise, the Fibonacci or the golden section method has to be used.
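For completeness, a self-contained golden section search might be sketched as follows; the interval, tolerance, and test function are illustrative.

    def golden_section(f, a, b, tol=1e-6):
        # Golden section search for a unimodal f on [a, b]; derivative-free.
        invphi = (5**0.5 - 1) / 2            # 1/phi, about 0.618
        c, d = b - invphi*(b - a), a + invphi*(b - a)
        fc, fd = f(c), f(d)
        while (b - a) > tol:
            if fc < fd:                      # minimum lies in [a, d]
                b, d, fd = d, c, fc
                c = b - invphi*(b - a)
                fc = f(c)
            else:                            # minimum lies in [c, b]
                a, c, fc = c, d, fd
                d = a + invphi*(b - a)
                fd = f(d)
        return (a + b) / 2

    print(golden_section(lambda t: (t - 2.0)**2, 0.0, 5.0))   # about 2.0

Each iteration shrinks the interval by the constant factor 0.618 and reuses one of the two interior function values, so only one new function evaluation is needed per step.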
B.3 Comparison of Constrained Methods
The comparative evaluation of nonlinear programming techniques was conducted by
several investigators. Colville [B.6] compared the efficiencies of 30 codes using eight test problems involving 3 to 16 design variables and 0 to 14 constraints. However,
the codes were tested at different sites on different computers and hence the study was
not considered reliable. Eason and Fenton [B.7] conducted a comparative study of 20
codes using 13 problems that also included the problems used by Colville. However,
their study was confined primarily to penalty function-type methods. Sandgren and
Ragsdell [B.8] studied the relative efficiencies of the leading nonlinear programming
methods of the day more systematically. They tested 24 codes on 35 problems, including some of those used by Colville and by Eason and Fenton.
The number of design variables varied from 2 to 48 and the number of constraints
ranged from 0 to 19; some problems involved equality constraints, too. They found
the GRG method to be the most robust and efficient, followed by the exterior and interior penalty function methods.
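Although the ranking itself is a historical finding, the exterior penalty idea behind it is easy to sketch: solve a sequence of unconstrained subproblems in which constraint violations are charged an increasing penalty. The toy problem and penalty schedule below are our own, with each subproblem handed to SciPy's BFGS routine.

    import numpy as np
    from scipy.optimize import minimize

    # Toy problem: minimize (x1 - 2)^2 + (x2 - 1)^2 subject to x1 + x2 <= 2.
    f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
    g = lambda x: x[0] + x[1] - 2            # feasible when g(x) <= 0

    x = np.zeros(2)
    for r in [1.0, 10.0, 100.0, 1000.0]:     # increasing penalty parameter
        # Exterior penalty: charge r * max(0, g)^2, so only violations cost.
        phi = lambda x, r=r: f(x) + r * max(0.0, g(x))**2
        x = minimize(phi, x, method='BFGS').x
    print(x)   # approaches the constrained optimum (1.5, 0.5) as r grows

The iterates approach the feasible region from outside (hence "exterior"), and each solution warm-starts the next, stiffer subproblem.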
Schittkowski published the results of his study of nonlinear programming codes in 1980 [B.9]. He experimented with 20 codes on 180 randomly generated test problems using multiple starting points. Based on his study, sequential quadratic programming was found to be the most efficient, followed by the GRG, method of multipliers, and penalty function methods, in that order. Similar comparative studies of geometric programming codes were also conducted [B.10–B.12]. Although the studies above were quite extensive, their conclusions may not be of much use in practice, since they were limited to relatively few methods and, further, to specially constructed test problems.
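By way of illustration (not part of the cited studies), SciPy's SLSQP routine is one widely available sequential quadratic programming implementation. Applied to the same toy problem as in the penalty sketch above, it treats the constraint directly instead of through a penalty sequence.

    import numpy as np
    from scipy.optimize import minimize

    # Toy problem again: minimize (x1 - 2)^2 + (x2 - 1)^2 s.t. x1 + x2 <= 2.
    f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
    cons = [{'type': 'ineq', 'fun': lambda x: 2 - x[0] - x[1]}]  # fun(x) >= 0

    res = minimize(f, np.zeros(2), method='SLSQP', constraints=cons)
    print(res.x)   # about (1.5, 0.5), the constrained optimum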