6.11 Newton’s Method 347
Therefore,
$$[J_1]^{-1} = \frac{1}{4}\begin{bmatrix} 2 & -2 \\ -2 & 4 \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & -\frac{1}{2} \\[2pt] -\frac{1}{2} & 1 \end{bmatrix}$$
As
$$\mathbf{g}_1 = \begin{Bmatrix} \partial f/\partial x_1 \\ \partial f/\partial x_2 \end{Bmatrix}_{\mathbf{X}_1} = \begin{Bmatrix} 1 + 4x_1 + 2x_2 \\ -1 + 2x_1 + 2x_2 \end{Bmatrix}_{(0,0)} = \begin{Bmatrix} 1 \\ -1 \end{Bmatrix}$$
Equation (6.86) gives
$$\mathbf{X}_2 = \mathbf{X}_1 - [J_1]^{-1}\mathbf{g}_1 = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix} - \begin{bmatrix} \frac{1}{2} & -\frac{1}{2} \\[2pt] -\frac{1}{2} & 1 \end{bmatrix}\begin{Bmatrix} 1 \\ -1 \end{Bmatrix} = \begin{Bmatrix} -1 \\ \frac{3}{2} \end{Bmatrix}$$
To see whether or not $\mathbf{X}_2$ is the optimum point, we evaluate
$$\mathbf{g}_2 = \begin{Bmatrix} \partial f/\partial x_1 \\ \partial f/\partial x_2 \end{Bmatrix}_{\mathbf{X}_2} = \begin{Bmatrix} 1 + 4x_1 + 2x_2 \\ -1 + 2x_1 + 2x_2 \end{Bmatrix}_{(-1,\,3/2)} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}$$
As $\mathbf{g}_2 = \mathbf{0}$, $\mathbf{X}_2$ is the optimum point. Thus the method has converged in one iteration
for this quadratic function.
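The one-step convergence above can be checked numerically. The sketch below assumes the objective implied by the gradient components given in the example, $f = x_1 - x_2 + 2x_1^2 + 2x_1x_2 + x_2^2$ (up to an additive constant); its constant Hessian is the matrix $[J_1]$ used above.

```python
import numpy as np

# Gradient components from the example:
# df/dx1 = 1 + 4*x1 + 2*x2,  df/dx2 = -1 + 2*x1 + 2*x2
def grad(x):
    x1, x2 = x
    return np.array([1 + 4 * x1 + 2 * x2, -1 + 2 * x1 + 2 * x2])

# Constant Hessian of the quadratic (the matrix [J1] inverted in the text)
J = np.array([[4.0, 2.0], [2.0, 2.0]])

X1 = np.array([0.0, 0.0])
# Newton step of Eq. (6.86); solving J s = g is equivalent to applying [J1]^-1
X2 = X1 - np.linalg.solve(J, grad(X1))
print(X2)        # [-1.   1.5]
print(grad(X2))  # [0. 0.]  ->  X2 is the optimum
```

Because the function is quadratic, the gradient vanishes exactly at $\mathbf{X}_2 = (-1,\, 3/2)$, confirming convergence in a single iteration.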
Iff (X)is a nonquadratic function, Newton’s method may sometimes diverge, and
it may converge to saddle points and relative maxima. This problem can be avoided
by modifying Eq. (6.86) as
$$\mathbf{X}_{i+1} = \mathbf{X}_i + \lambda_i^* \mathbf{S}_i = \mathbf{X}_i - \lambda_i^* [J_i]^{-1} \nabla f_i \qquad (6.87)$$
where $\lambda_i^*$ is the minimizing step length in the direction $\mathbf{S}_i = -[J_i]^{-1}\nabla f_i$. The modification indicated by Eq. (6.87) has a number of advantages. First, it will find the
minimum in a smaller number of steps than the original method. Second, it finds
the minimum point in all cases, whereas the original method may not converge in some
cases. Third, it usually avoids convergence to a saddle point or a maximum. With all
these advantages, this method appears to be the most powerful minimization method.
Despite these advantages, the method is not very useful in practice, due to the following
features of the method:
1. It requires the storing of the $n \times n$ matrix $[J_i]$.
2. It becomes very difficult and sometimes impossible to compute the elements of the matrix $[J_i]$.
3. It requires the inversion of the matrix $[J_i]$ at each step.
4. It requires the evaluation of the quantity $[J_i]^{-1}\nabla f_i$ at each step.
These features make the method impractical for problems involving a complicated
objective function with a large number of variables.
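The modified iteration of Eq. (6.87) can be sketched for the quadratic example above. For a quadratic $f(\mathbf{X}) = \mathbf{c}^{\mathrm{T}}\mathbf{X} + \frac{1}{2}\mathbf{X}^{\mathrm{T}} A \mathbf{X}$, the minimizing step length along a direction $\mathbf{S}$ has the closed form $\lambda^* = -\mathbf{g}^{\mathrm{T}}\mathbf{S} / (\mathbf{S}^{\mathrm{T}} A \mathbf{S})$; the exact-line-search formula and the choice of $\mathbf{c}$, $A$ below are assumptions matching the gradient components of the worked example, not part of the original text.

```python
import numpy as np

# Quadratic model: grad f = c + A @ X, with A the Hessian [J]
A = np.array([[4.0, 2.0], [2.0, 2.0]])  # Hessian from the example
c = np.array([1.0, -1.0])               # linear term of the gradient

X = np.array([0.0, 0.0])
g = c + A @ X
S = -np.linalg.solve(A, g)              # Newton direction S_i = -[J_i]^-1 grad f_i
lam = -(g @ S) / (S @ A @ S)            # minimizing step length lambda* (exact for a quadratic)
X_next = X + lam * S                    # Eq. (6.87)
print(lam, X_next)                      # 1.0 [-1.   1.5]
```

For a quadratic, $\lambda^* = 1$, so the modified step reproduces the pure Newton step; the step-length search matters only for nonquadratic functions, where it guards against divergence and convergence to saddle points.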