Computer Aided Engineering Design




The method commences with an initial guess X0, which may or may not be feasible. The objective
and constraints are linearized as in Eqs. (12.35), and the linear programming problem is stated
as


Minimize: f(Xi) + ∇f(Xi)T(X – Xi)
Subject to: gj(Xi) + ∇gj(Xi)T(X – Xi) ≤ 0, j = 1, ..., m
hk(Xi) + ∇hk(Xi)T(X – Xi) = 0, k = 1, ..., p

which is solved using the Simplex method to obtain the new vector Xi+1. The original constraints are
evaluated at Xi+1, that is, gj(Xi+1), j = 1, ..., m and hk(Xi+1), k = 1, ..., p are determined, and if
gj(Xi+1) ≤ ε for all j and |hk(Xi+1)| ≤ ε for all k, for some prespecified positive tolerance ε, the
constraints are assumed to be satisfied and the procedure is stopped with X* = Xi+1.
If some constraints are violated, i.e. gj(Xi+1) > ε for some j or |hk(Xi+1)| > ε for some k, the most
violated constraint is determined, for instance


gl(Xi+1) = max(gj(Xi+1))

which is linearized about Xi+1 as


gl(Xi+1) + ∇gl(Xi+1)T(X – Xi+1) ≤ 0

and included as the (m + 1)th inequality constraint in the previous linear programming (LP) problem.
The iteration number is set to i + 1 and the new LP problem is solved with m + 1 inequality
constraints and p equality constraints.
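
The iteration above can be sketched in code. The example problem, the move limits on X – Xi (needed to keep each LP bounded and to damp zigzagging of the linearized steps), and the shrinking factor are our own illustrative choices, not from the text:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative problem (ours, not from the text):
#   minimize f(X) = x1^2 + x2^2   subject to   g(X) = 1 - x1*x2 <= 0,
# whose optimum is X* = (1, 1) with f(X*) = 2.
def f(x):      return x[0]**2 + x[1]**2
def grad_f(x): return np.array([2.0*x[0], 2.0*x[1]])
def g(x):      return 1.0 - x[0]*x[1]
def grad_g(x): return np.array([-x[1], -x[0]])

def slp(x0, move=0.5, shrink=0.7, eps=1e-4, max_iter=60):
    """Sequential linear programming with shrinking move limits
    (the move limits are our addition; the text's procedure is the
    linearize-solve-check loop)."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        # LP in d = X - Xi:  minimize grad_f(Xi)^T d
        # subject to g(Xi) + grad_g(Xi)^T d <= 0 and -move <= d <= move
        res = linprog(c=grad_f(x),
                      A_ub=grad_g(x).reshape(1, -1), b_ub=[-g(x)],
                      bounds=[(-move, move)] * x.size)
        if res.status == 0:          # LP solved; take the step
            x = x + res.x
        move *= shrink               # tighten the move limits
        if move < eps and g(x) <= eps:
            break
    return x

x_star = slp([2.0, 2.0])             # iterates approach (1, 1)
```

With a single nonlinear inequality constraint the "most violated constraint" bookkeeping of the text is not needed; for several constraints one would stack one linearized row per constraint in A_ub.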


12.6 Sequential Quadratic Programming (SQP)


Sequential quadratic programming is a general-purpose mathematical programming technique
that involves solving a quadratic programming sub-problem of the type


Minimize: φ = fk + (∇fk + λkT∇hk)TΔXk + (1/2)ΔXkT(∇^2 fk + λkT∇^2 hk)ΔXk

Subject to: ∇hkΔXk + hk = 0 (12.36)
Xl ≤ X ≤ Xu
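
Dropping the bounds Xl ≤ X ≤ Xu for simplicity (and noting that on the linearized constraint set ∇hkΔXk = –hk, so the λkT∇hk part of the linear term only shifts the objective by a constant), the equality-constrained QP can be solved directly from its first-order KKT conditions, which form one linear system. A minimal numpy sketch; the example problem is ours, not from the text:

```python
import numpy as np

def qp_equality_step(grad_f, B, A, h):
    """Solve the equality-constrained QP sub-problem
         minimize   grad_f^T d + 0.5 d^T B d
         subject to A d + h = 0
       via its KKT system (B must be positive definite on the null
       space of A).  Returns the step d and the multipliers lam."""
    n, m = B.shape[0], A.shape[0]
    K = np.block([[B, A.T],
                  [A, np.zeros((m, m))]])   # KKT matrix
    rhs = np.concatenate([-grad_f, -h])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

# One SQP step for: minimize x1^2 + x2^2  subject to  x1 + x2 - 2 = 0
x = np.array([2.0, 0.0])
d, lam = qp_equality_step(grad_f=2*x, B=2*np.eye(2),
                          A=np.array([[1.0, 1.0]]),
                          h=np.array([x.sum() - 2.0]))
x_new = x + d
```

Because both the objective and its quadratic model coincide here, this single step lands on the constrained minimum (1, 1); in general the system is re-solved each iteration with an updated Hessian approximation.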

where fk is the objective value in the kth iteration, ∇fk and ∇hk are the gradients of the function and
active constraints, ∇^2 fk and ∇^2 hk are the second order derivatives of the function and active constraints,
and λk are the Lagrange multipliers for the active constraints. Here, active constraints refer to both
equality constraints and tight inequality constraints. ∇^2 fk + λkT∇^2 hk is the Hessian of the
Lagrangian φ with respect to the design variables X and is usually updated using DFP or BFGS
methods to ensure its positive definiteness and thus proper local convergence behavior. The reader is
referred to texts on optimization for details on such update methods. The algorithm is executed in the
following steps:


(a) An initial feasible variable vector and an active constraint set including the equality constraints are
first chosen. At this stage, the Hessian of the function, ∇^2 fk, is approximated as an identity matrix
and the Lagrange multipliers are initialized to zero.
(b) A feasible search vector ΔXk and Lagrange multipliers λk are computed using the first order
KKT necessary conditions.
(c) A step-length αk along the direction of ΔXk is determined in the interval [0, 1] that minimizes
