Robert V. Hogg, Joseph W. McKean, Allen T. Craig

Inferences About Normal Linear Models

quadratic form in $X$. Due to the symmetry of $A$, there are several ways we can write $Q$:
\begin{align}
Q = X'AX &= \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}X_iX_j = \sum_{i=1}^{n} a_{ii}X_i^2 + \mathop{\sum\sum}_{i \neq j} a_{ij}X_iX_j \tag{9.8.4} \\
&= \sum_{i=1}^{n} a_{ii}X_i^2 + 2\mathop{\sum\sum}_{i < j} a_{ij}X_iX_j. \tag{9.8.5}
\end{align}
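The decomposition in (9.8.5) is easy to check numerically; here is a minimal sketch in which the symmetric matrix $A$ and the vector $X$ are arbitrary choices for illustration:

```python
import numpy as np

# Check that X'AX = sum_i a_ii X_i^2 + 2 * sum_{i<j} a_ij X_i X_j
# for a symmetric A; the particular A and X below are arbitrary.
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                  # symmetrize; symmetry gives the factor of 2
X = rng.standard_normal(n)

quad = X @ A @ X                   # the quadratic form X'AX

diag = np.sum(np.diag(A) * X**2)   # sum_i a_ii X_i^2
upper = sum(A[i, j] * X[i] * X[j]  # sum over pairs with i < j only
            for i in range(n) for j in range(i + 1, n))

print(np.isclose(quad, diag + 2 * upper))  # True
```

Without the symmetrization step, the off-diagonal terms $a_{ij}$ and $a_{ji}$ would differ and the factor of 2 in (9.8.5) would not apply.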

These are very useful random variables in analysis of variance models. As the
following theorem shows, the mean of a quadratic form is easily obtained.

Theorem 9.8.1. Suppose the $n$-dimensional random vector $X$ has mean $\mu$ and variance–covariance matrix $\Sigma$. Let $Q = X'AX$, where $A$ is a real $n \times n$ symmetric matrix. Then
$$
E(Q) = \operatorname{tr} A\Sigma + \mu'A\mu. \tag{9.8.6}
$$


Proof: Using the trace operator and property (9.8.3), we have
\begin{align*}
E(Q) = E(\operatorname{tr} X'AX) &= E(\operatorname{tr} AXX') \\
&= \operatorname{tr} A E(XX') \\
&= \operatorname{tr} A(\Sigma + \mu\mu') \\
&= \operatorname{tr} A\Sigma + \mu'A\mu,
\end{align*}
where the third line follows from Theorem 2.6.3.
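Because the theorem holds for any distribution with finite second moments, it can be verified exactly on a small discrete random vector by enumerating its support. The sketch below uses iid components uniform on $\{-1, 0, 2\}$ and a symmetric matrix $A$; both are arbitrary choices for illustration:

```python
import itertools
import numpy as np

# Exact check of E(X'AX) = tr(A Sigma) + mu'A mu for a small
# discrete random vector, by enumerating every outcome.
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # symmetrize, as the theorem assumes

# X has iid components, each uniform on {-1, 0, 2} (arbitrary support).
support = np.array([-1.0, 0.0, 2.0])
p = 1.0 / 3.0                          # probability of each support point

mu = np.full(n, support.mean())        # common mean of the components
sigma2 = np.mean(support**2) - support.mean()**2
Sigma = sigma2 * np.eye(n)             # independence => diagonal Sigma

# Average Q = x'Ax over all 3^n equally likely outcomes.
EQ = sum((p**n) * x @ A @ x
         for x in map(np.array, itertools.product(support, repeat=n)))

rhs = np.trace(A @ Sigma) + mu @ A @ mu
print(abs(EQ - rhs) < 1e-10)  # True: the two sides agree
```

Since the expectation is computed by exact enumeration rather than simulation, the agreement is exact up to floating-point rounding.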

Example 9.8.1 (Sample Variance). Let $X' = (X_1, \ldots, X_n)$ be an $n$-dimensional vector of random variables. Let $\mathbf{1}' = (1, \ldots, 1)$ be the $n$-dimensional vector whose components are 1. Let $I$ be the $n \times n$ identity matrix. Consider the quadratic form $Q = X'(I - n^{-1}J)X$, where $J = \mathbf{1}\mathbf{1}'$; i.e., $J$ is an $n \times n$ matrix with all entries equal to 1. Note that the off-diagonal entries of $I - n^{-1}J$ are $-n^{-1}$ while the diagonal entries are $1 - n^{-1}$; hence, by (9.8.4), $Q$ simplifies to


\begin{align}
Q &= \sum_{i=1}^{n} X_i^2\left(1 - \frac{1}{n}\right) + \mathop{\sum\sum}_{i \neq j}\left(-\frac{1}{n}\right)X_iX_j \nonumber \\
&= \sum_{i=1}^{n} X_i^2\left(1 - \frac{1}{n}\right) - \frac{1}{n}\sum_{i=1}^{n} X_i \sum_{j=1}^{n} X_j + \frac{1}{n}\sum_{i=1}^{n} X_i^2 \nonumber \\
&= \sum_{i=1}^{n} X_i^2 - n\overline{X}^2 = (n-1)S^2, \tag{9.8.7}
\end{align}

where $\overline{X}$ and $S^2$ denote the sample mean and variance of $X_1, \ldots, X_n$.
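The identity (9.8.7) can be confirmed directly on arbitrary data; a minimal sketch:

```python
import numpy as np

# Check Q = X'(I - J/n)X = (n-1)S^2 on arbitrary data.
rng = np.random.default_rng(2)
n = 10
X = rng.standard_normal(n)

J = np.ones((n, n))                 # J = 11', the all-ones matrix
Q = X @ (np.eye(n) - J / n) @ X     # the quadratic form

S2 = np.var(X, ddof=1)              # sample variance with divisor n - 1
print(np.isclose(Q, (n - 1) * S2))  # True
```

Note that `ddof=1` is needed so that NumPy divides by $n-1$ rather than $n$, matching the definition of $S^2$.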
Suppose we further assume that $X_1, \ldots, X_n$ are iid random variables with common mean $\mu$ and variance $\sigma^2$. Using Theorem 9.8.1, we can obtain yet another