Mathematical Methods for Physics and Engineering: A Comprehensive Guide


MATRICES AND VECTOR SPACES


equivalent to saying that if we first multiply a matrix, B say, by A and then multiply by the inverse A^{-1}, we end up with the matrix we started with, i.e.

A^{-1}AB = B.    (8.54)

This justifies our use of the term inverse. It is also clear that the inverse is only defined for square matrices.
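As a quick numerical illustration of (8.54) (a sketch only, not part of the text; NumPy's np.linalg.inv is used as a black box here), one can check that multiplying B first by A and then by A^{-1} returns B:

```python
# Illustration of (8.54): A^{-1}(AB) = B for an invertible A.
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])      # |A| = 1, so A is invertible
B = np.array([[1.0, 0.0],
              [4.0, 2.0]])      # an arbitrary 2x2 matrix

print(np.allclose(np.linalg.inv(A) @ (A @ B), B))   # True
```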


So far we have only defined what we mean by the inverse of a matrix. Actually finding the inverse of a matrix A may be carried out in a number of ways. We will show that one method is to construct first the matrix C containing the cofactors of the elements of A, as discussed in the last subsection. Then the required inverse A^{-1} can be found by forming the transpose of C and dividing by the determinant of A. Thus the elements of the inverse A^{-1} are given by

(A^{-1})_{ik} = (C^T)_{ik} / |A| = C_{ki} / |A|.    (8.55)
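A minimal sketch of this construction (the helper names cofactor_matrix and inverse_via_cofactors are illustrative, not from the text) builds the cofactor matrix C element by element and forms the inverse as C^T/|A|, exactly as in (8.55):

```python
# Sketch: inverse via the transposed cofactor matrix, (A^{-1})_{ik} = C_{ki}/|A|.
import numpy as np

def cofactor_matrix(A):
    """Matrix C whose (i, k) entry is the cofactor of A[i, k]."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for k in range(n):
            # delete row i and column k, then take the signed minor
            minor = np.delete(np.delete(A, i, axis=0), k, axis=1)
            C[i, k] = (-1) ** (i + k) * np.linalg.det(minor)
    return C

def inverse_via_cofactors(A):
    """A^{-1} = C^T / |A|, as in (8.55); fails if A is singular."""
    detA = np.linalg.det(A)
    if np.isclose(detA, 0.0):
        raise ValueError("matrix is singular; inverse not defined")
    return cofactor_matrix(A).T / detA

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
print(inverse_via_cofactors(A))   # [[ 3. -1.]  [-5.  2.]]
print(np.linalg.inv(A))           # agrees with the library routine
```

This element-by-element construction is far more expensive than standard LU-based routines, so it should be read as a pedagogical device rather than a practical algorithm for large matrices.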


That this procedure does indeed result in the inverse may be seen by considering the components of A^{-1}A, i.e.

(A^{-1}A)_{ij} = \sum_k (A^{-1})_{ik} (A)_{kj} = \sum_k (C_{ki}/|A|) A_{kj} = (|A|/|A|) \delta_{ij}.    (8.56)

The last equality in (8.56) relies on the property

\sum_k C_{ki} A_{kj} = |A| \delta_{ij};    (8.57)
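In matrix form (8.57) says C^T A = |A| I, which is easy to verify numerically. A short check, reusing the illustrative cofactor_matrix helper sketched after (8.55), might look like:

```python
# Check of (8.57): sum_k C_{ki} A_{kj} = |A| delta_{ij}, i.e. C^T A = |A| I.
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0],
              [0.0, 2.0, 1.0]])
C = cofactor_matrix(A)   # cofactors of A (helper defined in the earlier sketch)
print(np.allclose(C.T @ A, np.linalg.det(A) * np.eye(3)))   # True
```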

this can be proved by considering the matrix A′ obtained from the original matrix A when the ith column of A is replaced by one of the other columns, say the jth. Thus A′ is a matrix with two identical columns and so has zero determinant. However, replacing the ith column by another does not change the cofactors C_{ki} of the elements in the ith column, which are therefore the same in A and A′.

Recalling the Laplace expansion of a determinant, i.e.

|A| = \sum_k A_{ki} C_{ki},
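Here i labels the column being expanded along. A sketch of this column expansion (the function name det_by_column_expansion is illustrative) is:

```python
# Laplace expansion down column i: |A| = sum_k A_{ki} C_{ki}.
import numpy as np

def det_by_column_expansion(A, i=0):
    """Recursive determinant via cofactor expansion along column i."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for k in range(n):
        # cofactor C_{ki}: signed determinant of A with row k and column i removed
        minor = np.delete(np.delete(A, k, axis=0), i, axis=1)
        total += A[k, i] * (-1) ** (k + i) * det_by_column_expansion(minor)
    return total

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(det_by_column_expansion(A), np.linalg.det(A))   # both give -2.0
```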

we obtain

0 = |A′| = \sum_k A′_{ki} C′_{ki} = \sum_k A_{kj} C_{ki},    i ≠ j,

which together with the Laplace expansion itself may be summarised by (8.57).


It is immediately obvious from (8.55) that the inverse of a matrix is not defined if the matrix is singular (i.e. if |A| = 0).
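A brief numerical illustration of this remark (a sketch only; NumPy's inv raises an error for singular input):

```python
# A singular matrix (|A| = 0) has no inverse.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row is twice the first, so |A| = 0
print(np.isclose(np.linalg.det(A), 0.0))        # True
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print("inverse not defined:", err)          # reports a singular matrix
```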
