Mathematical Tools for Physics

7—Operators and Matrices 196

For the example Eq. (29) the eigenvectors are given in Eq. (30). In components these are

\[
\vec e\,'_1 \rightarrow \begin{pmatrix} 1 \\ -1 \end{pmatrix},
\quad\text{and}\quad
\vec e\,'_2 \rightarrow \begin{pmatrix} 1 \\ 1 \end{pmatrix},
\quad\text{implying}\quad
S = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}
\]

The inverse to this matrix is

\[
S^{-1} = \frac{1}{2}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}
\]

You should verify that $S^{-1}MS$ is diagonal.
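The matrix $M$ of Eq. (29) is not reproduced in this excerpt, but any symmetric $2\times2$ matrix with equal diagonal entries has these eigenvectors, so the sketch below assumes $M = \begin{pmatrix}2&1\\1&2\end{pmatrix}$ (eigenvalues 1 and 3) as a stand-in and carries out the verification numerically:

```python
import numpy as np

# Hypothetical stand-in for the matrix of Eq. (29); its eigenvectors
# are (1, -1) and (1, 1), with eigenvalues 1 and 3.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# The columns of S are the eigenvector components found above.
S = np.array([[ 1.0, 1.0],
              [-1.0, 1.0]])

S_inv = np.linalg.inv(S)   # equals (1/2) [[1, -1], [1, 1]]
D = S_inv @ M @ S          # similarity transform to the eigenbasis

print(D)                   # off-diagonal entries are zero
```

The diagonal entries of `D` come out in the order the eigenvectors were placed in the columns of `S`.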


7.10 Summation Convention
In all the manipulation of components of vectors and components of operators you have to do a lot of sums.
There are so many sums over indices that a convention* was invented (by Einstein) to simplify the notation.

A repeated index in a term is summed.
Eq. (6) becomes $f(\vec e_i) = f_{ki}\vec e_k$.
Eq. (8) becomes $u_k = f_{ki}v_i$.
Eq. (21) becomes $h_{ki} = f_{kj}g_{ji}$.
$IM = M$ becomes $\delta_{ij}M_{jk} = M_{ik}$.
What if there are three identical indices in the same term? Then you made a mistake; that can't happen.
What about Eq. (27)? That has three indices. Yes, and there I explicitly said that there is no sum. This sort of
rare case you have to handle as an exception.
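The convention is just bookkeeping for ordinary sums, and numpy's `einsum` spells out those same sums with the same index notation; a quick sketch (the particular $2\times2$ component values are made up for illustration):

```python
import numpy as np

# Made-up components for the operators f, g and a vector v.
f = np.array([[1.0, 2.0],
              [3.0, 4.0]])
g = np.array([[0.0, 1.0],
              [1.0, 0.0]])
v = np.array([5.0, 6.0])

# u_k = f_{ki} v_i : the repeated index i is summed.
u = np.einsum('ki,i->k', f, v)

# h_{ki} = f_{kj} g_{ji} : the repeated index j is summed.
h = np.einsum('kj,ji->ki', f, g)

print(u)   # same as the matrix-vector product f @ v
print(h)   # same as the matrix product f @ g
```

In the `einsum` subscript string, an index that appears twice on the left and not on the right is summed over, exactly mirroring the convention.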


7.11 Can you Diagonalize a Matrix?
At the beginning of section 7.8 I said that the basis in which the components of an operator form a diagonal
matrix "almost always exists." There's a technical sense in which this is precisely true (except on a set of measure
zero), but that's not what you need to know in order to manipulate matrices; the theorem that you need is
that every matrix is the limit of a sequence of diagonalizable matrices. If you encounter a matrix that cannot
be diagonalized, then you can approximate it as closely as you want by a matrix that can be diagonalized, do your
calculations, and finally take a limit. You already did this if you did problem 4.11, but in that chapter it didn't
look anything like a problem involving matrices, much less diagonalization of matrices. Yet it is the same.
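As a sketch of that limiting procedure (this is an illustration, not the book's problem 4.11): the Jordan block $\begin{pmatrix}1&1\\0&1\end{pmatrix}$ has a repeated eigenvalue and only one independent eigenvector, so it cannot be diagonalized. Perturb it so that it can be, compute $e^M$ through the eigenbasis, and the results approach the exact answer $e^M = e\begin{pmatrix}1&1\\0&1\end{pmatrix}$:

```python
import numpy as np

def exp_via_diagonalization(A):
    """exp(A) computed as S exp(D) S^{-1}; assumes A is diagonalizable."""
    w, S = np.linalg.eig(A)
    # Multiply column j of S by exp(w_j), then undo the change of basis.
    return (S * np.exp(w)) @ np.linalg.inv(S)

# Perturb the repeated eigenvalue so the matrix becomes diagonalizable,
# compute there, and let the perturbation shrink toward zero.
for eps in [1e-2, 1e-4, 1e-6]:
    M_eps = np.array([[1.0, 1.0],
                      [0.0, 1.0 + eps]])
    print(exp_via_diagonalization(M_eps))

# Exact answer for comparison: exp(M) = e * [[1, 1], [0, 1]]
print(np.e * np.array([[1.0, 1.0], [0.0, 1.0]]))
```

For very small `eps` the eigenvector matrix `S` becomes nearly singular, so in floating point there is a trade-off between how close the perturbed matrix is to the original and how accurately it can be inverted.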


* There is a modification of this convention that appears in chapter 12, section 12.4.