12—Tensors 301
for all vectors ~u and ~v. You should see that using the same symbol, T, for both functions doesn't
cause any trouble. Given the bilinear functional, what is the explicit form for T(~v)? The answer is
most readily found by a bit of trial and error until you reach the following result:

T(~v) = ê_i T(ê_i, ~v)        (12.20)
(Remember the summation convention.) To verify this relation, multiply by an arbitrary vector,
~u = u_j ê_j:

~u · T(~v) = (u_j ê_j) · ê_i T(ê_i, ~v)

which is, by the orthonormality of the ê's,

u_j δ_ji T(ê_i, ~v) = u_i T(ê_i, ~v) = T(~u, ~v)
This says that the above expression is in fact the correct one. Notice also the similarity between this
construction and the one in equation (12.19) for ~A.

Now take T(~v) from Eq. (12.20) and express ~v in terms of its components:

~v = v_j ê_j,    then    T(~v) = ê_i T(ê_i, v_j ê_j) = ê_i T(ê_i, ê_j) v_j

The i-th component of this expression is

T(ê_i, ê_j) v_j = T_ij v_j
a result already obtained in Eq. (12.16).
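The construction above can be checked numerically. The following is a minimal sketch, assuming a 3-dimensional space with an arbitrarily chosen matrix of components T_ij = T(ê_i, ê_j); the particular values are illustrative, not from the text.

```python
import numpy as np

# Illustrative components T_ij = T(e_i, e_j); the values are arbitrary.
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [4.0, 0.0, 2.0]])

def T_bilinear(u, v):
    # The bilinear functional: T(u, v) = u_i T_ij v_j
    return u @ T @ v

def T_vector(v):
    # The vector-valued function T(v) = e_i T(e_i, v);
    # its i-th component is T_ij v_j, i.e. the matrix acting on the column v.
    return T @ v

u = np.array([1.0, -1.0, 2.0])
v = np.array([0.5, 2.0, 1.0])

# The verification step: u . T(v) reproduces the bilinear form T(u, v).
assert np.isclose(u @ T_vector(v), T_bilinear(u, v))
```

In components this is exactly Eq. (12.16): the i-th entry of `T_vector(v)` is the sum T_ij v_j.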
There's a curiosity involved here; why should the left-hand entry in T( , ) be singled out to
construct

ê_i T(ê_i, ~v)?

Why not use the right-hand one instead? Answer: No reason at all. It's easy enough to find out what
happens when you do this. Examine

ê_i T(~v, ê_i) ≡ T̃(~v)        (12.21)

Put ~v = v_j ê_j, and you get

ê_i T(v_j ê_j, ê_i) = ê_i T(ê_j, ê_i) v_j

The i-th component of which is

T_ji v_j
If you write this as a square matrix times a column matrix, the only difference between this result
and that of Eq. (12.16) is that the matrix is transposed. This vector-valued function T̃ is called the
transpose of the tensor T. The nomenclature comes from the fact that in the matrix representation,
the matrix of one equals the transpose of the other's matrix.
By an extension of the language, this applies to the other form of the tensor, T:

T̃(~u, ~v) = T(~v, ~u)        (12.22)
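The transpose relation can also be checked numerically. A minimal sketch, again assuming an arbitrarily chosen illustrative matrix of components T_ij (not values from the text):

```python
import numpy as np

# Illustrative components T_ij = T(e_i, e_j); the values are arbitrary.
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [4.0, 0.0, 2.0]])

def T_tilde_vector(v):
    # The transpose tensor as a vector-valued function,
    # T~(v) = e_i T(v, e_i); its i-th component is T_ji v_j,
    # i.e. the transposed matrix acting on the column v.
    return T.T @ v

u = np.array([1.0, -1.0, 2.0])
v = np.array([0.5, 2.0, 1.0])

# Eq. (12.22): T~(u, v) = u . T~(v) equals T(v, u) = v_i T_ij u_j.
assert np.isclose(u @ T_tilde_vector(v), v @ T @ u)
```

The only difference from the earlier matrix picture is the `T.T`: the matrix representing the transposed tensor is the transpose of the matrix representing T.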
Symmetries
Two of the common and important classifications of matrices, symmetric and antisymmetric, have their
reflections in tensors. A symmetric tensor is one that equals its transpose and an antisymmetric tensor