and lip thickness to achieve greater recognition accuracy.
But the facial features still had to be manually entered into
the computer.
In 1987, mathematicians L. Sirovich and M. Kirby developed an approach to efficiently represent facial images using principal component analysis (PCA).
This was used as the basis of facial recognition by computer scientists Matthew Turk and Alex Pentland in 1991.
PCA is a statistical technique whereby a large number
of possibly correlated variables are reduced to a smaller
number of non-correlated variables.
While the resulting set of variables is significantly smaller than the starting set, it still contains most of the same information. In other words, it is a method of “lossy” data compression, or dimensionality reduction as it is also known.
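
To see what this means in practice, here is a minimal Python/NumPy sketch (using made-up synthetic data, not anything from a face database) which reduces ten partly correlated variables to three principal components while keeping most of the variability:

# A minimal sketch of PCA as dimensionality reduction, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# 200 samples of 10 partly correlated variables (three hidden factors plus noise).
hidden = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 10))
data = hidden @ mixing + 0.05 * rng.normal(size=(200, 10))

# Centre the data and take the eigenvectors of its covariance matrix.
centred = data - data.mean(axis=0)
eigenvalues, eigenvectors = np.linalg.eigh(np.cov(centred, rowvar=False))

# eigh() returns components in ascending order; re-order them largest-first.
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Keep only the first three principal components: 10 variables become 3.
reduced = centred @ eigenvectors[:, :3]
print("Three components explain {:.1%} of the variance".format(
    eigenvalues[:3].sum() / eigenvalues.sum()))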
The first principal component accounts for most of the variability in the data set, the second accounts for most of the remaining variability, and so on. Principal component analysis as applied to human faces results in a set of images known as eigenfaces (Fig.3).
In practice, relatively few principal components can
account for most of the variability of human faces (Fig.4).
This technique dramatically simplifies data processing as
much less data needs to be stored and compared.
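
As a rough illustration of how such eigenfaces are computed (a sketch only; the faces array of flattened, equal-sized greyscale images and the function name are assumptions, and NumPy's SVD is used here rather than any particular face-recognition library):

# Sketch: eigenfaces are the principal components of a stack of face images.
import numpy as np

def compute_eigenfaces(faces, num_components):
    """faces: N x (height*width) array of flattened greyscale face images."""
    mean_face = faces.mean(axis=0)
    centred = faces - mean_face
    # Rows of vt are the principal components of the centred images,
    # ordered by decreasing variance; these are the eigenfaces.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean_face, vt[:num_components]

# Hypothetical usage: mean_face, eigenfaces = compute_eigenfaces(faces, 43)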
Sirovich and Kirby determined that a large collection of facial images could be represented simply by a small set of “standard” faces (eigenfaces), with weighting factors applied to approximate each member of the collection.
Eigenfaces might also be thought of as “standardised
face ingredients” and any human face can be considered
a combination of various proportions of these standard
faces, eg, an individual might comprise 10% of eigenface
#1, 16% of eigenface #2 etc.
Relatively few eigenfaces are needed to represent all human faces, as long as the appropriate mix of each is applied. For example, combinations of 43 eigenfaces can be used to represent 95% of all human faces.
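
That “mix” is simply a vector of weighting factors, found by projecting a face onto each eigenface; the same weights can then rebuild an approximation of the face, as in Fig.4. A sketch, assuming the mean_face and eigenfaces arrays from the compute_eigenfaces() sketch above:

# Sketch: a face is roughly the mean face plus a weighted sum of eigenfaces.
import numpy as np

def face_to_weights(face, mean_face, eigenfaces):
    """Weighting factors for one flattened face image."""
    return eigenfaces @ (face - mean_face)

def weights_to_face(weights, mean_face, eigenfaces):
    """Approximate reconstruction of a face from its weighting factors."""
    return mean_face + weights @ eigenfaces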
Turk and Pentland essentially applied the inverse of
Sirovich and Kirby’s work (a way to represent known
faces) to identify unknown faces. Their technique took
unknown faces and determined what weighting factors
needed to be applied to generate the features of a known
individual in a database (eigendecomposition).
The closer the weighting factors were between the known and unknown faces, the more likely they belonged to the same individual.
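
A hypothetical sketch of that matching step (the names and the simple nearest-neighbour comparison are assumptions, not Turk and Pentland’s exact procedure): each known face is stored as its weight vector, and an unknown face is matched to whichever stored vector its own weights lie closest to.

# Sketch of the recognition step: compare weight vectors, pick the closest.
import numpy as np

def identify(unknown_face, known_weights, labels, mean_face, eigenfaces,
             threshold=None):
    """Return the label of the closest known face, or None if nothing is close."""
    # Project the unknown face onto the eigenfaces to get its weights.
    w = eigenfaces @ (unknown_face - mean_face)
    distances = np.linalg.norm(known_weights - w, axis=1)
    best = int(np.argmin(distances))
    if threshold is not None and distances[best] > threshold:
        return None  # weights too far from any stored face
    return labels[best]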

Fig.4: reconstructing a photo of one person by combining eigenfaces computed from the AT&T Database of Faces using the OpenCV software.


Fig.5: a selection of Fisherfaces from Yale Face Database A,
computed by OpenCV.


Fig.6: elastic bunch graph mapping showing a subject in
three different poses.

Fig.7: a faceprint of a test subject for Aurora 3D facial
recognition software.