This document is based upon Turk and Pentland (1991b), Turk and Pentland (1991a) and Smith (2002).
The task of facial recognition is to discriminate input signals (image data) into several classes (persons). The input signals are highly noisy (the noise is caused by differing lighting conditions, pose, etc.), yet the input images are not completely random, and in spite of their differences there are patterns that occur in every input signal. In the domain of facial recognition, such patterns are the presence of certain objects (eyes, nose, mouth) in every face, as well as the relative distances between these objects. These characteristic features are called eigenfaces in the facial recognition domain (or principal components more generally).
image data by means of a mathematical tool called Principal Component Analysis
(PCA).
By means of PCA one can transform each original image of the training set into a
corresponding eigenface. An important feature of PCA is that one can reconstruct any original image of the training set by combining the eigenfaces. Remember that
eigenfaces are nothing other than characteristic features of the faces. Therefore one could say
that the original face image can be reconstructed from eigenfaces if one adds up all the
eigenfaces (features) in the right proportion. Each eigenface represents only certain features
of the face, which may or may not be present in the original image. If the feature is present in
the original image to a higher degree, the share of the corresponding eigenface in the
"sum" of the eigenfaces should be greater. If, on the contrary, the particular feature is not present (or almost not present) in the original image, then the corresponding eigenface should contribute only a small part (or nothing at all) to the sum of eigenfaces. So, in order
to reconstruct the original image from the eigenfaces, one has to build a kind of
weighted sum of all eigenfaces. That is, the reconstructed original image is equal to a
sum of all eigenfaces, with each eigenface having a certain weight. This weight
specifies, to what degree the specific feature (eigenface) is present in the original
image.
If one uses all the eigenfaces extracted from original images, one can reconstruct the
original images from the eigenfaces exactly. But one can also use only a part of
the eigenfaces. Then the reconstructed image is an approximation of the original
image. However, the loss due to omitting some of the eigenfaces can be minimized by choosing only the most important features (eigenfaces). Omitting eigenfaces is necessary in the first place because of the scarcity of computational resources.
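As a sketch of the two paragraphs above, the following NumPy snippet reconstructs a training image exactly from all eigenfaces, and approximately from only the most important ones. The data here are synthetic stand-ins for real face images, and the SVD route to the eigenfaces is one common way to obtain them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set": 10 flattened 8x8 images (synthetic stand-ins for faces).
M, d = 10, 64
images = rng.normal(size=(M, d))

mean_face = images.mean(axis=0)
diffs = images - mean_face

# Eigenfaces via SVD of the mean-centred data; rows of Vt are orthonormal
# and are ordered from most to least important.
_, _, Vt = np.linalg.svd(diffs, full_matrices=False)
eigenfaces = Vt  # shape (M, d)

# Weights: to what degree each eigenface is present in the first image.
w = eigenfaces @ diffs[0]

# The weighted sum of ALL eigenfaces reconstructs the image exactly.
exact = mean_face + w @ eigenfaces
print(np.allclose(exact, images[0]))  # True

# Using only the 5 most important eigenfaces gives an approximation.
approx = mean_face + w[:5] @ eigenfaces[:5]
print(np.linalg.norm(images[0] - approx))  # small, but nonzero
```

The reconstruction error of the truncated sum is exactly the energy of the omitted components, which is why keeping the most important eigenfaces minimizes the loss.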
How does this relate to facial recognition? The key point is that it is possible not only to reconstruct a face from the eigenfaces given a set of weights, but also to go the opposite way: to extract the weights from the eigenfaces and the face to be recognized. These weights express nothing less than the amount by which the face in question differs from the "typical" faces represented by the eigenfaces. Therefore, using these weights one can determine two important things:
An eigenvector of a matrix is a vector such that, if multiplied by the matrix, the result is always a scalar multiple of that vector. This scalar is the corresponding eigenvalue of the eigenvector. The relationship can be described by the equation M × u = λ × u, where u is an eigenvector of the matrix M and λ is the corresponding eigenvalue.
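The defining relation M × u = λ × u can be checked numerically; a minimal sketch with NumPy (the 2 × 2 matrix is an arbitrary example):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# numpy returns the eigenvalues and the corresponding eigenvectors (as columns).
eigenvalues, eigenvectors = np.linalg.eig(M)

# Verify M @ u == lambda * u for each eigenvalue/eigenvector pair.
for lam, u in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(M @ u, lam * u))  # True
```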
Eigenvectors possess the following properties:
In this section, the original scheme for the determination of the eigenfaces using PCA is presented. The algorithm described in the scope of this paper is a variation of the one outlined here. A detailed (and more theoretical) description of PCA can be found in (Pissarenko, 2002, pp. 70-72).
Ψ = (1/M) Σ_{n=1}^{M} Γ_n    (1)

Φ_i = Γ_i − Ψ    (2)

C = (1/M) Σ_{n=1}^{M} Φ_n Φ_n^T    (3)
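As a sketch, the computation of the average face (1), the difference images (2), and the covariance matrix (3) can be written in NumPy; the training images here are synthetic stand-ins for real face data:

```python
import numpy as np

rng = np.random.default_rng(1)
M, d = 10, 64                      # 10 training images, 8x8 pixels flattened
Gamma = rng.normal(size=(d, M))    # column i is training image Gamma_i

Psi = Gamma.mean(axis=1, keepdims=True)   # (1) average face
Phi = Gamma - Psi                         # (2) difference images
A = Phi                                   # columns Phi_1 ... Phi_M
C = (A @ A.T) / M                         # (3) covariance matrix, d x d

print(C.shape)  # (64, 64)
```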
There is a problem with the algorithm described in section 5. The covariance matrix C in step
3 (see equation 3) has a dimensionality of N2 × N2, so one would have N2 eigenfaces and
eigenvalues. For a 256 × 256 image that means that one must compute a 65,536 × 65,536
matrix and calculate 65,536 eigenfaces. Computationally, this is not very efficient as most of
those eigenfaces are not useful for our task.
So, steps 3 and 4 are replaced by the scheme proposed by Turk and Pentland (1991a):
C = (1/M) Σ_{n=1}^{M} Φ_n Φ_n^T = A A^T    (4)

L = A^T A,   L_{n,m} = Φ_m^T Φ_n    (5)

u_l = Σ_{k=1}^{M} v_{lk} Φ_k,   l = 1, ..., M    (6)

where A = [Φ_1 Φ_2 ... Φ_M] and v_l are the eigenvectors of L.
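The trick behind equations (4) to (6) can be sketched in NumPy: instead of diagonalising the huge N² × N² matrix A Aᵀ, one diagonalises the small M × M matrix L = Aᵀ A and maps its eigenvectors back. The dimensions and data below are illustrative (the constant factor 1/M only rescales the eigenvalues, not the eigenvectors):

```python
import numpy as np

rng = np.random.default_rng(2)
M, d = 10, 1024                 # 10 images of 32x32 pixels, flattened
Phi = rng.normal(size=(d, M))   # stand-in difference images (columns)
A = Phi

# (5): diagonalise the small M x M matrix L = A^T A instead of the d x d one.
L = A.T @ A
lambdas, V = np.linalg.eigh(L)  # eigenvectors v_l of L (columns of V)

# (6): eigenfaces of A A^T are linear combinations u_l = sum_k v_lk * Phi_k.
U = A @ V
U /= np.linalg.norm(U, axis=0)  # normalise each eigenface

# Verify: u_l is indeed an eigenvector of C = A A^T with eigenvalue lambda_l.
C = A @ A.T
print(np.allclose(C @ U[:, -1], lambdas[-1] * U[:, -1]))  # True
```

The verification works because C (A v) = A (Aᵀ A) v = A (L v) = λ (A v), so each mapped vector A v is an eigenvector of C with the same eigenvalue.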
The process of classifying a new (unknown) face Γ_new into one of the classes (known faces) proceeds in two steps.
First, the new image is transformed into its eigenface components. The resulting weights form the weight vector Ω_new^T:
ω_k = u_k^T (Γ_new − Ψ),   k = 1, ..., M′    (7)

Ω_new^T = [ω_1 ω_2 ... ω_{M′}]    (8)
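The projection step of equations (7) and (8) is a single matrix-vector product; a minimal sketch, with random orthonormal vectors standing in for real eigenfaces and the average face:

```python
import numpy as np

rng = np.random.default_rng(3)
d, M_prime = 64, 5

# Stand-ins: M' orthonormal eigenfaces (columns), average face, new image.
U = np.linalg.qr(rng.normal(size=(d, M_prime)))[0]
Psi = rng.normal(size=d)
Gamma_new = rng.normal(size=d)

# (7): omega_k = u_k^T (Gamma_new - Psi) for k = 1..M',
# stacked into (8): the weight vector Omega_new.
Omega_new = U.T @ (Gamma_new - Psi)

print(Omega_new.shape)  # (5,)
```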
Let an arbitrary instance x be described by the feature vector

x = [a_1(x), a_2(x), ..., a_n(x)]    (9)

where a_r(x) denotes the value of the r-th attribute of instance x. The distance between two instances x_i and x_j is then defined as

d(x_i, x_j) = sqrt( Σ_{r=1}^{n} (a_r(x_i) − a_r(x_j))² )    (10)
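Assuming equation (10) is the standard Euclidean distance over the feature vector of equation (9), it translates directly into a short function:

```python
import numpy as np

def euclidean_distance(x_i, x_j):
    """d(x_i, x_j) = sqrt(sum_r (a_r(x_i) - a_r(x_j))^2), as in eq. (10)."""
    x_i, x_j = np.asarray(x_i, dtype=float), np.asarray(x_j, dtype=float)
    return float(np.sqrt(np.sum((x_i - x_j) ** 2)))

print(euclidean_distance([0.0, 0.0], [3.0, 4.0]))  # 5.0
```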
I | Face image
N × N | Size of I
{Γ_1, ..., Γ_M} | Training set
Γ_i | Face image i of the training set
Γ_new | New (unknown) image
Ψ | Average face
M | Number of eigenfaces
M′ | Number of eigenfaces used for face recognition
C | Covariance matrix
X^T | Transpose of X (if X is a matrix)
u | Eigenvector (eigenface)
λ | Eigenvalue
ω_i | Weight i
Ω_i^T | Weight vector of the image i
Θ | Threshold value
T. M. Mitchell. Machine Learning. McGraw-Hill International Editions, 1997.
D. Pissarenko. Neural networks for financial time series prediction: Overview over recent research. BSc thesis, 2002.
L. I. Smith. A tutorial on principal components analysis, February 2002. URL http://www.cs.otago.ac.nz/cosc453/student_tutorials/principal_components.pdf. (URL accessed on November 27, 2002).
M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1), 1991a. URL http://www.cs.ucsb.edu/~mturk/Papers/jcn.pdf. (URL accessed on November 27, 2002).
M. A. Turk and A. P. Pentland. Face recognition using eigenfaces. In Proc. of Computer Vision and Pattern Recognition, pages 586-591. IEEE, June 1991b. URL http://www.cs.wisc.edu/~dyer/cs540/handouts/mturk-CVPR91.pdf. (URL accessed on November 27, 2002).