Introduction to Face Detection and Face Recognition
“Face Recognition” is a very active area in the Computer Vision and Biometrics fields: it has been studied vigorously for 25 years and is finally producing applications in security, robotics, human-computer interfaces, digital cameras, games and entertainment.
“Face Recognition” generally involves two stages:
- Face Detection, where a photo is searched to find any face (shown here as a green rectangle), then image processing cleans up the facial image for easier recognition.
- Face Recognition, where that detected and processed face is compared to a database of known faces, to decide who that person is (shown here as red text).
Since around 2002, Face Detection can be performed fairly reliably, such as with OpenCV’s Face Detector, which works in roughly 90-95% of clear photos of a person looking forward at the camera. It is usually harder to detect a person’s face when it is viewed from the side or at an angle, and sometimes this requires 3D Head Pose Estimation. Detection can also be very difficult if the photo is dim, if part of the face is brighter than the rest, or if the face is shadowed, blurred, or partly covered by glasses, etc.
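OpenCV’s Face Detector (the Viola-Jones cascade) gets its speed from Haar-like features evaluated over an *integral image*, in which the sum of pixels inside any rectangle can be read off in at most four lookups. A minimal pure-Python sketch of that underlying data structure (the image is just a list of rows of grayscale values; in practice OpenCV computes this internally):

```python
def integral_image(img):
    """Build a summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the inclusive rectangle (x0,y0)-(x1,y1): four lookups."""
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total
```

Because every Haar feature is a difference of rectangle sums, the detector can test thousands of features per window in constant time per feature.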
However, Face Recognition is much less reliable than Face Detection, generally achieving 30-70% accuracy. Face Recognition has been a strong field of research since the 1990s, but it is still far from reliable, and new techniques are being invented each year.
Eigenfaces (also called “Principal Component Analysis” or PCA) is a simple and popular method of 2D Face Recognition from a photo, as opposed to other common methods such as Neural Networks or Fisherfaces.
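The Eigenfaces idea can be sketched briefly: each face image is flattened into a long vector, PCA finds the principal components (“eigenfaces”) of the training set, and a new face is recognized by comparing its coordinates in that reduced “face space” to those of known faces. A minimal numpy sketch on synthetic data (the array shapes and function names are illustrative, not from any particular library):

```python
import numpy as np

def train_eigenfaces(faces, num_components):
    """faces: (n_samples, n_pixels) array of flattened grayscale images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data gives the principal components directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:num_components]           # eigenfaces as rows

def project(face, mean, eigenfaces):
    """Coordinates of a face in the reduced 'face space'."""
    return eigenfaces @ (face - mean)

def nearest_face(face, mean, eigenfaces, gallery):
    """Index of the gallery face whose projection is closest (recognition)."""
    q = project(face, mean, eigenfaces)
    dists = [np.linalg.norm(q - project(g, mean, eigenfaces)) for g in gallery]
    return int(np.argmin(dists))
```

Recognition then reduces to a nearest-neighbour search over a handful of coefficients per face instead of thousands of raw pixels.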
How to preprocess facial images for Face Recognition
If you try to perform face recognition directly on an unprocessed photo, you will probably get less than 10% accuracy!
It is extremely important to apply various image pre-processing techniques to standardize the images you supply to a face recognition system. Most face recognition algorithms are extremely sensitive to lighting conditions: if the system was trained to recognize a person in a dark room, it probably won’t recognize them in a bright room. This problem is referred to as being “illumination dependent”. There are many other issues as well: the face should be in a very consistent position within the image (for example, with the eyes at the same pixel coordinates), and have consistent size, rotation angle, hair and makeup, emotion (smiling, angry, etc.) and position of lights (to the left or above, etc.). This is why it is so important to use good image preprocessing filters before applying face recognition. You should also remove the pixels around the face that aren’t used, such as with an elliptical mask that shows only the inner face region, not the hair and image background, since these change more than the face does.
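Two of the simplest standardizations mentioned above, histogram equalization and an elliptical mask, can be sketched in plain Python on an 8-bit grayscale image stored as a list of rows (pure Python for clarity; in practice you would use OpenCV’s built-in equivalents such as `cvEqualizeHist`):

```python
def equalize_histogram(img):
    """Spread the intensities of an 8-bit grayscale image over the full 0-255 range."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    for row in img:
        for p in row:
            hist[p] += 1
    # Cumulative distribution, remapped so the darkest occurring value becomes 0.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    scale = 255.0 / max(h * w - cdf_min, 1)
    lut = [round((c - cdf_min) * scale) if c >= cdf_min else 0 for c in cdf]
    return [[lut[p] for p in row] for row in img]

def elliptical_mask(img, fill=0):
    """Black out pixels outside an ellipse inscribed in the image rectangle."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            inside = ((x - cx) / (w / 2.0)) ** 2 + ((y - cy) / (h / 2.0)) ** 2 <= 1.0
            row.append(img[y][x] if inside else fill)
        out.append(row)
    return out
```

After equalization, two photos of the same face taken in a dark and a bright room end up with much more similar pixel distributions, which is exactly what an illumination-sensitive method like Eigenfaces needs.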
For simplicity, the face recognition system shown here is Eigenfaces on greyscale images. You can easily convert color images to greyscale (also called ‘grayscale’), and then apply Histogram Equalization as a very simple method of automatically standardizing the brightness and contrast of your facial images. For better results, you could use color face recognition (ideally with color histogram fitting in HSV or another color space, instead of RGB), or apply more processing stages such as edge enhancement, contour detection or motion detection. Also, this code resizes images to a standard size, which might change the aspect ratio of the face.
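One way to standardize the size without distorting the face is to scale both axes by a single factor and pad the leftover border, rather than stretching width and height independently. A hypothetical pure-Python sketch using nearest-neighbour sampling (real code would use an OpenCV resize with better interpolation):

```python
def resize_keep_aspect(img, size, pad=0):
    """Fit a grayscale image (list of rows) into a size x size square,
    preserving aspect ratio and filling the leftover border with `pad`."""
    h, w = len(img), len(img[0])
    scale = size / max(h, w)
    new_h, new_w = max(1, round(h * scale)), max(1, round(w * scale))
    out = [[pad] * size for _ in range(size)]
    # Center the scaled image inside the square canvas.
    off_y, off_x = (size - new_h) // 2, (size - new_w) // 2
    for y in range(new_h):
        src_y = min(h - 1, int(y / scale))
        for x in range(new_w):
            src_x = min(w - 1, int(x / scale))
            out[off_y + y][off_x + x] = img[src_y][src_x]
    return out
```

The trade-off is that the padded border adds constant pixels to every image, which the elliptical mask described earlier would remove anyway.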