Unified Representation of Multimodal Biometrics for Robust Identification and Authentication
The main objectives of my PhD work are to develop an efficient and robust approach for ear and face detection and data representation, and to construct a robust fusion approach that combines these two modalities for human recognition. We have developed a technique for ear detection from 2D profile images based on the Cascaded AdaBoost algorithm with Haar-like features. After the ear region is detected, 3D ear data are extracted from the corresponding range profile images. We have also developed a fast approach for ear recognition using local 3D features. These local features are used to build a rejection classifier, to extract a minimal rectangular feature-rich region, and to compute the initial transformation for the Iterative Closest Point (ICP) algorithm. An improved feature-matching technique is also proposed that exploits geometric consistency among the corresponding features. To combine the ear and face biometrics, a score-level fusion rule is applied first; the two modalities are then fused at the feature level based on the similarity of their shapes. We are currently working on improving the performance of the approach.
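As an illustration of the score-level fusion step, the following sketch combines per-subject match scores from the two modalities with min-max normalization and a weighted sum. This is a minimal, hypothetical example of a common score-fusion rule, not the specific rule used in the thesis; the scores, the weight `w_ear`, and the function names are illustrative assumptions.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so the two modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def weighted_sum_fusion(ear_scores, face_scores, w_ear=0.5):
    """Fuse per-gallery-subject similarity scores from the ear and face matchers.

    w_ear is an assumed modality weight; in practice it would be tuned on
    a validation set.
    """
    e = min_max_normalize(ear_scores)
    f = min_max_normalize(face_scores)
    return w_ear * e + (1.0 - w_ear) * f

# Hypothetical similarity scores of one probe against a 4-subject gallery.
ear = [0.82, 0.31, 0.55, 0.10]
face = [0.60, 0.75, 0.40, 0.20]
fused = weighted_sum_fusion(ear, face, w_ear=0.6)
best = int(np.argmax(fused))  # identification decision: highest fused score
```

A design note on this family of rules: normalizing each matcher's scores before fusing prevents a modality with a larger raw score range from dominating the sum.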
Multimodal biometrics is a comparatively new research area in which multiple physiological or behavioral characteristics of a user are taken into consideration for identification and verification purposes. Such a combination considerably reduces the limitations of the individual biometrics. Approaches proposed so far mostly use global features and fuse multiple biometrics at the score or decision level. Our approaches, with local 3D features and fusion at the feature level, provide improved accuracy, robustness and efficiency. Our approach for ear detection is also very fast and fully automatic. After completion of the research work, the proposed approach can be applied in many areas such as security, access control, entertainment, forensic studies and government ID systems.