Error Level Fusion of Multimodal Biometrics
Madasu Hanmandlu, Jyotsana Grover, Shantaram Vasikarla, Hari Mohan Gupta
JPRR Vol 6, No 2 (2011); doi:10.13176/11.314 
Abstract
This paper presents a multimodal biometric system based on error level fusion. Two error level fusion strategies are proposed: one based on the Choquet integral and the other on t-norms. The first strategy fully exploits the non-additive aspect of the integral, which accounts for the dependence or overlapping information between the error rates (FARs and FRRs) of the biometric modalities under consideration. A hybrid learning algorithm combining Particle Swarm Optimization, Bacterial Foraging, and Reinforcement Learning is developed to learn the fuzzy densities and the interaction factor. The second strategy employs t-norms, which require no learning. Fusing the error rates using t-norms is not only fast but also yields very good performance. This fusion is a form of decision level fusion, as the error rates are derived from the decisions made on the individual modalities. Experimental evaluation on two hand-based datasets and two publicly available datasets confirms the utility of error level fusion.
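The two fusion strategies named in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: the two-source restriction, the Sugeno λ-measure form of the fuzzy measure, and the choice of the Hamacher family as the example t-norm are assumptions made here for concreteness.

```python
def choquet_2(h1, h2, g1, g2, lam):
    """Discrete Choquet integral of two information sources.

    g1, g2 are the fuzzy densities of the two sources and lam is the
    interaction factor of an assumed Sugeno lambda-measure, so that
    g({1,2}) = g1 + g2 + lam*g1*g2 (equal to 1 when the measure is
    normalized). The integral sums differences of the sorted inputs
    weighted by the measure of the remaining sources.
    """
    g12 = g1 + g2 + lam * g1 * g2
    if h1 <= h2:
        return h1 * g12 + (h2 - h1) * g2
    return h2 * g12 + (h1 - h2) * g1


def product_tnorm(a, b):
    """Simplest t-norm: the algebraic product."""
    return a * b


def hamacher_tnorm(a, b, gamma=0.5):
    """Hamacher family t-norm (illustrative choice), gamma > 0.

    Reduces to the product t-norm at gamma = 1. No learning is
    required: the fused value depends only on a, b, and gamma.
    """
    if a == 0.0 and b == 0.0:
        return 0.0
    return (a * b) / (gamma + (1.0 - gamma) * (a + b - a * b))
```

With lam = 0 and equal densities the Choquet integral reduces to a weighted average, while nonzero lam lets it model overlap between the modalities' error rates; the t-norm combiners, by contrast, need no trained parameters at all.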