Multi-biometrics fusion (heart sound-speech authentication system)

Osama Al-Hamdani, Ali Chekima, Jamal Ahmad Dargham, Sh Hussain Saleh, Alias Mohd Noor and Fuad Numan (2012) Multi-biometrics fusion (heart sound-speech authentication system). In: International Symposia on Imaging and Signal Processing in Health Care and Technology, ISPHT 2012, 14-16 May 2012, Baltimore, Maryland.

Full text not available from this repository.


Biometric recognition systems deployed in real-world environments often have to contend with adverse signal acquisition conditions, which can vary greatly. These include acoustic noise that can contaminate speech signals and artifacts that can alter heart sound signals. To overcome the resulting recognition errors, researchers apply various methods such as normalization, feature extraction, and classification. Recently, combining biometric modalities has proven to be an effective strategy for improving the performance of biometric systems. The approach in this paper is based on biometric recognition that uses the heart sound signal as a feature that cannot be easily copied. The Mel-Frequency Cepstral Coefficient (MFCC) is used as the feature vector and vector quantization (VQ) is used as the matching algorithm. A simple yet highly reliable method is introduced for biometric applications. Experimental results show that the recognition rate of the heart sound speaker identification (HS-SI) model is 81.9%, while the rate for the speech speaker identification (S-SI) model is 99.3%, for a database of 21 clients and 40 impostors. Heart sound speaker verification (HS-SV) provides an average EER of 17.8%, while the average EER for the speech speaker verification (S-SV) model is 3.39%. To reach a higher security level, an alternative to the above approach, based on multimodal fusion, is implemented in the system. The best performance is obtained with simple-sum score fusion combined with a piecewise-linear normalization technique, which provides an EER of 0.69%.
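The simple-sum score fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scores, scale choices, and function names are hypothetical, and min-max scaling is used here as a simple piecewise-linear normalization so that the heart sound and speech matcher scores are comparable before summation.

```python
def min_max_normalize(scores):
    """Map raw matcher scores onto [0, 1] with min-max (piecewise-linear) scaling."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def simple_sum_fusion(scores_a, scores_b):
    """Normalize each modality's scores, then fuse them by element-wise summation."""
    norm_a = min_max_normalize(scores_a)
    norm_b = min_max_normalize(scores_b)
    return [a + b for a, b in zip(norm_a, norm_b)]

# Toy example: five trials scored by two matchers on different scales.
hs_scores = [0.2, 0.5, 0.9, 0.4, 0.7]       # hypothetical heart sound matcher scores
sp_scores = [10.0, 30.0, 80.0, 20.0, 60.0]  # hypothetical speech matcher scores
fused = simple_sum_fusion(hs_scores, sp_scores)
```

A verification decision would then compare each fused score against a threshold chosen to balance false accepts and false rejects (the EER operating point).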

Item Type: Conference or Workshop Item (UNSPECIFIED)
Uncontrolled Keywords: Fusion, Piecewise-linear, Speaker recognition, Vector quantization, Authentication systems, Biometric applications, Cepstral coefficients, Feature vectors, Fusion techniques, Heart sound signal, Heart sounds, Matching models, Multi-modal, Real world environments, Recognition error, Recognition rates, Recognition systems, Score fusion, Security level, Signal acquisitions, Speaker identification, Speaker independent model, Speaker verification, Speech signals, Biometrics, Cardiology, Feature extraction, Health care, Signal processing, Speech recognition
Subjects: TK7800-8360
QC221-246
Divisions: SCHOOL > School of Engineering and Information Technology
Depositing User: Unnamed user with email
Date Deposited: 16 Nov 2012 06:56
Last Modified: 08 Sep 2014 06:14
