Data fusion for skin detection

Jamal Ahmad Dargham, Ali Chekima, Sigeru Omatu and Chelsia Amy Doukim (2009) Data fusion for skin detection. Artificial Life and Robotics, 13 (2). pp. 438-441. ISSN 1433-5298

Two methods of data fusion to improve the performance of skin detection were tested. The first method fuses two chrominance components from the same color space, while the second method fuses the outputs of two skin detection methods each based on a different color space. The color spaces used are the normalized red, green, blue (RGB) color space, referred to here as pixel intensity normalization, and a new method of obtaining the R, G, and B components of the normalized RGB color space called maximum intensity normalization. The multilayer perceptron (MLP) neural network and histogram thresholding were used for skin detection. It was found that fusion of two chrominance components gives a lower skin detection error than a single chrominance component regardless of the database or the color space for both skin detection methods. In addition, the fusion of the outputs of two skin detection methods further reduces the skin detection error. © International Symposium on Artificial Life and Robotics (ISAROB). 2009.
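The two normalizations named in the abstract can be sketched as follows. Pixel intensity normalization is the standard normalized RGB transform (each component divided by R+G+B); the form of maximum intensity normalization shown here, dividing each component by the pixel's largest channel, is an assumption for illustration, as the paper defines the actual method. The fusion rule is likewise a hypothetical placeholder.

```python
def pixel_intensity_normalization(r, g, b):
    """Standard normalized RGB: each component divided by R + G + B."""
    s = r + g + b
    if s == 0:
        return (0.0, 0.0, 0.0)
    return (r / s, g / s, b / s)


def maximum_intensity_normalization(r, g, b):
    """Assumed variant: each component divided by max(R, G, B).
    The paper's definition may differ; this is illustrative only."""
    m = max(r, g, b)
    if m == 0:
        return (0.0, 0.0, 0.0)
    return (r / m, g / m, b / m)


def fuse_scores(score_a, score_b):
    """Hypothetical fusion rule: average the skin scores produced by
    two detectors (e.g. one per chrominance component or color space)."""
    return 0.5 * (score_a + score_b)
```

In this sketch, a pixel would be classified as skin when the fused score of two single-component detectors exceeds a threshold, mirroring the paper's finding that combining two chrominance components lowers the detection error relative to using either one alone.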

Item Type: Article
Uncontrolled Keywords: Data fusion, Histogram thresholding, Intensity normalization, MLP neural networks, Skin detection
Subjects: T Technology > TR Photography > TR1-1050 Photography > TR624-835 Applied photography Including artistic, commercial, medical photography, photocopying processes
Divisions: SCHOOL > School of Engineering and Information Technology
Depositing User: ADMIN ADMIN
Date Deposited: 17 Mar 2011 10:27
Last Modified: 19 Oct 2017 14:53