Farrah Wong, Sazali Yaacob, and Nagarajan, R. (2004) Neural network-based depth computation for blind navigation. In: Conference on Two- and Three-Dimensional Vision Systems for Inspection, Control, and Metrology II, 26-27 October 2004, Philadelphia, PA.
Full text not available from this repository.
Official URL: http://dx.doi.org/10.1117/12.571629
Research undertaken to help blind people navigate autonomously or with minimum assistance is termed "blind navigation". In this research, an aid that could help blind people in their navigation is proposed. Distance serves as an important cue during navigation. A stereovision navigation aid, implemented with two digital video cameras spaced apart and fixed on a headgear, is presented to obtain the distance information. In this paper, a neural network methodology is used to obtain the required parameters of the camera, a process known as camera calibration. These parameters are not known in advance but are obtained by adjusting the weights in the network. The inputs to the network consist of the matching features in the stereo image pair. A back-propagation network with 16 input neurons, 3 hidden neurons and 1 output neuron, which gives the depth, is created. The distance information is incorporated into the final processed image as four gray levels: white, light gray, dark gray and black. Preliminary results show that the percentage errors fall below 10%. It is envisaged that the distance provided by the neural network will enable blind individuals to approach and pick up an object of interest.
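The abstract describes a 16-3-1 back-propagation network whose output depth is then quantized into four gray levels. The paper's actual features, training data, learning rate, and quantization thresholds are not given here, so the sketch below is a minimal illustrative reconstruction of that architecture only: a 16-input, 3-hidden-neuron, 1-output sigmoid network trained by back-propagation on synthetic stand-in data, plus a hypothetical four-level gray mapping (white = nearest, black = farthest).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DepthNet:
    """Illustrative 16-3-1 back-propagation network (architecture from
    the abstract; weights, data, and hyperparameters are assumptions)."""

    def __init__(self):
        self.W1 = rng.normal(scale=0.5, size=(16, 3))  # input -> hidden
        self.b1 = np.zeros(3)
        self.W2 = rng.normal(scale=0.5, size=(3, 1))   # hidden -> output
        self.b2 = np.zeros(1)

    def forward(self, x):
        self.h = sigmoid(x @ self.W1 + self.b1)
        self.y = sigmoid(self.h @ self.W2 + self.b2)
        return self.y

    def train_step(self, x, t, lr=0.5):
        """One batch of gradient descent on mean squared error."""
        n = x.shape[0]
        y = self.forward(x)
        # Back-propagate the error through the sigmoid layers.
        dy = (y - t) * y * (1 - y)
        dh = (dy @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= lr * (self.h.T @ dy) / n
        self.b2 -= lr * dy.mean(axis=0)
        self.W1 -= lr * (x.T @ dh) / n
        self.b1 -= lr * dh.mean(axis=0)
        return float(((y - t) ** 2).mean())

def depth_to_gray(d, d_max=1.0):
    """Quantize a normalized depth into the four gray levels named in the
    abstract. The thresholds and pixel values here are illustrative."""
    levels = [255, 170, 85, 0]  # white, light gray, dark gray, black
    idx = min(int(d / d_max * 4), 3)
    return levels[idx]

# Toy training run: 16 stand-in "matching features" per sample, with a
# synthetic depth target in [0, 1] (the real targets would come from
# calibrated stereo measurements).
net = DepthNet()
X = rng.random((200, 16))
T = X.mean(axis=1, keepdims=True)
for _ in range(2000):
    err = net.train_step(X, T)
```

The four-level quantization makes the output image coarse by design: a blind user only needs a near/far ordering of regions, not a dense depth map, which also keeps the subsequent image-to-sound conversion simple.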
|Item Type:||Conference Paper (UNSPECIFIED)|
|Uncontrolled Keywords:||Blind navigation, Blind individuals, Back propagation network, Stereovision, Image to sound conversion|
|Subjects:||?? TK8300-8360 ??|
|Divisions:||SCHOOL > School of Engineering and Information Technology|
|Deposited By:||IR Admin|
|Deposited On:||27 Oct 2011 15:12|
|Last Modified:||29 Dec 2014 16:06|