Parameter tuning for enhancing inter-subject emotion classification in four classes for VR-EEG predictive analytics

Azmi Sofian Suhaimi, James Mountstephens and Jason Teo (2020) Parameter tuning for enhancing inter-subject emotion classification in four classes for VR-EEG predictive analytics. International Journal of Advanced Science and Technology, 29 (6s). p. 1483.


Abstract

This research describes the potential of classifying emotions using a wearable EEG headset while a virtual environment stimulates the users' responses. Current work on emotion classification has steered towards clinical-grade EEG headsets with a 2D monitor screen for stimulus evocation, which may introduce additional artifacts or inaccurate readings into the dataset because users are unable to give their full attention to the stimuli, even when the stimuli presented should have been advantageous in provoking emotional reactions. Furthermore, a clinical-grade EEG headset requires a lengthy setup: hair can hinder the electrodes from collecting brainwave signals, and electrodes can come loose, requiring additional time to fix. With such a lengthy setup, the user may experience fatigue and become incapable of responding naturally to the emotion presented by the stimuli. Therefore, this research introduces a wearable low-cost EEG headset with dry electrodes that requires only a trivial amount of time to set up, together with a Virtual Reality (VR) headset that presents the emotional stimuli in an immersive VR environment, paired with earphones to provide the full immersive experience needed for the evocation of emotion. The 360° video stimuli are designed and stitched together according to the arousal-valence space (AVS) model, with each quadrant presented for an 80-second stimulus period followed by a 10-second rest period between quadrants. The EEG dataset is collected through the wearable low-cost EEG headset using four channels located at TP9, TP10, AF7 and AF8. The collected dataset is then fed into machine learning algorithms, namely KNN, SVM and Deep Learning, focusing on an inter-subject test approach using 10-fold cross-validation.
The results show that SVM with a Radial Basis Function kernel achieved the highest accuracy at 85.01%. This suggests that a wearable low-cost EEG headset, despite a significantly lower-resolution signal than clinical-grade equipment and a very limited number of electrodes, appears highly promising as an emotion classification BCI tool, and may thus open up myriad practical, affordable and cost-friendly solutions in the medical, education, military and entertainment domains.
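The classification pipeline described above (inter-subject features from four EEG channels, an RBF-kernel SVM, 10-fold cross-validation) could be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the feature extraction, data, and hyperparameters here are placeholders.

```python
# Hypothetical sketch of the abstract's evaluation setup: a four-class
# emotion classifier (one class per AVS quadrant) trained with an
# RBF-kernel SVM and scored by 10-fold cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder features: one row per EEG window; columns might hold
# band-power features computed from the four channels (TP9, TP10, AF7, AF8).
X = rng.normal(size=(400, 16))
# Labels: the four arousal-valence quadrants encoded as classes 0-3.
y = rng.integers(0, 4, size=400)

# Standardize features, then fit an RBF-kernel SVM (default hyperparameters).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

# 10-fold cross-validation over the pooled (inter-subject) dataset.
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean 10-fold accuracy: {scores.mean():.3f}")
```

With real EEG features in place of the random placeholders, sweeping `C` and `gamma` (e.g. via `GridSearchCV`) would correspond to the parameter tuning the title refers to.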

Item Type: Article
Uncontrolled Keywords: Machine Learning, Electroencephalography, Emotion Classification, Virtual Reality, Wearable Technology
Subjects: T Technology > TJ Mechanical engineering and machinery
Divisions: FACULTY > Faculty of Engineering
Depositing User: Noraini
Date Deposited: 22 Jul 2020 03:39
Last Modified: 22 Jul 2020 03:39
URI: http://eprints.ums.edu.my/id/eprint/25668
