Short Communication
Austin Public Health. 2016; 1(1): 1004.
An Investigation on Audio-Visual BCI: The Interaction Effect between Audio and Vision
Minqiang Huang, Xingyu Wang and Jing Jin*
Department of Advanced Control and Optimization for Chemical Processes, East China University of Science and Technology, China
*Corresponding author: Jing Jin, Department of Advanced Control and Optimization for Chemical Processes, East China University of Science and Technology, Ministry of Education, Shanghai, 200237, China
Received: June 29, 2016; Accepted: July 15, 2016; Published: July 18, 2016
Abstract
Brain Computer Interface (BCI) technology has been used to help disabled patients communicate or control external devices through brain activity. Recently, audio-visual Brain Computer Interfaces (BCIs) were proposed, and these hybrid-modal BCIs achieved better performance than single-modal BCIs. In this study, three patterns were designed to explore the relationship between auditory and visual stimuli in an audio-visual BCI system. Contrary to expectations, the results showed that the patterns combining visual and auditory stimuli did not achieve better performance than the pattern using visual stimuli alone.
Keywords: Audio and vision; Visual BCI; Stimuli; Single-modal
Introduction
In recent years, BCIs relying on a single modality, such as vision or audition, have reached a bottleneck. Visual BCIs can achieve quite high performance in terms of classification accuracy and information transfer rate, but they demand that users have good eyesight and the ability to control their eye movements. Although auditory BCIs remove these visual requirements, their low classification accuracy compared with visual BCIs is a non-negligible shortcoming. To overcome this situation, researchers have begun to explore hybrid-modal BCIs, which mostly focus on combining the visual and auditory senses.
Method and Materials
Subjects
Eleven healthy people (7 males and 4 females, ages 24-28) participated in this experiment, which was approved by the local ethics committee. All of the participants were right-handed and were given information about the experiment (without revealing its purpose).
Stimuli
This study used visual and auditory stimuli. The three visual stimuli are presented in Figure 1, and the frequency spectra of the auditory stimuli are presented in Figure 2. Each visual stimulus had a corresponding auditory stimulus. The Stimulus Onset Asynchrony (SOA) was 600 ms: each stimulus was presented for 400 ms, followed by a 200 ms blank interval before the next onset. The volume of the auditory stimuli was kept uniform.
Figure 1: The visual stimuli in the experiment.
Figure 2: The frequency spectra of the auditory stimuli.
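The timing described above can be summarized as a simple blocking presentation loop. The following is a minimal sketch in Python; the present_stimulus and clear_stimulus callbacks are hypothetical placeholders, since the actual presentation software is not described in the paper.

import time

SOA = 0.600            # stimulus onset asynchrony, seconds
STIM_DURATION = 0.400  # each stimulus is shown/played for 400 ms

def run_sequence(stimuli, present_stimulus, clear_stimulus):
    """Present each (visual, auditory) stimulus pair with a fixed SOA."""
    for stim in stimuli:
        onset = time.monotonic()
        present_stimulus(stim)   # show the image and play the sound together
        time.sleep(STIM_DURATION)
        clear_stimulus()         # blank screen / silence for the remaining 200 ms
        # wait out the remainder of the SOA before the next onset
        time.sleep(max(0.0, SOA - (time.monotonic() - onset)))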
Experimental set up
The EEG signals were measured with a g.USBamp and a g.EEGcap (Guger Technologies, Graz, Austria) with a sensitivity of 100 μV, band-pass filtered between 0.1 Hz and 100 Hz, and sampled at 1200 Hz. Fifteen electrodes, placed in accordance with the international 10-20 system at positions Fz, T7, C5, Cz, C6, T8, CP3, CPz, CP4, P3, Pz, P4, O1, Oz and O2, were used to record the EEG data. The electrode on the right earlobe served as the reference and the frontal electrode (FPz) as the ground. All impedances were kept below 10 kΩ.
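As an illustration of these recording parameters, the sketch below shows an equivalent offline band-pass filter and the channel list, built on NumPy/SciPy. The array layout (channels x samples) and the filter order are assumptions; the paper does not describe its offline processing chain.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 1200.0  # sampling rate, Hz
CHANNELS = ["Fz", "T7", "C5", "Cz", "C6", "T8", "CP3", "CPz", "CP4",
            "P3", "Pz", "P4", "O1", "Oz", "O2"]

def bandpass(eeg, low=0.1, high=100.0, order=4):
    """Zero-phase band-pass filter matching the amplifier settings.

    eeg: array of shape (n_channels, n_samples).
    """
    b, a = butter(order, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)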
The participants sat in a comfortable chair and were asked to avoid moving their bodies during the experiment.
Three patterns were executed in this study: (1) Pattern A: auditory stimuli matched the visual stimuli; (2) Pattern B: auditory stimuli mismatched the visual stimuli; (3) Pattern C: only visual stimuli were presented. A sketch of how these conditions can be encoded is given below.
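The following minimal sketch encodes the three conditions. The stimulus indices and the particular mismatch mapping (a cyclic shift) are illustrative assumptions; the paper only states that the sounds matched, mismatched, or were absent.

VISUAL = [0, 1, 2]  # indices of the three visual stimuli (Figure 1)

def auditory_for(visual_idx, pattern):
    """Return the auditory stimulus index paired with a visual stimulus, or None."""
    if pattern == "A":                  # matched: same index
        return visual_idx
    if pattern == "B":                  # mismatched: e.g., a cyclic shift
        return (visual_idx + 1) % len(VISUAL)
    return None                         # Pattern C: visual only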
Results
The grand-average offline accuracy is presented in Figure 3. Pattern C clearly achieved better performance than Pattern A and Pattern B, with higher accuracy from the first trial through the fifth trial. All patterns reached 100% accuracy by the tenth trial. Pattern A and Pattern B showed similar offline classification performance.
Figure 3: The grand-average offline accuracy.
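For context, accuracy-versus-trial curves like those in Figure 3 are typically obtained in ERP-based BCIs by averaging the epochs of the first k trials before classification. The sketch below illustrates this procedure under assumed inputs; the classify callback and the epoch features are hypothetical, since the paper does not report its classification method.

import numpy as np

def accuracy_by_trial(epochs, labels, classify, max_trials=10):
    """epochs: (n_runs, n_trials, n_features); labels: true target per run."""
    accs = []
    for k in range(1, max_trials + 1):
        averaged = epochs[:, :k].mean(axis=1)          # average the first k trials
        preds = np.array([classify(x) for x in averaged])
        accs.append(float((preds == labels).mean()))   # fraction of correct runs
    return accs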
Discussion
In this study, the results showed that combining auditory and visual stimuli improves the performance of an audio-visual BCI only under certain conditions. The reasons why Pattern A and Pattern B did not show an improvement may be summarized as follows. (1) Visual dominance occurred: visual dominance is the effect whereby participants fail to respond to auditory stimuli more often than they fail to respond to visual stimuli in a speeded discrimination task [1]. (2) When the target stimulus was not being presented, participants could still hear the non-target auditory stimuli. According to the participants' feedback, they needed time to adjust their attention to avoid being distracted.
Conclusion
The combination of the visual and auditory modalities needs to take into account how attention is allocated across both modalities. Future work will seek an optimized way to design the paradigm.
References