Decoding Multisensory Attention from Electroencephalography for Use in a Brain-Computer Interface

Brain-computer interfaces (BCIs) offer a non-verbal, covert way for humans to interact with a machine. They are designed to interpret the user’s brain state so that it can be translated into actions or other communication. While most previous BCI studies have focused on motor imagery or steady-state evoked potential paradigms, we investigated the feasibility of a BCI system built on a task-based paradigm that can potentially be integrated in a more user-friendly and engaging manner. The user was presented with multiple simultaneous, spatially separated streams of auditory and/or tactile stimuli and directed to detect a pattern in one particular stream. We applied a model-free method to decode this stream-tracking effort from the EEG signal. The results showed that the proposed BCI system could capture attention in most participants using multisensory inputs. Successful real-time decoding would allow the user to communicate a “control” signal to the computer via attention.
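The abstract does not specify the decoding pipeline, so the following is only a minimal sketch of how single-trial attention decoding from epoched EEG is commonly set up: a linear classifier on per-channel log-variance features, scored with cross-validation. All data, dimensions, and parameters below are illustrative assumptions (synthetic data stands in for real recordings), not the authors’ method.

```python
# Hypothetical sketch of single-trial attention decoding from EEG epochs.
# Not the authors' actual pipeline; all data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for preprocessed EEG: 200 trials x 32 channels x 512 samples.
n_trials, n_channels, n_samples = 200, 32, 512
eeg = rng.standard_normal((n_trials, n_channels, n_samples))
# Binary label: which of two simultaneous streams the participant attended.
attended = rng.integers(0, 2, size=n_trials)

# Simple per-channel log-variance (band-power-like) feature for each trial.
features = np.log(eeg.var(axis=2))

# Linear classifier with standardized features, evaluated by 5-fold CV.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, features, attended, cv=5)
print(f"Mean decoding accuracy (chance = 0.5): {scores.mean():.2f}")
```

On real data, accuracy reliably above chance for a participant would indicate that attended-stream information is recoverable from the EEG, which is the prerequisite for using attention as a real-time “control” signal.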


Speaker Details

Wenkang An (Winko) is a third-year PhD student in the Department of Electrical and Computer Engineering at Carnegie Mellon University. He works with Prof. Barbara Shinn-Cunningham, and his thesis project aims to use multi-modal neuroimaging (EEG and fMRI) and machine learning methods to investigate auditory selective attention. His research interests include neural engineering, human-computer interaction, and wearable device design.

Speakers: Wenkang An
Affiliation: Carnegie Mellon University