Published in

IOP Publishing, Journal of Neural Engineering, 9(3), p. 036013

DOI: 10.1088/1741-2560/9/3/036013

An auditory brain–computer interface evoked by natural speech

Journal article published in 2012 by M. A. Lopez Gordo, E. Fernandez, S. Romero, F. Pelayo, Alberto Prieto
This paper is available in a repository.

Preprint: archiving forbidden
Postprint: archiving forbidden
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Brain–computer interfaces (BCIs) are mainly intended for people unable to perform any muscular movement, such as patients in a complete locked-in state. The majority of BCIs interact visually with the user, either in the form of stimulation or biofeedback. However, visual BCIs are limited in their ultimate use because they require subjects to gaze, explore and shift eye-gaze using their ocular muscles, thus excluding patients in a complete locked-in state or with unresponsive wakefulness syndrome. In this study, we present a novel fully auditory EEG-BCI based on a dichotic listening paradigm that uses human voice for stimulation. This interface has been evaluated with healthy volunteers, achieving an average information transfer rate of 1.5 bits min⁻¹ in full-length trials and 2.7 bits min⁻¹ using the optimal trial length, recorded with only one channel and without formal training. This novel technique opens the door to more natural communication for users unable to use visual BCIs, with promising results in terms of performance, usability, training and cognitive effort.
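The abstract reports performance as an information transfer rate in bits min⁻¹. The standard way to compute such a figure in the BCI literature is the Wolpaw formula, which converts classification accuracy and selection speed into bits per minute. The sketch below is an illustration of that general formula only; the binary task size, the 85% accuracy and the 6 selections min⁻¹ are hypothetical values, not numbers taken from this paper.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trials_per_min: float) -> float:
    """Wolpaw information transfer rate in bits per minute.

    Bits per selection:
        log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1))
    where N is the number of classes and P the classification accuracy.
    """
    if accuracy >= 1.0:
        bits = math.log2(n_classes)          # perfect decoding
    elif accuracy <= 1.0 / n_classes:
        bits = 0.0                           # no better than chance
    else:
        p, n = accuracy, n_classes
        bits = (math.log2(n)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * trials_per_min

# Hypothetical example: a binary dichotic-listening choice decoded
# at 85% accuracy with 6 selections per minute.
print(round(wolpaw_itr(2, 0.85, 6.0), 2))
```

With different (real) accuracies and trial lengths, the same formula would reproduce rates in the range the abstract reports.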