Speaking mode recognition from functional Near Infrared Spectroscopy
by C. Herff, F. Putze, D. Heger, Cuntai Guan, T. Schultz
Abstract:
Speech is our most natural form of communication, and even though functional Near Infrared Spectroscopy (fNIRS) is an increasingly popular modality for Brain Computer Interfaces (BCIs), there are, to the best of our knowledge, no previous studies on speech-related tasks in fNIRS-based BCI. We conducted experiments on 5 subjects producing audible, silently uttered, and imagined speech, or producing no speech at all. For each of these speaking modes, we recorded fNIRS signals from the subjects performing these tasks and distinguished segments containing speech from those not containing speech, solely based on the fNIRS signals. Accuracies between 69% and 88% were achieved using support vector machines and a Mutual Information based Best Individual Feature approach. We were also able to discriminate the three speaking modes with 61% classification accuracy. We thereby demonstrate that speech is a very promising paradigm for fNIRS-based BCI, as classification accuracies compare very favorably to those achieved in motor imagery BCIs with fNIRS.
Reference:
Speaking mode recognition from functional Near Infrared Spectroscopy (C. Herff, F. Putze, D. Heger, Cuntai Guan, T. Schultz), In Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE, 2012.
Bibtex Entry:
@INPROCEEDINGS{6346279,
author={Herff, C. and Putze, F. and Heger, D. and Guan, Cuntai and Schultz, T.},
booktitle={Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE},
title={Speaking mode recognition from functional Near Infrared Spectroscopy},
year={2012},
pages={1715-1718},
abstract={Speech is our most natural form of communication, and even though functional Near Infrared Spectroscopy (fNIRS) is an increasingly popular modality for Brain Computer Interfaces (BCIs), there are, to the best of our knowledge, no previous studies on speech-related tasks in fNIRS-based BCI. We conducted experiments on 5 subjects producing audible, silently uttered, and imagined speech, or producing no speech at all. For each of these speaking modes, we recorded fNIRS signals from the subjects performing these tasks and distinguished segments containing speech from those not containing speech, solely based on the fNIRS signals. Accuracies between 69% and 88% were achieved using support vector machines and a Mutual Information based Best Individual Feature approach. We were also able to discriminate the three speaking modes with 61% classification accuracy. We thereby demonstrate that speech is a very promising paradigm for fNIRS-based BCI, as classification accuracies compare very favorably to those achieved in motor imagery BCIs with fNIRS.},
keywords={brain-computer interfaces;infrared spectra;speech recognition;support vector machines;audible speech;best individual feature approach;brain computer interface;functional near infrared spectroscopy;imagined speech;mutual information;silently uttered speech;speaking mode recognition;support vector machine;Accuracy;Brain;Feature extraction;Hemodynamics;Mutual information;Spectroscopy;Speech;Adult;Corpus Callosum;Electrodes;Hemodynamics;Humans;Male;Motor Cortex;Spectroscopy, Near-Infrared;Speech},
url={https://www.csl.uni-bremen.de/cms/images/documents/publications/HerffSchultzEMBC_2012.pdf},
doi={10.1109/EMBC.2012.6346279},
ISSN={1557-170X},
month={Aug},
}