TY - JOUR
T1 - Inferring imagined speech using EEG signals
T2 - A new approach using Riemannian manifold features
AU - Nguyen, Chuong H.
AU - Karavas, George K.
AU - Artemiadis, Panagiotis
N1 - Funding Information:
This work is supported by the US Defense Advanced Research Projects Agency (DARPA) grant D14AP00068 and US Air Force Office of Scientific Research (AFOSR) award FA95501410149.
PY - 2018/2
Y1 - 2018/2
N2 - Objective. In this paper, we investigate the suitability of imagined speech for brain-computer interface (BCI) applications. Approach. A novel method based on covariance matrix descriptors, which lie on a Riemannian manifold, and the relevance vector machines classifier is proposed. The method is applied to electroencephalographic (EEG) signals and tested on multiple subjects. Main results. The method is shown to outperform other approaches in the field with respect to accuracy and robustness. The algorithm is validated on various categories of speech, such as imagined pronunciation of vowels, short words and long words. The classification accuracy of our methodology is in all cases significantly above chance level, reaching a maximum of 70% for cases where we classify three words and 95% for cases of two words. Significance. The results reveal certain aspects that may affect the success of speech imagery classification from EEG signals, such as sound, meaning and word complexity. This can potentially extend the capability of utilizing speech imagery in future BCI applications. The dataset of speech imagery collected from a total of 15 subjects is also published.
AB - Objective. In this paper, we investigate the suitability of imagined speech for brain-computer interface (BCI) applications. Approach. A novel method based on covariance matrix descriptors, which lie on a Riemannian manifold, and the relevance vector machines classifier is proposed. The method is applied to electroencephalographic (EEG) signals and tested on multiple subjects. Main results. The method is shown to outperform other approaches in the field with respect to accuracy and robustness. The algorithm is validated on various categories of speech, such as imagined pronunciation of vowels, short words and long words. The classification accuracy of our methodology is in all cases significantly above chance level, reaching a maximum of 70% for cases where we classify three words and 95% for cases of two words. Significance. The results reveal certain aspects that may affect the success of speech imagery classification from EEG signals, such as sound, meaning and word complexity. This can potentially extend the capability of utilizing speech imagery in future BCI applications. The dataset of speech imagery collected from a total of 15 subjects is also published.
KW - BCI
KW - EEG
KW - relevance vector machines
KW - speech imagery
UR - http://www.scopus.com/inward/record.url?scp=85040688289&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85040688289&partnerID=8YFLogxK
U2 - 10.1088/1741-2552/aa8235
DO - 10.1088/1741-2552/aa8235
M3 - Article
C2 - 28745299
AN - SCOPUS:85040688289
SN - 1741-2560
VL - 15
JO - Journal of Neural Engineering
JF - Journal of Neural Engineering
IS - 1
M1 - 016002
ER -