The EEG-based rapid serial visual presentation (RSVP) brain-computer interface (BCI) paradigm has been widely used to achieve rapid target detection, owing to its potential for detecting a target of interest within a rapidly refreshing letter or image stream. In these applications, single-trial RSVP detection accuracy is critical, but it is often compromised by the low signal-to-noise ratio and non-linearity of the EEG signal. In this study, we established a multi-brain collaborative BCI paradigm and designed a multi-brain cooperative RSVP detection (MB-RSVP) algorithm based on multivariable linear regression models. Using this method, the single-trial classification accuracy of three RSVP paradigms was estimated. Online experiment results indicated that the proposed multi-brain collaborative BCI-based RSVP framework effectively improves classification accuracy over single-subject RSVP while retaining real-time performance. Furthermore, offline experiment results showed that the proposed method improved classification accuracy by 20.48% compared to single-subject RSVP experiments. The proposed multi-brain collaborative BCI-based RSVP framework is a significant step toward driving EEG-based RSVP to real-world applications.
KEYWORDS: Data modeling, Cross validation, Performance modeling, Electroencephalography, Education and training, Brain-machine interfaces, Deep learning, Systems modeling, Data processing, Data integration
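The abstract does not include the MB-RSVP implementation, so the following is only a minimal sketch of one plausible fusion scheme under stated assumptions: each subject's EEG classifier yields a single-trial target score, and a multivariable linear regression model learns to combine them. The variable names, shapes, and the 0.5 decision threshold are illustrative assumptions, not the authors' code.

```python
# A minimal sketch of multi-subject score fusion via linear regression.
# Hypothetical setup: not the authors' MB-RSVP algorithm.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_trials, n_subjects = 500, 3

# Per-subject single-trial scores (e.g., outputs of an individual EEG classifier).
scores = rng.standard_normal((n_trials, n_subjects))
labels = rng.integers(0, 2, size=n_trials)       # 1 = target, 0 = non-target

# Regress the label on the collection of individual scores to learn fusion weights.
fusion = LinearRegression().fit(scores, labels)

# Fused decision for a new trial: a weighted combination of the subjects' scores,
# thresholded at 0.5 in this toy example.
new_scores = rng.standard_normal((1, n_subjects))
is_target = fusion.predict(new_scores)[0] > 0.5
```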
In the field of Brain-Computer Interfaces (BCI), while research on Motor Imagery (MI) decoding models has advanced significantly, systematic studies on dataset construction and partitioning strategies remain scarce. Inappropriate data partitioning may lead to models performing well in experimental settings but underperforming in real-world applications. This study compares the impact of various EEG dataset construction and partitioning strategies on MI decoding model performance to optimize BCI systems. Utilizing the BCI Competition IV 2a dataset, we designed three data construction strategies: Intra-session training and testing, Cross-session evaluation, and Combined session evaluation. We applied four data partitioning strategies: holdout, cross-validation, shuffled cross-validation, and time series cross-validation. For model architecture, we employed the end-to-end deep learning framework EEGNet. Results indicate that increasing data volume enhances model performance. Among partitioning strategies, shuffled cross-validation showed superior performance in improving model accuracy. The holdout method also performed well with sufficient data, offering a computationally efficient alternative. These findings provide valuable insights for optimizing BCI system development and evaluation methodologies.
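As a concrete illustration of the partitioning strategies compared above, the sketch below sets up the four schemes with scikit-learn splitters on a placeholder motor-imagery trial array. The array shapes and the choice of splitter classes are assumptions for illustration; the study itself evaluates these splits with EEGNet on BCI Competition IV 2a.

```python
# A minimal sketch (not the study's code) of the four data partitioning strategies.
import numpy as np
from sklearn.model_selection import train_test_split, KFold, TimeSeriesSplit

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 288, 22, 1000      # sizes in the spirit of BCI IV 2a
X = rng.standard_normal((n_trials, n_channels, n_samples)).astype(np.float32)
y = rng.integers(0, 4, size=n_trials)                # four motor-imagery classes

# 1) Holdout: a single chronological train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

# 2) Cross-validation: contiguous folds that preserve trial order.
cv_plain = KFold(n_splits=5, shuffle=False)

# 3) Shuffled cross-validation: trials are randomly permuted before folding.
cv_shuffled = KFold(n_splits=5, shuffle=True, random_state=0)

# 4) Time-series cross-validation: each fold trains on the past, tests on the future.
cv_time = TimeSeriesSplit(n_splits=5)

for name, cv in [("plain", cv_plain), ("shuffled", cv_shuffled), ("time-series", cv_time)]:
    for train_idx, test_idx in cv.split(X):
        # An EEGNet model would be trained on X[train_idx], y[train_idx]
        # and evaluated on X[test_idx], y[test_idx] at this point.
        pass
```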
Deep learning based algorithms have made huge progress in the fields of image classification and speech recognition, and a growing number of researchers are beginning to use deep learning to process electroencephalography (EEG) signals. However, because of the complexity of the recording equipment and the high cost of data collection, it is difficult to obtain enough high-quality EEG data to train a powerful deep learning model. Data augmentation is considered an effective way to alleviate this issue. We propose a Conditional Wasserstein Generative Adversarial Network with gradient penalty (CWGAN-GP) to synthesize EEG data for data augmentation. We use two public neural networks on a motor imagery task and combine the synthesized data with real EEG data to evaluate the augmentation effect of the generated samples. The results indicate that our model can generate high-quality artificial EEG data that effectively captures the features of the original EEG data. Both neural networks gained improved classification performance, and the more complex one obtained the larger improvement. The experiment provides new ideas for improving the performance of EEG signal processing.
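For readers unfamiliar with the gradient-penalty term that distinguishes WGAN-GP from a plain conditional GAN, the following is a minimal PyTorch sketch of one critic-side penalty computation, not the paper's implementation; the critic's (samples, labels) interface, the tensor shapes, and the penalty weight of 10 are assumptions.

```python
# A minimal sketch of the WGAN-GP gradient penalty for a conditional critic.
import torch

def gradient_penalty(critic, real, fake, labels):
    """Penalize deviations of the critic's gradient norm from 1 on interpolated EEG trials."""
    batch = real.size(0)
    eps = torch.rand(batch, 1, 1, device=real.device)      # one mixing weight per trial
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp, labels)                         # conditional critic score
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]
    grad_norm = grads.view(batch, -1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()

# In the critic step, the penalty is added to the Wasserstein loss estimate, e.g.:
#   d_loss = critic(fake, labels).mean() - critic(real, labels).mean() \
#            + 10.0 * gradient_penalty(critic, real, fake, labels)
```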