Abstract
In the field of brain-computer interfaces (BCI) based on motor imagery (MI), multi-channel electroencephalography (EEG) data is commonly used for MI task recognition to achieve sensory compensation or precise human-computer interaction. However, individual physiological differences, environmental variations, and redundant information or noise in certain channels can degrade the performance of BCI systems. In this study, we introduce a channel selection method based on Hybrid-Recursive Feature Elimination (H-RFE) combined with residual graph neural networks for MI recognition. The channel selection method applies a recursive feature elimination strategy and integrates three classifiers, namely random forest, gradient boosting, and logistic regression, as evaluators for adaptive, subject-specific channel selection. To fully exploit the spatiotemporal information of multi-channel EEG, this study employs a graph neural network with embedded residual blocks to achieve precise recognition of motor imagery. We tested the algorithm on the SHU dataset and the PhysioNet dataset. Experimental results show that on the SHU dataset, using 73.44% of the total channels, cross-session MI recognition accuracy reaches 90.03%. Similarly, on the PhysioNet dataset, using 72.5% of the channels, classification accuracy reaches 93.99%. Compared to traditional strategies such as selecting three specific channels, correlation-based channel selection, mutual information-based channel selection, and adaptive channel selection based on Pearson coefficients and spatial positions, the proposed method improved classification accuracy by 34.64%, 10.8%, 3.25% and 2.88% on the SHU dataset, and by 46.96%, 5.04%, 5.81% and 2.32% on the PhysioNet dataset, respectively.
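The abstract does not include implementation details, but the H-RFE idea (recursive feature elimination with random forest, gradient boosting, and logistic regression as evaluators) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the synthetic data, the 73% retention ratio taken from the reported experiments, and the majority-vote rule for combining the three evaluators are not specified by the paper.

```python
# Hypothetical sketch of per-subject EEG channel selection via recursive
# feature elimination (RFE) with three evaluators, in the spirit of H-RFE.
# The data, retention ratio, and voting rule are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
n_trials, n_channels = 120, 32
# Synthetic per-trial features, one per channel (e.g., band power).
X = rng.normal(size=(n_trials, n_channels))
y = rng.integers(0, 2, size=n_trials)
# Make the first four channels informative so RFE has signal to find.
X[:, :4] += y[:, None] * 1.5

evaluators = {
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

# Keep roughly 73% of channels, mirroring the ratio reported for SHU.
n_keep = int(round(0.73 * n_channels))
votes = np.zeros(n_channels, dtype=int)
for name, est in evaluators.items():
    # RFE repeatedly drops the least important channel until n_keep remain.
    rfe = RFE(est, n_features_to_select=n_keep, step=1).fit(X, y)
    votes += rfe.support_.astype(int)

# Simple hybrid rule (assumption): keep channels chosen by a majority
# of the three evaluators.
selected = np.flatnonzero(votes >= 2)
print(f"{len(selected)} of {n_channels} channels selected")
```

In this sketch each evaluator contributes a binary channel mask via `RFE.support_`, and the masks are fused by majority vote; the selected channels would then feed the downstream graph-network classifier, whose construction the abstract does not detail.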
1 Zhengzhou University of Light Industry, School of Computer Science and Technology, Zhengzhou, China (GRID:grid.413080.e) (ISNI:0000 0001 0476 2801)