This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
In recent decades, the brain-computer interface (BCI) has emerged as a leading area of research. BCI systems transform human intentions into actuating signals, allowing an external device such as a robot, exoskeleton, wheelchair, neuroprosthesis, or other assistive technology to perform a specific task [1–3]. The most effective use of BCIs is to assist individuals with motor impairments; however, they are also widely used in gaming, drone control, therapy, and neuroergonomics [1, 4]. Based on the signal acquisition method, BCIs are divided into two classes: (i) invasive and (ii) noninvasive. Noninvasive BCIs are widely utilized because they do not require surgery and offer better portability, ease of measurement, and safety [5].
BCI comprises both software and hardware, and the primary processing steps are as follows: (i) signal acquisition, (ii) preprocessing, (iii) feature extraction, and (iv) classification [6, 7]. In addition to these processing steps, neural feedback (i.e., audio, video, or stimulation) also plays a vital role in designing BCI systems [8, 9]. Currently, the most frequently used imaging modalities for noninvasive BCIs include the electroencephalogram (EEG) [10, 11], functional magnetic resonance imaging [12, 13], and functional near-infrared spectroscopy (fNIRS) [14, 15]. Each modality has its advantages and disadvantages; however, fNIRS is relatively new and offers moderate temporal and spatial resolution [16–18]. fNIRS uses pairs of near-infrared wavelengths in the 650–1000 nm range that pass through the superficial cortical regions to quantify both absolute and relative concentration changes of oxyhemoglobin (HbO or ∆HbO) and deoxyhemoglobin (HbR or ∆HbR) [14, 19–21]. A stimulus to the brain activates the neocortex, increasing regional blood flow and volume as well as oxygenation (∆HbO or ∆HbR). These changes in ∆HbO or ∆HbR can be translated into useful commands for fNIRS-BCI applications.
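For context, the concentration changes are typically recovered from the measured change in light attenuation via the modified Beer-Lambert law; a generic two-wavelength form (given here for illustration, not taken from this paper) is

$$ \Delta A(\lambda) = \left( \varepsilon_{\mathrm{HbO}}(\lambda)\, \Delta \mathrm{HbO} + \varepsilon_{\mathrm{HbR}}(\lambda)\, \Delta \mathrm{HbR} \right) d \cdot \mathrm{DPF}(\lambda), $$

where $\Delta A(\lambda)$ is the change in optical density at wavelength $\lambda$, $\varepsilon$ denotes the extinction coefficients, $d$ is the source-detector distance, and $\mathrm{DPF}(\lambda)$ is the differential pathlength factor. Measuring at two wavelengths yields a 2 × 2 linear system that is solved for ∆HbO and ∆HbR.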
In BCI, three essential research issues are (i) enhancement of classification accuracy, (ii) the number of decoded commands, and (iii) fast BCI through quick command decoding. For classification accuracy, the selection of appropriate channels and features plays a vital role [22–24]. For active channel selection, averaging over all channels [25–27], averaging over a region of interest [28, 29], t- and z-statistics [29–32], baseline correction [33], vector-phase analysis [31, 34–36], the Pearson correlation coefficient [37], the contrast-to-noise ratio [38], LASSO homotopy-based sparse representation [39], and joint-channel-connectivity [40] methods have been employed in fNIRS-based BCI studies. Temporal statistical characteristics of the fNIRS time series (i.e., mean, slope, peak, minimum value, skewness, kurtosis, variance, and standard deviation) are the most commonly used features [6]. Other features include general linear model coefficients [41], Mel frequency cepstral coefficients [42], graph-based features [43], the phase and magnitude from vector-phase analysis [34], and frequency-domain features [44]. To increase the number of commands decoded from the brain, fNIRS has been hybridized with other modalities; for example, a number of commands have been generated by concurrently measuring fNIRS and EEG signals [6, 33, 45, 46]. Finally, the third issue is generating commands with minimum time or delay; different time window sizes (0–2, 0–2.5, 0–5, 2–7, 5–10, 0–10, 0–15, and 0–20 s) have been utilized to extract features for training and testing the classifier [29, 32, 33, 36, 47]. The first issue, improving classification accuracy, is addressed in this study.
Selecting appropriate channels and features enhances classification accuracy [22, 24]. Previous studies used different 2- and 3-feature combinations of temporal statistical features to determine the optimal feature combination for classifying various activities [34, 48]. However, manual selection of features is difficult, particularly when all channels are used for feature extraction and classification. Noori et al. [49] employed a genetic algorithm with a support vector machine (SVM) to find the optimal feature combination over various temporal window sizes and achieve higher classification accuracy. Aydin [50] proposed a more systematic approach to select appropriate subject-specific features: five temporal statistical features (i.e., mean, slope, maximum, skewness, and kurtosis) were extracted from all channels, and it was demonstrated that stepwise regression analysis based on sequential feature selection and ReliefF-based subject-specific features significantly enhanced the individual accuracies. Similarly, a genetic algorithm and the ReliefF algorithm were used to select the optimal features for upper-limb movement intention [51]. In reference [52], the authors employed a nondominated sorting multiobjective genetic algorithm (NSGA-II) for channel selection, and optimal features were then selected using the minimum redundancy maximum relevance (mRMR) algorithm; they obtained only 67.9% average classification accuracy for three-class fNIRS signals. Sparse representation classification has also been used to identify appropriate features [53]. However, further research is still required to improve subject-specific classification accuracy for multiclass activities in fNIRS-based BCI applications.
This study proposes a graph convolutional network (GCN) for selecting the appropriate/relevant channels for fNIRS-based BCI applications. Five temporal statistical features (i.e., mean, slope, maximum, skewness, and kurtosis) are extracted from 10 s segmented fNIRS signals of the selected channels obtained from the GCN. Furthermore, two filter-based feature selection algorithms, mRMR and ReliefF, are tested for further reduction in feature vector size. The support vector machine (SVM) is used for the training and validation. The proposed methodology is validated using the online benchmark dataset of motor imagery (left and right-hand) and mental arithmetic tasks (mental arithmetic vs. baseline). Finally, the efficiency of the proposed methodology is tested for four-class BCIs (i.e., left and right-hand motor imagery, mental arithmetic, and baseline).
2. Methods
Figure 1 shows the proposed methodology for selecting appropriate channels and features for fNIRS-based BCI applications. The details of the framework are discussed in the subsequent sections.
[figure(s) omitted; refer to PDF]
2.1. Experimental Data and Preprocessing
To validate the proposed methodology, fNIRS data of twenty-nine subjects from the openly available EEG + NIRS dataset are utilized [54]. The dataset consists of left-hand motor imagery (LHMI), right-hand motor imagery (RHMI), mental arithmetic (MA), and baseline task data. A total of 36 fNIRS channels (i.e., 9 in the prefrontal region, 12 each around the left and right motor regions, and 3 in the occipital region) were placed around Fp1, Fp2, Fpz, C3, C4, and Oz according to the 10-5 international system, using fourteen sources and sixteen detectors with a 3 cm source-detector distance. The dataset contains fNIRS and trigger data from six sessions, each with ten trials per activity (i.e., 30 trials of each activity in total). Each session consists of a 1 min pre-rest period, 20 trials (10 for each activity), and a 1 min post-rest period. A 2 s visual instruction was followed by a 10 s task phase and a rest period of random length between 15 s and 17 s. The dataset is divided into two parts: dataset A (LHMI and RHMI; sessions 1, 3, and 5) and dataset B (MA and baseline; sessions 2, 4, and 6). In the LHMI and RHMI tasks, the participants were instructed to imagine opening and closing their hand as if grabbing a ball. For the MA task, the subjects performed successive subtraction of a one-digit number from a three-digit number, whereas for the baseline task the subjects were instructed to rest. The obtained ∆HbO and ∆HbR data were preprocessed using 3rd-order low-pass (0.1 Hz cutoff) and high-pass (0.01 Hz cutoff) filters to minimize physiological noise from respiration, cardiac activity, and low-frequency drift. In this study, only the ∆HbO data were used for the subsequent analysis, as many previous studies have shown that ∆HbO is a more sensitive and reliable indicator [31, 36, 55]. For further details about the fNIRS configuration and data acquisition, see reference [54].
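The following is a minimal sketch of the band-pass preprocessing step described above, assuming the ∆HbO time series is stored as a (samples × channels) NumPy array and an illustrative sampling rate of 10 Hz; it uses a single zero-phase band-pass filter, whereas the original pipeline applied separate low- and high-pass filters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_hbo(hbo, fs=10.0, low=0.01, high=0.1, order=3):
    """3rd-order Butterworth band-pass (0.01-0.1 Hz), applied forward and
    backward (zero phase), to suppress cardiac, respiratory, and drift noise."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="bandpass")
    return filtfilt(b, a, hbo, axis=0)

# hbo: hypothetical (n_samples, 36) array of delta-HbO for one subject
# hbo_filtered = bandpass_hbo(hbo, fs=10.0)
```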
2.2. Graph Convolutional Network Theory
A graph $G = (V, E)$ consists of a set of nodes $V$ and a set of edges $E$ and can be described by its adjacency matrix $A \in \mathbb{R}^{N \times N}$, where $N = |V|$ and the entry $A_{ij}$ encodes the connection strength between nodes $i$ and $j$. The graph Laplacian is defined as $L = D - A$, where $D$ is the diagonal degree matrix with $D_{ii} = \sum_{j} A_{ij}$ [56, 57].
Spectral graph theory provides the mathematical framework for analyzing such graph-structured data [58]. Xu et al. [59] used the Laplacian matrix to define graph convolution and filtering. With the eigendecomposition $L = U \Lambda U^{\top}$, a signal $x \in \mathbb{R}^{N}$ defined on the nodes of the graph can be filtered in the spectral domain as follows:

$$ y = U\, g_{\theta}(\Lambda)\, U^{\top} x = U \left( \hat{g}_{\theta} \odot \left( U^{\top} x \right) \right), $$

where the Hadamard product $\odot$ denotes element-wise multiplication; that is, the filter coefficients $\hat{g}_{\theta}$ scale the graph Fourier coefficients $U^{\top} x$ of the signal component-wise.
By using graph filter theory, the layer-wise propagation of a graph convolutional network can be expressed as follows [59]:

$$ H^{(l+1)} = \sigma\!\left( \tilde{D}^{-1/2}\, \tilde{A}\, \tilde{D}^{-1/2}\, H^{(l)}\, W^{(l)} \right), $$

where $\tilde{A} = A + I$ is the adjacency matrix with added self-loops, $\tilde{D}$ is its diagonal degree matrix, $H^{(l)}$ is the node feature matrix at layer $l$ (with $H^{(0)}$ being the input features), $W^{(l)}$ is the trainable weight matrix of layer $l$, and $\sigma(\cdot)$ is a nonlinear activation function.
This form is referred to as a graph-based information-propagation paradigm and is the model used here to incorporate the fNIRS graph data.
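A minimal NumPy sketch of one such propagation step is given below; it implements the generic renormalized GCN layer on an fNIRS channel graph and is not the authors' exact network (the adjacency matrix, feature matrix, and weight shapes are illustrative).

```python
import numpy as np

def gcn_layer(A, H, W, activation=np.tanh):
    """One GCN propagation step: act(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_tilde = A + np.eye(A.shape[0])           # add self-loops
    d = A_tilde.sum(axis=1)                    # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalization
    return activation(A_hat @ H @ W)

# Hypothetical shapes: 36 channels (nodes), 5 node features, 8 hidden units
# A = np.abs(pcc_matrix)                       # (36, 36) channel graph
# H = node_features                            # (36, 5)
# W = 0.1 * np.random.randn(5, 8)
# H1 = gcn_layer(A, H, W)
```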
2.3. Optimal fNIRS Channel Selection Using GCN
A brain graph network can be used to represent the functional connectivity (FC) between brain signals. This study presents an optimal fNIRS channel selection method that combines a GCN with neuroscientific knowledge. Figure 2 illustrates the channel selection using the GCN.
[figure(s) omitted; refer to PDF]
In this study, all fNIRS channels are collected as a graph, with each channel viewed as a node. The connection between each pair of fNIRS channels is measured by the Pearson correlation coefficient (PCC), which is used to build the graph model of the data. The PCC between channels $i$ and $j$ is computed as follows:

$$ r_{ij} = \frac{\sum_{t=1}^{T} \left( x_i(t) - \bar{x}_i \right)\left( x_j(t) - \bar{x}_j \right)}{\sqrt{\sum_{t=1}^{T} \left( x_i(t) - \bar{x}_i \right)^{2}}\, \sqrt{\sum_{t=1}^{T} \left( x_j(t) - \bar{x}_j \right)^{2}}}, $$

where $x_i(t)$ and $x_j(t)$ are the ∆HbO time series of channels $i$ and $j$, $\bar{x}_i$ and $\bar{x}_j$ are their means, and $T$ is the number of samples.
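A minimal sketch of building the channel graph from the PCC matrix follows, assuming hbo_filtered is a (samples × channels) array; the threshold used to binarize the adjacency matrix is illustrative and is not a value reported in the paper.

```python
import numpy as np

def channel_graph(hbo_filtered, threshold=0.5):
    """Return the channel-wise PCC matrix and a thresholded adjacency matrix
    in which each fNIRS channel is a graph node."""
    pcc = np.corrcoef(hbo_filtered.T)          # (n_channels, n_channels)
    np.fill_diagonal(pcc, 0.0)                 # self-loops are added later in the GCN
    adj = (np.abs(pcc) >= threshold).astype(float)
    return pcc, adj

# pcc, A = channel_graph(hbo_filtered, threshold=0.5)   # hypothetical usage
```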
2.4. Feature Extraction
Numerous features have been reported in the fNIRS literature for diverse tasks; this section covers the most popular features used in fNIRS-BCI. Various features can be computed from a person's ∆HbO and ∆HbR activity. The filtered ∆HbO and ∆HbR data were segmented into 10 s windows [54], and the most commonly used temporal features, namely, mean, peak, slope, skewness, and kurtosis, were extracted [6, 69]. After applying the GCN, the temporal features of the selected/optimal channels were computed to form the hybrid feature vector.
2.4.1. Mean of the Signal
The signal mean of ΔHbO (and ΔHbR) within a window of $N$ samples is calculated as follows:

$$ \mu = \frac{1}{N} \sum_{n=1}^{N} X(n), $$

where $X(n)$ is the signal value at sample $n$.
2.4.2. Peak of the Signal
The maximum signal value inside a specific window is the peak feature of the signal. According to various research studies, peak values are one of the most compelling features in fNIRS studies [70, 71].
2.4.3. Slope of the Signal
This approach involves calculating the slope by directly determining the values at the beginning and end of a predefined timeframe (for instance, between 0 and 10 seconds from the stimulus’s onset time) [71].
2.4.4. Skewness of the Signal
Skewness measures the degree to which the signal values are asymmetric about their mean relative to a normal distribution. The skewness of the fNIRS signal is computed as follows:

$$ \mathrm{skew}(X) = \frac{E\left[ (X - \mu)^{3} \right]}{\sigma^{3}}, $$

where $\mu$ and $\sigma$ are the mean and standard deviation of the signal within the window.
2.4.5. Kurtosis of the Signal
Kurtosis measures the peakedness or convexity of the signal value distribution relative to the normal distribution. The kurtosis of the fNIRS signal is computed as follows:

$$ \mathrm{kurt}(X) = \frac{E\left[ (X - \mu)^{4} \right]}{\sigma^{4}}. $$
After computing all five features mentioned above, normalization was applied.
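A minimal sketch of computing the five temporal features for each selected channel over a 10 s window and rescaling them to [0, 1] is shown below; the min-max rescaling is an assumption, since the paper does not state which normalization scheme was used.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def temporal_features(window, fs=10.0):
    """window: (n_samples, n_channels) delta-HbO segment (e.g., the 10 s task phase).
    Returns an (n_channels, 5) array of [mean, peak, slope, skewness, kurtosis]."""
    duration = (window.shape[0] - 1) / fs
    feats = []
    for ch in window.T:
        slope = (ch[-1] - ch[0]) / duration    # slope from the window's first and last values
        feats.append([ch.mean(), ch.max(), slope, skew(ch), kurtosis(ch)])
    return np.asarray(feats)

def minmax_normalize(F):
    """Column-wise rescaling to [0, 1] (assumed normalization scheme)."""
    return (F - F.min(axis=0)) / (F.max(axis=0) - F.min(axis=0) + 1e-12)

# feature_vector = minmax_normalize(temporal_features(hbo_window)).ravel()  # hypothetical
```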
2.5. Optimal Feature Selection
The quality of the features is the most crucial component of any learning-based technique. The number of features used to represent the task also matters: with many redundant or irrelevant features, machine learning algorithms struggle to perform effectively, since irrelevant features contribute little discriminative information and lower the accuracy. Therefore, to improve performance, only the small subset of important features that best separates the target classes should be used to train the model. Accordingly, the minimum redundancy maximum relevance (mRMR) [72] and ReliefF [73] approaches are applied to lower the computational cost and boost accuracy.
2.5.1. Minimum Redundancy Maximum Relevance (mRMR)
The main goal of the mRMR approach is to maximize the relevance between the features and the target class while minimizing the redundancy (mutual correlation) among the features themselves. Mutual information is utilized in the mRMR approach to assess how similar two variables are. If $x$ and $y$ are two random variables with joint probability density $p(x, y)$ and marginal densities $p(x)$ and $p(y)$, their mutual information is defined as follows:

$$ I(x; y) = \iint p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}\, dx\, dy. $$

Given a candidate feature subset $S$ and target class $c$, mRMR seeks the subset that maximizes the relevance $D = \frac{1}{|S|} \sum_{x_i \in S} I(x_i; c)$ while minimizing the redundancy $R = \frac{1}{|S|^{2}} \sum_{x_i, x_j \in S} I(x_i; x_j)$, typically by maximizing the difference $D - R$ [72, 74].
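Below is a minimal greedy forward-selection sketch of the mRMR criterion above, using scikit-learn's mutual information estimators; it is one common way to implement mRMR and not necessarily the exact routine used in the study.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score
from sklearn.preprocessing import KBinsDiscretizer

def mrmr_select(X, y, n_select):
    """Greedy mRMR: at each step add the feature maximizing
    relevance(feature, label) minus mean redundancy with selected features."""
    relevance = mutual_info_classif(X, y, random_state=0)
    # Discretize features so pairwise feature-feature MI is well defined
    Xd = KBinsDiscretizer(n_bins=5, encode="ordinal",
                          strategy="quantile").fit_transform(X)
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best_j, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info_score(Xd[:, j], Xd[:, s])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

# idx = mrmr_select(X_train, y_train, n_select=20)   # hypothetical usage
```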
2.5.2. ReliefF
The ReliefF algorithm, proposed by Kononenko [73], is an adaptation of the Relief algorithm [75], which was motivated by instance-based learning. The ReliefF method performs well in noisy settings. Figure 3 illustrates how the ReliefF algorithm works.
[figure(s) omitted; refer to PDF]
The ReliefF algorithm is based on a k-nearest-neighbor (kNN) search. It initializes the number of nearest neighbors (k), the number of iterations (m), and the feature weight vector W[A]. In each iteration, the algorithm randomly selects a sample R from the feature matrix and finds its k nearest hits (neighbors of the same class) and k nearest misses (neighbors from each other class). The weight vector is then updated using the following equation:

$$ W[A] := W[A] - \sum_{j=1}^{k} \frac{\operatorname{diff}(A, R, H_j)}{m\,k} + \sum_{C \neq \operatorname{class}(R)} \frac{P(C)}{1 - P(\operatorname{class}(R))} \sum_{j=1}^{k} \frac{\operatorname{diff}(A, R, M_j(C))}{m\,k}, $$

where $H_j$ denotes the $j$-th nearest hit, $M_j(C)$ the $j$-th nearest miss from class $C$, $\operatorname{diff}(A, \cdot, \cdot)$ the normalized difference of feature $A$ between two samples, and $P(C)$ the prior probability of class $C$.
Additional information regarding how the ReliefF algorithm functions can be found in references [76–78].
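A compact NumPy sketch of the ReliefF weight update above for continuous features is shown below; it assumes the features have already been rescaled to [0, 1] (so diff reduces to an absolute difference) and that each class has at least k samples, and it is not the authors' specific implementation.

```python
import numpy as np

def relieff_weights(X, y, k=10, m=200, seed=0):
    """ReliefF feature weights for continuous, [0, 1]-scaled features."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    classes, counts = np.unique(y, return_counts=True)
    priors = dict(zip(classes, counts / n))
    W = np.zeros(d)
    for _ in range(m):
        i = rng.integers(n)
        xi, ci = X[i], y[i]
        dist = np.abs(X - xi).sum(axis=1)          # Manhattan distance to all samples
        dist[i] = np.inf                            # exclude the sampled instance itself
        hits = np.argsort(np.where(y == ci, dist, np.inf))[:k]   # k nearest same-class
        W -= np.abs(X[hits] - xi).mean(axis=0) / m  # mean over k = sum_j diff / (m * k)
        for c in classes:                           # k nearest misses per other class
            if c == ci:
                continue
            misses = np.argsort(np.where(y == c, dist, np.inf))[:k]
            W += (priors[c] / (1.0 - priors[ci])) * \
                 np.abs(X[misses] - xi).mean(axis=0) / m
    return W

# ranking = np.argsort(relieff_weights(X_norm, labels))[::-1]   # hypothetical usage
```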
2.6. Classification
This study uses the conventional SVM as the classifier owing to its high computational efficiency. As a classifier, the SVM determines the discriminative hyperplane that maximizes the margin between the classes and yields the best classification accuracy. Ten-fold cross-validation is employed to validate the proposed methodology.
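A minimal sketch of the classification stage using scikit-learn's SVC with 10-fold cross-validation is shown below; the RBF kernel, regularization constant, and feature standardization are illustrative choices, as the paper does not report the SVM hyperparameters.

```python
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_svm(X, y):
    """10-fold cross-validated accuracy of an SVM on the selected features."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    return scores.mean(), scores.std()

# mean_acc, std_acc = evaluate_svm(X_selected, labels)   # hypothetical usage
```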
3. Results
All processing was done using MATLAB 2022b. The following three cases were investigated: (i) two-class LHMI vs. RHMI classification, (ii) two-class MA vs. baseline classification, and (iii) four-class LHMI vs. RHMI vs. MA vs. baseline classification. When all channels were used, a total of 180 features were extracted (i.e., five temporal statistical features × 36 channels) for each activity. GCN was first applied to find the best-correlated channels of each activity for the individual subjects, and then feature selection algorithms were applied to further reduce the feature vector. Table 1 lists the number of selected channels for each case; only channels showing high probabilities after the GCN were selected, keeping the feature vector size uniform.
Table 1
Number of channels for various activities.
Subject | All channels | GCN-based selected channels for LHMI vs. RHMI | GCN-based selected channels for MA vs. baseline | GCN-based selected channels for LHMI vs. RHMI vs. MA vs. baseline |
1 | 36 | 16 | 14 | 14 |
2 | 36 | 15 | 14 | 14 |
3 | 36 | 9 | 11 | 9 |
4 | 36 | 13 | 14 | 13 |
5 | 36 | 16 | 13 | 13 |
6 | 36 | 11 | 10 | 10 |
7 | 36 | 17 | 15 | 15 |
8 | 36 | 17 | 17 | 17 |
9 | 36 | 11 | 15 | 11 |
10 | 36 | 6 | 2 | 2 |
11 | 36 | 12 | 11 | 11 |
12 | 36 | 18 | 18 | 18 |
13 | 36 | 13 | 12 | 12 |
14 | 36 | 18 | 15 | 15 |
15 | 36 | 10 | 9 | 9 |
16 | 36 | 16 | 15 | 15 |
17 | 36 | 13 | 12 | 12 |
18 | 36 | 16 | 4 | 4 |
19 | 36 | 14 | 18 | 14 |
20 | 36 | 15 | 14 | 14 |
21 | 36 | 11 | 2 | 2 |
22 | 36 | 7 | 6 | 6 |
23 | 36 | 13 | 10 | 10 |
24 | 36 | 15 | 17 | 15 |
25 | 36 | 12 | 10 | 10 |
26 | 36 | 15 | 17 | 15 |
27 | 36 | 15 | 11 | 11 |
28 | 36 | 18 | 16 | 16 |
29 | 36 | 15 | 8 | 8 |
The classification accuracies obtained using all channels and the GCN-based selected channels are shown in Figure 4. It can be observed that selecting appropriate channels leads to a significant increase in classification accuracy for all cases. The average classification accuracy increases from 62.2% (all channels) to 87% (GCN-based selected channels) for LHMI vs. RHMI, from 75.6% to 90.2% for MA vs. baseline, and from 45.4% to 77.8% for the four-class case.
[figure(s) omitted; refer to PDF]
Figures 5–7 depict the number of features selected by the mRMR and ReliefF feature selection algorithms and the corresponding classification accuracies for each case. Applying feature selection after the GCN-based channel selection reduces the number of features significantly while further increasing the classification accuracy in each case. Both mRMR and ReliefF substantially reduce redundant features; however, the classification accuracy obtained using ReliefF is more stable. Average classification accuracies of 87.8%, 87.1%, and 78.7% were obtained using mRMR for LHMI vs. RHMI, MA vs. baseline, and the four-class case, respectively. Similarly, for ReliefF, average classification accuracies of 90.7%, 93.7%, and 82.5% were obtained for LHMI vs. RHMI, MA vs. baseline, and the four-class case, respectively.
[figure(s) omitted; refer to PDF]
To further justify the results obtained for the four-class classification of LHMI, RHMI, MA, and baseline, Figure 8 shows the confusion matrices of different subjects and the number of correctly classified trials for each activity. It can be seen that nearly 24 trials are correctly classified for each class, showing the efficiency of the proposed framework.
[figure(s) omitted; refer to PDF]
Finally, depending on the normality of the data, either a two-sample t-test or a Wilcoxon rank-sum test is used to check the statistical significance of the differences in accuracy among the cases. Figure 9 depicts the statistical significance of the accuracies for the various cases.
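A minimal sketch of this test selection logic: a Shapiro-Wilk normality check decides between the two-sample t-test and the Wilcoxon rank-sum test; the 0.05 threshold and the use of Shapiro-Wilk are assumptions, as the paper does not name the normality test.

```python
from scipy import stats

def compare_accuracies(acc_a, acc_b, alpha=0.05):
    """Test whether two sets of subject-wise accuracies differ significantly,
    choosing the test according to the normality of both samples."""
    _, p_a = stats.shapiro(acc_a)
    _, p_b = stats.shapiro(acc_b)
    if p_a > alpha and p_b > alpha:               # both approximately normal
        _, p = stats.ttest_ind(acc_a, acc_b)      # two-sample t-test
        test = "two-sample t-test"
    else:
        _, p = stats.ranksums(acc_a, acc_b)       # Wilcoxon rank-sum test
        test = "Wilcoxon rank-sum"
    return p, test
```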
[figure(s) omitted; refer to PDF]
4. Discussion
For BCI applications, selecting appropriate channels and features plays a vital role in the system’s overall performance. In this study, enhancement in classification accuracy, one of the important research issues in fNIRS-based BCI, is pursued. The novelties of this paper are (i) selection of the best subject-dependent correlated channels using the GCN and (ii) reduction of uncorrelated features and selection of subject-dependent features with filter-based feature reduction techniques (i.e., ReliefF and mRMR).
The first phase of this study uses the GCN-based technique to choose the best channels for a particular task and subject. Several previous studies have shown that channel and feature selection techniques can improve the performance of fNIRS-BCI [34, 48–52]. Researchers have used all channels, averaging over all channels, averaging over a specific region, t- and z-statistics, and other techniques to select the active channels in fNIRS-based BCI studies. As activity appears in specific brain regions and channels, using all channels (i.e., both activity- and nonactivity-related channels) or averaging over all channels or specific regions will not yield high classification accuracy [22]. It is also evident from the results that the selected activity-related channels yield higher classification accuracy than using all channels (Figure 4).
In previous studies, researchers formed feature subsets by combining 2- and 3-combinations of temporal features [34, 48] or by using a redundant-feature reduction method based on mutual information [79]. The most important finding of these studies is that individual participants may have different combinations of optimal features [50]. Therefore, it is essential to identify subject-specific best feature subsets to obtain higher classification performance. In the second phase of this study, the redundant features of the chosen channels were eliminated using filter-based feature reduction methodologies (mRMR and ReliefF). In a previous study [50], the author used feature reduction methods to reduce the subject-specific features of fNIRS data using all channels. However, the information collected from all channels is not useful for every task; therefore, it is also necessary to investigate channel selection for a particular task and subject before applying feature reduction methodologies. Figures 5–7 show that the feature reduction approaches are highly beneficial for shortening computation time and improving classification performance. Compared with ReliefF, mRMR employed smaller feature subsets during model training and had a reasonable performance rate. However, statistical analysis shows that the difference in classification accuracy between GCN alone and GCN-mRMR is not statistically significant, whereas GCN-ReliefF yields a statistically significant improvement (Figure 9). Table 2 compares the proposed framework with previous studies that used the same dataset.
Table 2
Comparison of the proposed framework with the studies that used the same dataset.
Studies | Methodology | Two-class LHMI vs. RHMI (%) | Two-class MA vs. baseline (%) | Four-class LHMI vs. RHMI vs. MA vs. baseline (%) |
[30] | Channels: z-score; features: mean, peak, and slope | 87.2 (LHMI), 88.4 (RHMI) | 88.1 | — |
[54] | Channels: all; features: average value and average slope | 63.5 | 83.6 | — |
[80] | Channels: all; features: Hilbert transform and sum derivative | 70.14 | 84.94 | — |
[81] | Independent decision path fusion | 65.86 | 82.76 | — |
[50] | Channels: all; features: ReliefF (mean, slope, maximum, skewness, and kurtosis) | 77.41 | 86.83 | — |
This study | Channels: GCN; features: mRMR (mean, slope, peak, skewness, and kurtosis) | 87.8 | 87.1 | 78.7 |
This study | Channels: GCN; features: ReliefF (mean, slope, peak, skewness, and kurtosis) | 90.7 | 93.7 | 81.6 |
It is clear from Table 2 that the proposed framework achieves better classification performance than the other studies and may be helpful for BCI applications. One limitation of this study is that only ∆HbO was used for the analysis; in the future, analyses using ∆HbR, cerebral oxygen exchange (∆COE), and cerebral blood volume (∆CBV) can be investigated to further examine the classification performance. Second, only a 10 s window and five temporal features were used; other window sizes and features may be explored in future work.
5. Conclusion
In this work, we used a GCN and filter-based feature selection (mRMR and ReliefF) to successfully classify different brain activities using fNIRS signals. Many previous studies extracted the temporal features of all fNIRS channels and employed feature reduction methods to classify brain activities. In this work, the correlation between channels was used to construct the GCN model and find the appropriate channels for a particular activity and specific subject. The GCN-based channels show improvements in classification accuracy compared to using all channels. ReliefF and mRMR techniques were then employed to remove redundant features and further enhance classification efficiency. The online fNIRS dataset of motor imagery (left- and right-hand), mental arithmetic, and baseline tasks was used to validate the proposed framework. Both mRMR (i.e., 87.8% for motor imagery, 87.1% for mental arithmetic, and 78.7% for four-class) and ReliefF (i.e., 90.7% for motor imagery, 93.7% for mental arithmetic, and 81.6% for four-class) yielded high average classification accuracies across all subjects. However, the ReliefF method outperforms mRMR in terms of stability and average accuracy, which is also supported by the statistical analysis.
Authors’ Contributions
Conceptualization was carried out by Amad Zafar and Muhammad Umair Ali; formal analysis was carried out by Muhammad Umair Ali; funding acquisition was carried out by Kwang Su Kim; investigation was conducted by Amad Zafar and M. Atif Yaqub; methodology was carried out by Amad Zafar and Muhammad Umair Ali; software development was looked after by M. Atif Yaqub; supervision was done by Kwang Su Kim; validation was done by Karam Dad Kallu; Amad Zafar and Muhammad Umair Ali wrote the original draft; Jong Hyuk Byun, Min Yoon, and Kwang Su Kim reviewed and edited the paper.
Acknowledgments
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2022R1C1C2003637) (to K.S.K.).
[1] M. Zabcikova, Z. Koudelkova, R. Jasek, J. J. Lorenzo Navarro, "Recent advances and current trends in brain‐computer interface research and their applications," International Journal of Developmental Neuroscience, vol. 82 no. 2, pp. 107-123, DOI: 10.1002/jdn.10166, 2022.
[2] D. J. McFarland, J. R. Wolpaw, "Brain-computer interface operation of robotic and prosthetic devices," Computer, vol. 41 no. 10, pp. 52-56, DOI: 10.1109/mc.2008.409, 2008.
[3] G. Müller-Putz, R. Leeb, M. Tangermann, J. Hohne, A. Kubler, F. Cincotti, D. Mattia, R. Rupp, K. R. Muller, JdR. Millan, "Towards noninvasive hybrid brain–computer interfaces: framework, practice, clinical application, and beyond," Proceedings of the IEEE, vol. 103 no. 6, pp. 926-943, DOI: 10.1109/jproc.2015.2411333, 2015.
[4] Z. Liu, J. Shore, M. Wang, F. Yuan, A. Buss, X. Zhao, "A systematic review on hybrid EEG/fNIRS in brain-computer interface," Biomedical Signal Processing and Control, vol. 68,DOI: 10.1016/j.bspc.2021.102595, 2021.
[5] S. Waldert, "Invasive vs. non-invasive neuronal signals for brain-machine interfaces: will one prevail?," Frontiers in Neuroscience, vol. 10,DOI: 10.3389/fnins.2016.00295, 2016.
[6] K.-S. Hong, M. J. Khan, M. J. Hong, "Feature extraction and classification methods for hybrid fNIRS-EEG brain-computer interfaces," Frontiers in Human Neuroscience, vol. 12,DOI: 10.3389/fnhum.2018.00246, 2018.
[7] L. Naci, M. M. Monti, D. Cruse, A. Kubler, B. Sorger, R. Goebel, B. Kotchoubey, A. M. Owen, "Brain–computer interfaces for communication with nonresponsive patients," Annals of Neurology, vol. 72 no. 3, pp. 312-323, DOI: 10.1002/ana.23656, 2012.
[8] J. D. Rieke, A. K. Matarasso, M. M. Yusufali, A. Ravindran, J. Alcantara, K. D. White, J. J. Daly, "Development of a combined, sequential real-time fMRI and fNIRS neurofeedback system to enhance motor learning after stroke," Journal of Neuroscience Methods, vol. 341,DOI: 10.1016/j.jneumeth.2020.108719, 2020.
[9] L. F. Nicolas-Alonso, J. Gomez-Gil, "Brain computer interfaces, a review," Sensors, vol. 12 no. 2, pp. 1211-1279, DOI: 10.3390/s120201211, 2012.
[10] R. Abiri, S. Borhani, E. W. Sellers, Y. Jiang, X. Zhao, "A comprehensive review of EEG-based brain–computer interface paradigms," Journal of Neural Engineering, vol. 16 no. 1,DOI: 10.1088/1741-2552/aaf12e, 2019.
[11] M. Rashid, N. Sulaiman, A. P. P. Abdul Majeed, R. M. Musa, A. F. Ab Nasir, B. S. Bari, S. Khatun, "Current status, challenges, and possible solutions of EEG-based brain-computer interface: a comprehensive review," Frontiers in Neurorobotics, vol. 14 no. 25, DOI: 10.3389/fnbot.2020.00025, 2020.
[12] N. Weiskopf, "Real-time fMRI and its application to neurofeedback," NeuroImage, vol. 62 no. 2, pp. 682-692, DOI: 10.1016/j.neuroimage.2011.10.009, 2012.
[13] A. Tursic, J. Eck, M. Lührs, D. E. Linden, R. Goebel, "A systematic review of fMRI neurofeedback reporting and effects in clinical populations," NeuroImage: Clinical, vol. 28,DOI: 10.1016/j.nicl.2020.102496, 2020.
[14] K.-S. Hong, A. Zafar, "Existence of initial dip for BCI: an illusion or reality," Frontiers in Neurorobotics, vol. 12,DOI: 10.3389/fnbot.2018.00069, 2018.
[15] M. A. Yücel, J. J. Selb, T. J. Huppert, M. A. Franceschini, D. A. Boas, "Functional near infrared spectroscopy: enabling routine functional brain imaging," Current opinion in biomedical engineering, vol. 4, pp. 78-86, DOI: 10.1016/j.cobme.2017.09.011, 2017.
[16] V. Quaresima, M. Ferrari, "Functional near-infrared spectroscopy (fNIRS) for assessing cerebral cortex function during human behavior in natural/social situations: a concise review," Organizational Research Methods, vol. 22 no. 1, pp. 46-68, DOI: 10.1177/1094428116658959, 2019.
[17] D. A. Boas, C. E. Elwell, M. Ferrari, G. Taga, Twenty Years of Functional Near-Infrared Spectroscopy: Introduction for the Special Issue, vol. 85, 2014.
[18] K. S. Hong, M. A. Khan, U. Ghafoor, H. R. Yoo, "Acupuncture enhances brain function in patients with mild cognitive impairment: evidence from a functional-near infrared spectroscopy study," Neural regeneration research, vol. 17 no. 8,DOI: 10.4103/1673-5374.332150, 2022.
[19] F. Scholkmann, S. Kleiser, A. J. Metz, R. Zimmermann, J. Mata Pavia, U. Wolf, M. Wolf, "A review on continuous wave functional near-infrared spectroscopy and imaging instrumentation and methodology," NeuroImage, vol. 85,DOI: 10.1016/j.neuroimage.2013.05.004, 2014.
[20] P. Pinti, F. Scholkmann, A. Hamilton, P. Burgess, I. Tachtsidis, "Current status and issues regarding pre-processing of fNIRS neuroimaging data: an investigation of diverse signal filtering methods within a general linear model framework," Frontiers in Human Neuroscience, vol. 12,DOI: 10.3389/fnhum.2018.00505, 2018.
[21] P. Pinti, I. Tachtsidis, A. Hamilton, J. Hirsch, C. Aichelburg, S. Gilbert, P. W. Burgess, "The present and future use of functional near‐infrared spectroscopy (fNIRS) for cognitive neuroscience," Annals of the New York Academy of Sciences, vol. 1464 no. 1,DOI: 10.1111/nyas.13948, 2020.
[22] A. Zafar, U. Ghafoor, M. A. Yaqub, K.-S. Hong, "Initial-dip-based classification for fNIRS-BCI," Neural Imaging and Sensing, vol. 10865, pp. 116-124, DOI: 10.1117/12.2511595, 2019.
[23] M. Mumtaz Zahoor, S. A. Qureshi, S. Hussain Khan, A. Khan, U. Ghafoor, M. R. Bhutta, "A new deep hybrid boosted and ensemble learning-based brain tumor analysis using MRI," Sensors, vol. 22 no. 7,DOI: 10.3390/s22072726, 2022.
[24] M. Asam, S. H. Khan, A. Akbar, S. Bibi, T. Jamal, A. Khan, U. Ghafoor, M. R. Bhutta, "IoT malware detection architecture using a novel channel boosted and squeezed CNN," Scientific Reports, vol. 12 no. 1, pp. 15498-15512, DOI: 10.1038/s41598-022-18936-9, 2022.
[25] N. Naseer, K. S. Hong, "Classification of functional near-infrared spectroscopy signals corresponding to the right-and left-wrist motor imagery for development of a brain–computer interface," Neuroscience Letters, vol. 553, pp. 84-89, DOI: 10.1016/j.neulet.2013.08.021, 2013.
[26] N. Naseer, M. J. Hong, K.-S. Hong, "Online binary decision decoding using functional near-infrared spectroscopy for the development of brain–computer interface," Experimental Brain Research, vol. 232 no. 2, pp. 555-564, DOI: 10.1007/s00221-013-3764-1, 2014.
[27] F. Scarpa, S. Brigadoi, S. Cutini, P. Scatturin, M. Zorzi, R. Dell’Acqua, G. Sparacino, "A reference-channel based methodology to improve estimation of event-related hemodynamic response from fNIRS measurements," NeuroImage, vol. 72, pp. 106-119, DOI: 10.1016/j.neuroimage.2013.01.021, 2013.
[28] S. Zhang, Y. Zheng, D. Wang, L. Wang, J. Ma, J. Zhang, W. Xu, D. Li, D. Zhang, "Application of a common spatial pattern-based algorithm for an fNIRS-based motor imagery brain‐computer interface," Neuroscience Letters, vol. 655, pp. 35-40, DOI: 10.1016/j.neulet.2017.06.044, 2017.
[29] M. J. Khan, K.-S. Hong, "Passive BCI based on drowsiness detection: an fNIRS study," Biomedical Optics Express, vol. 6 no. 10, pp. 4063-4078, DOI: 10.1364/boe.6.004063, 2015.
[30] H. Nazeer, N. Naseer, A. Mehboob, M. J. Khan, R. A. Khan, U. S. Khan, Y. Ayaz, "Enhancing classification performance of fNIRS-BCI by identifying cortically active channels using the z-score method," Sensors, vol. 20 no. 23,DOI: 10.3390/s20236995, 2020.
[31] A. Zafar, K. S. Hong, "Neuronal activation detection using vector phase analysis with dual threshold circles: a functional near-infrared spectroscopy study," International Journal of Neural Systems, vol. 28 no. 10,DOI: 10.1142/s0129065718500314, 2018.
[32] K. S. Hong, H. Santosa, "Decoding four different sound-categories in the auditory cortex using functional near-infrared spectroscopy," Hearing Research, vol. 333, pp. 157-166, DOI: 10.1016/j.heares.2016.01.009, 2016.
[33] M. J. Khan, K. S. Hong, "Hybrid EEG–fNIRS-based eight-command decoding for BCI: application to quadcopter control," Frontiers in Neurorobotics, vol. 11,DOI: 10.3389/fnbot.2017.00006, 2017.
[34] H. Nazeer, N. Naseer, R. A. Khan, F. M. Noori, N. K. Qureshi, U. S. Khan, M. J. Khan, "Enhancing classification accuracy of fNIRS-BCI using features acquired from vector-based phase analysis," Journal of Neural Engineering, vol. 17 no. 5,DOI: 10.1088/1741-2552/abb417, 2020.
[35] A. Zafar, K.-S. Hong, "Reduction of onset delay in functional near-infrared spectroscopy: prediction of HbO/HbR signals," Frontiers in Neurorobotics, vol. 14 no. 10,DOI: 10.3389/fnbot.2020.00010, 2020.
[36] A. Zafar, K.-S. Hong, "Detection and classification of three-class initial dips from prefrontal cortex," Biomedical Optics Express, vol. 8 no. 1, pp. 367-383, DOI: 10.1364/boe.8.000367, 2017.
[37] M. A. H. Hasan, M. U. Khan, D. Mishra, "A computationally efficient method for hybrid EEG-fNIRS BCI based on the Pearson correlation," BioMed Research International, vol. 2020,DOI: 10.1155/2020/1838140, 2020.
[38] J. Lee, N. Mukae, J. Arata, K. Iihara, M. Hashizume, "Comparison of feature vector compositions to enhance the performance of NIRS-BCI-triggered robotic hand orthosis for post-stroke motor recovery," Applied Sciences, vol. 9 no. 18,DOI: 10.3390/app9183845, 2019.
[39] A. Gulraiz, N. Naseer, H. Nazeer, M. J. Khan, R. A. Khan, U. Shahbaz Khan, "LASSO homotopy-based sparse representation classification for fNIRS-BCI," Sensors, vol. 22 no. 7,DOI: 10.3390/s22072575, 2022.
[40] M. Huang, X. Zhang, X. Chen, Y. Mai, X. Wu, J. Zhao, Q. Feng, "Joint-channel-connectivity-based feature selection and classification on fNIRS for stress detection in decision-making," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 30, pp. 1858-1869, DOI: 10.1109/tnsre.2022.3188560, 2022.
[41] N. K. Qureshi, N. Naseer, F. M. Noori, H. Nazeer, R. A. Khan, S. Saleem, "Enhancing classification performance of functional near-infrared spectroscopy-brain–computer interface using adaptive estimation of general linear model coefficients," Frontiers in Neurorobotics, vol. 11,DOI: 10.3389/fnbot.2017.00033, 2017.
[42] M. S. B. A. Ghaffar, U. S. Khan, J. Iqbal, N. Rashid, A. Hamza, W. S. Qureshi, M. I. Tiwana, U. Izhar, "Improving classification performance of four class FNIRS-BCI using Mel Frequency Cepstral Coefficients (MFCC)," Infrared Physics and Technology, vol. 112,DOI: 10.1016/j.infrared.2020.103589, 2021.
[43] P. C. Petrantonakis, I. Kompatsiaris, "Single-trial NIRS data classification for brain–computer interfaces using graph signal processing," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26 no. 9, pp. 1700-1709, DOI: 10.1109/tnsre.2018.2860629, 2018.
[44] K. Paulmurugan, V. Vijayaragavan, S. Ghosh, P. Padmanabhan, B. Gulyás, "Brain–computer interfacing using functional near-infrared spectroscopy (fNIRS)," Biosensors, vol. 11 no. 10,DOI: 10.3390/bios11100389, 2021.
[45] R. Li, D. Yang, F. Fang, K.-S. Hong, A. L. Reiss, Y. Zhang, "Concurrent fNIRS and EEG for brain function investigation: a systematic, methodology-focused review," Sensors, vol. 22 no. 15,DOI: 10.3390/s22155865, 2022.
[46] Z. Li, S. Zhang, J. Pan, "Advances in hybrid brain-computer interfaces: principles, design, and applications," Computational Intelligence and Neuroscience, vol. 2019,DOI: 10.1155/2019/3807670, 2019.
[47] T. Gateau, G. Durantin, F. Lancelot, S. Scannella, F. Dehais, "Real-time state estimation in a flight simulator using fNIRS," PLoS One, vol. 10 no. 3,DOI: 10.1371/journal.pone.0121279, 2015.
[48] N. Naseer, F. M. Noori, N. K. Qureshi, K. S. Hong, "Determining optimal feature-combination for LDA classification of functional near-infrared spectroscopy signals in brain-computer interface application," Frontiers in Human Neuroscience, vol. 10,DOI: 10.3389/fnhum.2016.00237, 2016.
[49] F. M. Noori, N. Naseer, N. K. Qureshi, H. Nazeer, R. A. Khan, "Optimal feature selection from fNIRS signals using genetic algorithms for BCI," Neuroscience Letters, vol. 647, pp. 61-66, DOI: 10.1016/j.neulet.2017.03.013, 2017.
[50] E. A. Aydin, "Subject-Specific feature selection for near infrared spectroscopy based brain-computer interfaces," Computer Methods and Programs in Biomedicine, vol. 195,DOI: 10.1016/j.cmpb.2020.105535, 2020.
[51] C. Li, Y. Xu, L. He, Y. Zhu, S. Kuang, L. Sun, "Research on fNIRS recognition method of upper limb movement intention," Electronics, vol. 10 no. 11,DOI: 10.3390/electronics10111239, 2021.
[52] M. M. Esfahani, H. Sadati, "Cross-subject fNIRS signals channel-selection based on multi-objective NSGA-II algorithm," pp. 242-247, .
[53] H. Li, A. Gong, L. Zhao, W. Zhang, F. Wang, Y. Fu, "Decoding of walking imagery and idle state using sparse representation based on fNIRS," Computational Intelligence and Neuroscience, vol. 2021, pp. 2021-2110, DOI: 10.1155/2021/6614112, 2021.
[54] J. Shin, A. von Luhmann, B. Blankertz, D. W. Kim, J. Jeong, H. J. Hwang, K. R. Muller, "Open access dataset for EEG+NIRS single-trial classification," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25 no. 10, pp. 1735-1745, DOI: 10.1109/TNSRE.2016.2628057, 2017.
[55] U. Wolf, V. Toronov, J. H. Choi, R. Gupta, A. Michalos, E. Gratton, M. Wolf, "Correlation of functional and resting state connectivity of cerebral oxy-deoxy-and total hemoglobin concentration changes measured by near-infrared spectrophotometry," Journal of Biomedical Optics, vol. 16 no. 8,DOI: 10.1117/1.3615249, 2011.
[56] D. A. Spielman, "Spectral graph theory and its applications," Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS’07), pp. 29-38, DOI: 10.1109/FOCS.2007.56, .
[57] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, P. Vandergheynst, "The emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains," IEEE Signal Processing Magazine, vol. 30 no. 3, pp. 83-98, DOI: 10.1109/MSP.2012.2235192, 2013.
[58] W. Erb, "Shapes of uncertainty in spectral graph theory," IEEE Transactions on Information Theory, vol. 67 no. 2, pp. 1291-1307, DOI: 10.1109/TIT.2020.3039310, 2021.
[59] M. Xu, P. Fu, B. Liu, J. Li, "Multi-stream attention-aware graph convolution network for video salient object detection," IEEE Transactions on Image Processing, vol. 30, pp. 4183-4197, DOI: 10.1109/TIP.2021.3070200, 2021.
[60] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, P. S. Yu, "A comprehensive survey on graph neural networks," IEEE Transactions on Neural Networks and Learning Systems, vol. 32 no. 1,DOI: 10.1109/TNNLS.2020.2978386, 2021.
[61] T. N. Kipf, M. Welling, "Semi-supervised classification with graph convolutional networks," 2016. https://arxiv.org/abs/1609.02907
[62] V. Gonuguntla, Y. Wang, K. C. Veluvolu, "Event-related functional network identification: application to EEG classification," IEEE Journal of Selected Topics in Signal Processing, vol. 10 no. 7, pp. 1284-1294, DOI: 10.1109/JSTSP.2016.2602007, 2016.
[63] M. A. Yaqub, K. S. Hong, A. Zafar, C. S. Kim, "Control of transcranial direct current stimulation duration by assessing functional connectivity of near-infrared spectroscopy signals," International Journal of Neural Systems, vol. 32 no. 1,DOI: 10.1142/s0129065721500507, Jan 2022.
[64] W. Liang, J. Jin, I. Daly, H. Sun, X. Y. Wang, A. Cichocki, "Novel channel selection model based on graph convolutional network for motor imagery," Cognitive Neurodynamics,DOI: 10.1007/s11571-022-09892-1, 2022.
[65] R. Bapat, "On the adjacency matrix of a threshold graph," Linear Algebra and Its Applications, vol. 439 no. 10, pp. 3008-3015, DOI: 10.1016/j.laa.2013.08.007, 2013.
[66] S.-Y. Dong, J. Choi, Y. Park, S. Y. Baik, M. Jung, Y. Kim, S. H. Lee, "Prefrontal functional connectivity during the verbal fluency task in patients with major depressive disorder: a functional near-infrared spectroscopy study," Frontiers in Psychiatry, vol. 12,DOI: 10.3389/fpsyt.2021.659814, 2021.
[67] T. Behrouzi, D. Hatzinakos, "Understanding power of graph convolutional neural network on discriminating human EEG signal," Proceedings of the 2021 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE),DOI: 10.1109/Ccece53047.2021.9569129, .
[68] Y. Y. Wan, C. A. Yuan, M. M. Zhan, L. Chen, "Robust graph learning with graph convolutional network," Information Processing and Management, vol. 59 no. 3,DOI: 10.1016/j.ipm.2022.102916, May 2022.
[69] H.-J. Hwang, J.-H. Lim, D.-W. Kim, C.-H. Im, "Evaluation of various mental task combinations for near-infrared spectroscopy-based brain-computer interfaces," Journal of Biomedical Optics, vol. 19 no. 7,DOI: 10.1117/1.JBO.19.7.077005, 2014.
[70] M. Stangl, G. Bauernfeind, J. Kurzmann, R. Scherer, C. Neuper, "A haemodynamic brain–computer interface based on real-time classification of near infrared spectroscopy signals during motor imagery and mental arithmetic," Journal of Near Infrared Spectroscopy, vol. 21 no. 3, pp. 157-171, DOI: 10.1255/jnirs.1048, 2013.
[71] J. Shin, J. Jeong, "Multiclass classification of hemodynamic responses for performance improvement of functional near-infrared spectroscopy-based brain–computer interface," Journal of Biomedical Optics, vol. 19 no. 6,DOI: 10.1117/1.JBO.19.6.067009, 2014.
[72] H. Peng, F. Long, C. Ding, "Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27 no. 8, pp. 1226-1238, DOI: 10.1109/tpami.2005.159, 2005.
[73] I. Kononenko, "Estimating attributes: analysis and extensions of RELIEF," European Conference on Machine Learning, pp. 171-182, 1994.
[74] Y. Eroğlu, M. Yildirim, A. Çinar, "Convolutional Neural Networks based classification of breast ultrasonography images by hybrid method with respect to benign, malignant, and normal using mRMR," Computers in Biology and Medicine, vol. 133,DOI: 10.1016/j.compbiomed.2021.104407, 2021/06/01/2021.
[75] K. Kira, L. A. Rendell, "A practical approach to feature selection," Machine Learning Proceedings, pp. 249-256, 1992.
[76] M. Robnik-Šikonja, I. Kononenko, "Theoretical and empirical analysis of ReliefF and RReliefF," Machine Learning, vol. 53 no. 1/2, pp. 23-69, DOI: 10.1023/a:1025667309714, 2003.
[77] R. J. Urbanowicz, M. Meeker, W. La Cava, R. S. Olson, J. H. Moore, "Relief-based feature selection: introduction and review," Journal of Biomedical Informatics, vol. 85, pp. 189-203, DOI: 10.1016/j.jbi.2018.07.014, 2018/09/01/2018.
[78] Z. Wu, X. Wang, B. Jiang, "Fault diagnosis for wind turbines based on ReliefF and eXtreme gradient boosting," Applied Sciences, vol. 10 no. 9,DOI: 10.3390/app10093258, 2020.
[79] K. K. Ang, J. Yu, C. Guan, "Extracting and selecting discriminative features from high density NIRS-based BCI for numerical cognition," .
[80] E. Ergün, A. Ö, "Decoding of binary mental arithmetic based near-infrared spectroscopy signals," Proceedings of the 2018 3rd International Conference on Computer Science and Engineering (UBMK), pp. 201-204, DOI: 10.1109/UBMK.2018.8566462, .
[81] X. Jiang, X. Gu, K. Xu, H. Ren, W. Chen, "Independent decision path fusion for bimodal asynchronous brain–computer interface to discriminate multiclass mental states," IEEE Access, vol. 7, pp. 165303-165317, DOI: 10.1109/access.2019.2953535, 2019.
Copyright © 2023 Amad Zafar et al. This is an open access article distributed under the Creative Commons Attribution License (the "License"), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. https://creativecommons.org/licenses/by/4.0/
Abstract
In this study, a channel and feature selection methodology is devised for brain-computer interface (BCI) applications using functional near-infrared spectroscopy (fNIRS). A graph convolutional network (GCN) is employed to select the appropriate and correlated fNIRS channels. Furthermore, in the feature extraction phase, the performance of two filter-based feature selection algorithms, (i) minimum redundancy maximum relevance (mRMR) and (ii) ReliefF, is investigated. The five most commonly used temporal statistical features (i.e., mean, slope, maximum, skewness, and kurtosis) are used, and the conventional support vector machine (SVM) is utilized as the classifier for training and testing. The proposed methodology is validated using an openly available dataset of motor imagery (left- and right-hand), mental arithmetic, and baseline tasks. First, the efficacy of the proposed methodology is shown for two-class BCI applications (i.e., left- vs. right-hand motor imagery and mental arithmetic vs. baseline). Second, the proposed framework is applied to four-class BCI applications (i.e., left- vs. right-hand motor imagery vs. mental arithmetic vs. baseline). The results show that the number of appropriate channels and features is significantly reduced, resulting in a significant increase in classification accuracy for both two-class and four-class BCI applications. Both mRMR (i.e., 87.8% for motor imagery, 87.1% for mental arithmetic, and 78.7% for four-class) and ReliefF (i.e., 90.7% for motor imagery, 93.7% for mental arithmetic, and 81.6% for four-class) yielded high average classification accuracies.
Author Affiliations
1 Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
2 Department of Robotics & Artificial Intelligence (R&AI), School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), H−12, Islamabad 44000, Pakistan
3 ICFO-Institut de Ciències Fotòniques, The Barcelona Institute of Science and Technology, 08860 Cas-telldefels, Barcelona, Spain
4 Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea
5 Department of Mathematics, College of Natural Sciences, Pusan National University, Busan 46241, Republic of Korea
6 Department of Applied Mathematics, Pukyong National University, Busan, Republic of Korea
7 Department of Scientific Computing, Pukyong National University, Busan, Republic of Korea; Interdisciplinary Biology Laboratory (iBLab), Division of Biological Science, Graduate School of Science, Nagoya University, Nagoya, Japan