1. Introduction
BCI is a developing technology that decodes brain activity and transforms it into outputs that supplement, restore, improve, replace, or enhance natural CNS outputs [1]. Among various BCI tasks, MI is a prevalent paradigm, usually described as imagining the movement of a body part without actual motor execution. It has been demonstrated to share the same neural mechanisms as real motor execution and allows focused human intention to control a robotic arm, drone, or computer cursor [2]. It can support the interaction of patients with motor disabilities with their surroundings by controlling devices such as wheelchairs, prostheses, and computer cursors. EEG-based BCI is among the most commonly employed BCI methods because of its non-invasive nature, which does not require any surgical procedure [3]. A significant module of these BCI systems is the classification of localized and transient EEG modifications, such as event-related synchronization (ERS) or event-related desynchronization (ERD), into diverse categories of MIs [4]. A typical EEG-based BCI model comprises four parts: EEG signal acquisition, signal preprocessing, feature extraction, and pattern classification [5]. Feature extraction, the primary stage of interest here, employs frequency-, time-, or spatial-domain processing methods to extract the features used for classification and to decrease the dimensionality of feature vectors to satisfy real-time processing requirements [6].
Existing MI EEG-based BCI schemes mainly employ traditional signal processing or ML techniques for feature extraction and classification. Artificial intelligence (AI) refers to computers or systems that emulate human intelligence to carry out tasks and (iteratively) enhance themselves based on the data they obtain [7]. AI takes numerous forms, including ML and DL. ML is the category of AI that learns automatically with only minimal human intervention. DL, in turn, is a subcategory of ML that learns from massive data by employing more neural network (NN) layers than traditional ML methods. Numerous studies of EEG-based BCI employ ML and signal processing [8]. A significant benefit of DL is that it performs feature engineering on its own: the data are examined to discover relevant features, which are then integrated to enable faster learning without explicit instructions. The motivation behind improving MI classification in BCIs stems from the requirement to enhance non-invasive techniques for translating brain activity into actionable outcomes [9]. Effective MI classification can crucially advance assistive technologies, enabling more intuitive control of robotic arms, drones, and wheelchairs. By refining these methods, the study aims to give individuals with motor impairments greater autonomy and interaction with their environment. Employing advanced optimization and DL methods to improve classification accuracy can result in more reliable and effective BCI models, fostering improved integration of technology with human cognitive processes and enhancing the quality of life for users [10].
This article presents a Boosted Harris Hawks Shuffled Shepherd Optimization Augmented Deep Learning (BHHSHO-DL) technique for Motor Imagery Classification in BCI. The BHHSHO-DL technique mainly exploits a hyperparameter-tuned DL approach for MI identification in BCI. Initially, the BHHSHO-DL technique performs data preprocessing utilizing the wavelet packet decomposition (WPD) model. Besides, the enhanced densely connected networks (DenseNet) model extracts the complex and hierarchical feature patterns from the preprocessed data. Meanwhile, the BHHSHO technique-based hyperparameter tuning process is accomplished to elect optimal parameter values of the enhanced DenseNet model. Finally, the classification procedure is implemented by utilizing the convolutional autoencoder (CAE) model. The BHHSHO-DL methodology is evaluated by simulation on benchmark datasets. The key contributions of the BHHSHO-DL approach are listed below.
* The WPD technique refines data preprocessing by decomposing complex signals into simpler, more manageable components. This methodology crucially improves the quality of the input data, making it more appropriate for analysis by subsequent models. By enhancing data clarity and structure, WPD supports more accurate and effectual modelling.
* The improved DenseNet methodology excels at extracting complex and hierarchical features from preprocessed data. Employing its DL architecture, it captures intricate patterns with high precision. This advanced feature extraction enhances the overall performance and accuracy of the subsequent evaluation.
* Systematic tuning of the improved DenseNet method's hyperparameters with the BHHSHO technique optimizes parameter settings for enhanced performance. This meticulous adjustment improves the technique's accuracy and the efficiency of data analysis and interpretation. The refined parameters result in more reliable and precise outcomes.
* Employing a CAE for the final classification stage presents a novel technique for improving overall model performance. The CAE's capability to learn effective data representations enables robust, noise-tolerant classification. This innovative strategy departs from conventional classifier choices, utilizing the merits of the CAE model in representation learning and setting a new standard for MI classification.
2. Literature review
In [11], an innovative deep transfer NN (DTNN) method was developed. Primarily, a filter bank was employed. Afterwards, two domain adaptation components were developed concurrently: the first selected source-domain information similar to the target domain, and the second decreased the variance between the target and source domains. Finally, two adversarial methods were utilized to increase the precision and robustness of the classification. In [12], a hybrid DL model for effectively exploring the EEG signal was developed. The method proficiently chose and utilized the convolutional NN (CNN) filters in the developed model to extract significant multi-domain features, and redundant features could be removed during feature extraction to enhance the model's effectiveness. In [13], an innovative modular and self-organized model was developed. A pattern recognition method was presented for transforming the measured signals into classes representing intentions without prior preprocessing. A neuro-fuzzy component and a learning technique were used to analyze the model's internal process, and the whole learning method depended upon the ML technique. In [14], a multi-branch CNN (MBCNN) method with a temporal convolution network (TCN), an end-to-end DL method for decoding multiclass MI tasks, was introduced. The technique initially employed the MBCNN with diverse convolutional kernels and then presented the TCN to extract richer feature representations. The within-subject cross-session approach was implemented for validation.
Malibari et al. [15] developed an arithmetic optimizer with a RetinaNet-based DL method for MI classification (AORNDL-MIC) under BCIs. The continuous wavelet transform (CWT) and multiscale principal component analysis (MSPCA) techniques were employed. Besides, the DL-based RetinaNet was utilized in the feature extraction process through the ID3 approach. Similarly, an arithmetical optimization algorithm (AOA) was applied to tune the hyperparameters of RetinaNet. In [16], an innovative technique employing an artificial NN (ANN) architecture was proposed, and feature extraction methods were analyzed and compared. Four classification methods were applied: LDA, KNN, Quadratic Discriminant Analysis (QDA), and the developed ANN model. The study also included batch normalization layers in the presented ANN model to increase the learning accuracy and speed of the NN. In [17], an effective CNN method for EEG-based MI classification was introduced. An automatic channel selection technique relying on spatial filters was developed to decrease the model complexity, and the activations and weights were quantized to 8 bits with only slight precision loss. In [18], an EEG-based temporal 1D-CNN (ETIODCNN) method was developed for categorizing MI. Primarily, the technique extracted temporal relationships from EEG signals through its core blocks. Next, the method employed FC and global average pooling (GAP) layers to combine the temporal series features and accomplish the classification task. Sharma, Kim, and Gupta [19] compared conventional classification approaches with DL models, particularly Multi-Layered Perceptrons, for EEG MI tasks, showing that the SVM technique gives the quickest training and prediction speeds while maintaining comparable accuracy.
In [20], the authors introduce a multiscale CNN (MS-CNN) technique that extracts key features from diverse EEG frequency bands for MI BCI classification. The model improves accuracy by incorporating user-specific features and data augmentation methods to enhance robustness. Roy [21] proposes a transfer learning (TL)-based multiscale feature fused CNN (MSFFCNN) technique for multiclass MI classification, capturing features from diverse EEG frequency bands. Kumari et al. [22] present a hybrid optimization technique incorporating War Strategy Optimization (WSO) and Chimp Optimization Algorithm (ChOA) models to improve classification performance. The two-tier DL method, comprising a CNN for temporal features and a modified Deep NN (M-DNN) technique for spatial characteristics, improves BCI control through optimal channel selection and advanced optimization. Xie and Oniga [23] propose an integrated time-frequency domain data enhancement methodology. The approach also presents a parallel CNN that processes raw EEG images and those transformed via the CWT technique. Echtioui et al. [24] introduce an ANN model to improve the classification performance of MI; feature extraction methods, comprising time domain parameters and WPD, are also assessed. Alsuradi et al. [25] present Shapley-informed augmentation to enhance within-subject accuracy, based on a data-driven evaluation that detected inconsistent temporal features across sessions for finger MI. Arı and Taçgın [26] developed the No-Filter EEG (NF-EEG) method, a robust CNN that classifies multiclass MI signals directly from raw data without preprocessing. The technique also employed input reshaping and utilized diverse data augmentation models.
The existing studies on MI classification techniques encounter various limitations. Models employing dual domain adaptation and adversarial approaches may suffer from high computational complexity and limited scalability. Hybrid DL techniques utilizing convolutional filters may face problems with redundant features, affecting their efficiency. Models lacking preprocessing, namely modular and self-organized methods, may experience reduced accuracy and robustness. Multi-branch CNNs with TCNs can be computationally intensive, while methodologies incorporating arithmetic optimization and DL often encounter difficulties with hyperparameter tuning. ANN methods with feature extraction may fail to address overfitting or generalization adequately. Automatic channel selection in the CNN technique might result in precision loss, and temporal 1D-CNN techniques may fail to capture temporal relationships accurately. Methods encompassing multiscale CNNs or TL may struggle with dataset variability and generalization, and hybrid optimization approaches can have difficulties balancing exploration and exploitation. Time-frequency domain enhancement models may also encounter efficiency problems, and techniques such as No-Filter EEG may be limited by the absence of preprocessing, affecting their capability to handle noisy data effectively. In summary, existing MI classification techniques face challenges of computational complexity, limited generalization across various datasets, and insufficient handling of noisy or inconsistent data. Moreover, there is a requirement for more efficient feature extraction and data augmentation methods that can enhance accuracy and robustness while mitigating dependence on extensive preprocessing.
3. The proposed method
This study presents a unique BHHSHO-DL technique for MI classification in BCI. The BHHSHO-DL technique mainly exploits a hyperparameter-tuned DL model for MI identification in BCI. To achieve this, the BHHSHO-DL method follows four main stages: WPD-based preprocessing, DenseNet-based feature extraction, BHHSHO-based hyperparameter tuning, and CAE-based classification. Fig 1 illustrates the complete flow of the BHHSHO-DL model.
[Figure omitted. See PDF.]
3.1. Data preprocessing
Initially, the BHHSHO-DL technique performs data preprocessing using the WPD technique. The WPD model is an advanced model built upon wavelet decomposition [27]. The WPD model is highly advantageous for preprocessing because it can effectively handle non-stationary signals. Unlike conventional Fourier-based techniques, WPD decomposes data into diverse frequency sub-bands with both time and frequency resolution, making it adept at capturing transient and localized features. This elaborated decomposition allows for precise noise reduction and artefact removal, specifically in complex signals like EEG, where diverse kinds of noise affect several frequency ranges. Furthermore, WPD's multi-resolution analysis eases the extraction of relevant features while conserving significant signal characteristics. Compared to other models, WPD provides a more refined and adaptable preprocessing technique, improving the data's quality and accuracy for subsequent evaluation. Preprocessing with the WPD method effectually addresses noise and artefacts in EEG data by decomposing the signal into various frequency sub-bands.
This decomposition isolates and removes noise and artefacts from specific frequency ranges, enhancing the signal-to-noise ratio. By concentrating on the most relevant frequency components and reconstructing the signal from these clean sub-bands, WPD improves data quality while conserving critical data for subsequent analysis. This methodology confirms that the cleaned EEG data is more accurate and reliable for additional processing and interpretation. The capability of the WPD to manage non-stationary signals is crucial for EEG processing, as EEG data often exhibit rapid fluctuations and transient events that need precise evaluation. This capability confirms that significant features are captured while minimizing the impact of noise and artefacts. Moreover, multi-resolution analysis of the WPD model facilitates the superior decomposition of EEG signals compared to Independent Component Analysis (ICA), which may encounter difficulty in effectively isolating transient noise, ultimately improving the clarity and reliability of the data. Fig 2 portrays the overall structure of the WPD model.
[Figure omitted. See PDF.]
Unlike standard wavelet decomposition, WPD preserves fine resolution even at the highest frequencies, so it examines the signal more thoroughly. For different signals, WPD adaptively selects the frequency range matching the signal band, enhancing the time-frequency resolution. WPD therefore offers superior efficiency for localized analysis: it eliminates irrelevant data while retaining the feature content, which helps identify and better characterize the EEG signal. Multi-resolution analysis decomposes the Hilbert space L2(R) into the orthogonal sum of wavelet subspaces Wl according to the scaling factor l. The resulting subspaces comprise the scale space Vl and the wavelet subspace Wl.
(1)
The orthogonal decomposition of Hilbert space V1 ⊕ W1 is expressed by:(2)
Define the subspace as the function space spanned by um(t), such that um(t) satisfies:(3)
In Eq (3), g(k) = (−1)^k h(1 − k), where h(k) and g(k) represent the coefficients of the low- and high-pass filters, respectively, which are orthogonal to each other.
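As a quick numerical check of this quadrature-mirror relation, the Haar filter pair can be used (an illustrative, assumed choice rather than the paper's actual filters):

```python
import numpy as np

h = np.array([1.0, 1.0]) / np.sqrt(2.0)                 # Haar low-pass coefficients h(k)
g = np.array([(-1) ** k * h[1 - k] for k in range(2)])  # g(k) = (-1)^k * h(1 - k)

h_dot_g = float(np.dot(h, g))   # orthogonality of the two filters
h_norm = float(np.dot(h, h))    # unit energy of the low-pass filter
```

The dot product of h and g comes out to zero, confirming the orthogonality the text states.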
If m = 0, from Eq (3), the following results are obtained:(4)
In the multi-resolution analysis process, the scaling function ϕ(t) and the wavelet basis function ψ(t) satisfy:(5)
From the formulation, ϕ(t) = u0(t) and ψ(t) = u1(t). Thus {um(t)}m∈Z signifies the orthogonal wavelet packets determined by u0(t) = ϕ(t). The expression of the WPD coefficients is given below:(6)
WPD provides superior time-frequency resolution in both the low- and high-frequency ranges. The EEG signal data are represented by the wavelet packet coefficients at each decomposition scale, which vary across scales and serve as extracted features. The EEG is decomposed using wavelet packets to obtain wavelet coefficients at different scales, and the reconstructed EEG signals are used as input for EEG classification.
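A minimal sketch of this sub-band decomposition and reconstruction, using a recursive one-level Haar split as an assumed stand-in for the paper's wavelet packet basis (in practice a library such as PyWavelets with a Daubechies wavelet would typically be used):

```python
import numpy as np

def haar_split(x):
    """One-level Haar analysis: low-pass (a) and high-pass (d) sub-bands."""
    pairs = x.reshape(-1, 2)
    a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    d = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return a, d

def haar_merge(a, d):
    """One-level Haar synthesis: exact inverse of haar_split."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def wpd(x, level):
    """Wavelet packet tree: unlike plain DWT, EVERY sub-band is split again."""
    bands = [x]
    for _ in range(level):
        bands = [half for b in bands for half in haar_split(b)]
    return bands                      # 2**level sub-bands in natural order

def iwpd(bands):
    """Rebuild the signal by merging sibling sub-bands level by level."""
    while len(bands) > 1:
        bands = [haar_merge(bands[i], bands[i + 1]) for i in range(0, len(bands), 2)]
    return bands[0]

# toy "EEG" trace: a slow rhythm plus a high-frequency disturbance
t = np.arange(256) / 256.0
noisy = np.sin(2 * np.pi * 4 * t) + 0.3 * np.sin(2 * np.pi * 100 * t)

bands = wpd(noisy, level=2)           # 4 sub-bands
bands[-1] = np.zeros_like(bands[-1])  # suppress one sub-band before rebuilding
cleaned = iwpd(bands)
```

Without the zeroing step, `iwpd(wpd(noisy, 2))` reconstructs the input exactly; note that the natural (Paley) ordering of packet sub-bands differs from strict frequency ordering, so real denoising selects bands by their inspected frequency content.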
3.2. DenseNet-based feature extraction
At this stage, the enhanced DenseNet model extracts the complex and hierarchical feature patterns from the preprocessed data [28]. The DenseNet-based feature extraction model is highly effective due to its unique architecture, which promotes efficient feature reuse and deep network training. DenseNet connects each layer to every other layer feed-forward, allowing for better gradient flow and reducing the vanishing gradient issue. This dense connectivity eases the extraction of complex and hierarchical features, enhancing the capability of the technique to capture intrinsic patterns in the data. Moreover, DenseNet’s compact architecture mitigates the number of parameters, which reduces computational cost and overfitting. Compared to other methods, DenseNet’s effectual feature propagation and reuse contribute to more accurate and robust feature extraction, particularly in complex and high-dimensional datasets. Fig 3 demonstrates the framework of DenseNet.
[Figure omitted. See PDF.]
DenseNet comprises a 7x7 convolution layer with a stride of 2, a 3x3 max pooling layer, four dense blocks, three transition layers, and a classification layer. Each dense block carries out 1x1 and 3x3 convolutions and is followed by a transition layer. The transition layer comprises 1x1 convolutions and a 2x2 average pooling layer. The final classification layer contains a 7x7 GAP layer and fully connected layers with an activation function such as softmax. The hallmark of DenseNet is its dense connections, which ensure high information flow among layers: unlike a traditional CNN with L layers and L connections, DenseNet contains L(L+1)/2 direct connections. Assume the input image is x0 and the CNN comprises five layers; xi (i = 0, 1, 2, ⋯, 5) denotes the feature map of the ith layer, and Hi(·) indicates the nonlinear transformation, which may involve pooling, convolution, batch normalization, an activation function, etc. Hence, the output of the 5th layer is a nonlinear transformation of the concatenated earlier feature maps.
(7)
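The dense connectivity above (each layer receiving the concatenation of all earlier feature maps) can be sketched with a toy map H standing in for the BN-ReLU-convolution composite Hi(·); the channel counts and growth rate below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
growth_rate = 4   # channels contributed by each dense layer (illustrative)
n_layers = 5

def H(x_concat, out_ch):
    """Stand-in for the BN-ReLU-conv composite Hi(.): random linear map + ReLU."""
    Wl = rng.standard_normal((out_ch, x_concat.shape[0]))
    return np.maximum(Wl @ x_concat, 0.0)

features = [rng.standard_normal((8, 16))]      # x0: 8 channels, 16 spatial positions
for _ in range(n_layers):
    x_in = np.concatenate(features, axis=0)    # concat of ALL earlier feature maps
    features.append(H(x_in, growth_rate))      # xi = Hi([x0, x1, ..., x(i-1)])

total_channels = np.concatenate(features, axis=0).shape[0]
direct_connections = n_layers * (n_layers + 1) // 2   # L(L+1)/2 for L layers
```

The channel count grows linearly (k0 + l·growth_rate) while the number of direct connections grows quadratically, which is the source of DenseNet's feature reuse.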
The attention module is a resource allocation mechanism and can be classified into pixel attention, multi-stage attention, channel attention, etc. This study uses a squeeze and excitation (SE) block to learn feature weights from the loss, such that effective feature maps receive greater weight. The excitation function learns weights that explicitly model the relationships among feature channels, producing a weight for each channel via its parameters. For any given transformation Ftr mapping the input X (X ∈ RH′×W′×C′) into the feature maps U, where U ∈ RH×W×C, a corresponding SE module is generated to perform feature recalibration. First, the feature U is passed through the squeeze function Fsq, which compresses U into 1 × 1 × C features. Then, the features from Fsq are excited using the excitation function Fex. Lastly, the recalibrated feature is obtained through Fscale, which applies the excitation output as channel-wise weights to the prior feature maps by multiplication, completing the recalibration of the features along the channel dimension.
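A minimal numpy sketch of the squeeze-excitation-scale pipeline described above (the weight matrices W1, W2 and the reduction ratio r are illustrative assumptions):

```python
import numpy as np

def se_block(U, W1, W2):
    """Squeeze (global average pool), excite (FC-ReLU-FC-sigmoid), then scale."""
    z = U.mean(axis=(1, 2))                                      # Fsq: C-dim descriptor
    s = 1.0 / (1.0 + np.exp(-(W2 @ np.maximum(W1 @ z, 0.0))))    # Fex: channel weights
    return U * s[:, None, None]                                  # Fscale: reweighting

rng = np.random.default_rng(1)
C, r = 16, 4                                  # channels and reduction ratio (assumed)
U = rng.standard_normal((C, 8, 8))            # feature maps U with C channels
W1 = rng.standard_normal((C // r, C)) * 0.1   # bottleneck FC weights
W2 = rng.standard_normal((C, C // r)) * 0.1
V = se_block(U, W1, W2)
```

Each output channel is the corresponding input channel multiplied by a single sigmoid gate in (0, 1), which is exactly the per-channel recalibration the text describes.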
3.3. Hyperparameter tuning process
In this work, the BHHSHO approach-based parameter tuning process occurs to elect optimal parameter values of the enhanced DenseNet model [29]. The BHHSHO method is appropriate for hyperparameter tuning due to its robust search capabilities and improved convergence properties. By combining HHO with shuffled shepherd strategies, BHHSHO efficiently balances exploration and exploitation during the tuning process. This hybrid technique enhances the capability to escape local optima and attain a more global search, resulting in optimal parameter settings. The improved mechanism of the model additionally refines this process, accelerating convergence and enhancing the likelihood of finding better hyperparameter values. Compared to conventional models, the BHHSHO technique presents a more adaptive and effectual solution, making it highly efficient for intricate, high-dimensional optimization problems.
Hyperparameters such as learning rate and batch size play a significant role in the performance of the BHHSHO model. The learning rate determines how quickly the model adjusts to the error during training; an optimal rate facilitates faster convergence and reduces loss effectually. Meanwhile, batch size influences the stability of gradient updates and affects the capability of the technique to generalize. In this study, the learning rate emerged as the most impactful hyperparameter, substantially improving the method’s accuracy and reducing loss compared to other settings, accentuating the significance of careful hyperparameter tuning in attaining optimal performance metrics. Fig 4 illustrates the steps involved in the BHHSHO model.
[Figure omitted. See PDF.]
BHHSHO is a fusion of HHO and the shuffled shepherd optimization algorithm (SSOA). HHO's primary objective is to mimic hawks' natural hunting behaviour and prey movement to find solutions for single-objective problems. In the BHHSHO approach, the SSOA implements the solution update for HHO.
Exploration stage.
This phase describes how hawks position themselves when searching for prey. It relies on two strategies. The former describes how hawks perch according to the positions of the other hawks (Si, i = 1, 2, …, N), where N denotes the number of hawks, as depicted in Eq (8). The latter describes how hawks perch on a random tall tree (Srand), as given in Eq (9), where Si(t + 1) is the updated location of hawk i in the subsequent iteration, Srand(t) is the position of a randomly selected hawk, and r1, r2, r3, r4, and q are random numbers in [0, 1]. Sm(t) specifies the mean position of all the hawks, and Sprey(t) denotes the prey location. However, according to the BHHSHO method, the Srand computation relies on the best, worst, and current locations, as shown in Eq (10).
(8)
Here, U = r3(lb + r4(ub − lb))(9)
Srand(t) is evaluated in Eq (10) according to the BHHSHO.
(10)
Shift from exploration to exploitation.
This stage of HHO models how hawks change their behaviour from the exploration stage to the exploitation stage. The transition depends on the escaping energy of the prey (Eg), as shown below:(11)
Eg is computed in Eq (12) according to BHHSHO, where C indicates a chaotic random number produced by the logistic map function. The logistic map in Eq (13), a polynomial map of degree 2, is often used as a typical example of how a simple nonlinear dynamical equation can produce complex, chaotic behaviour.
(12)(13)
Eg indicates the escape energy; Eg0 symbolizes the initial energy of prey; t shows the iteration count; and t* signifies the maximal iteration.
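In the standard HHO formulation, the energy decay is commonly written as Eg = 2·Eg0·(1 − t/t*); the sketch below modulates that decay by a chaotic logistic-map factor C, as Eq (12) suggests (the exact way C enters the product is an assumption):

```python
import numpy as np

def logistic_map(x0=0.7, mu=4.0, n=200):
    """Degree-2 polynomial map x -> mu*x*(1-x): chaotic for mu = 4 (Eq (13))."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(mu * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

def escaping_energy(t, t_max, Eg0, C):
    """Standard HHO decay 2*Eg0*(1 - t/t_max), modulated by the chaotic factor C."""
    return 2.0 * Eg0 * (1.0 - t / t_max) * C

rng = np.random.default_rng(2)
C_seq = logistic_map()                           # chaotic sequence in [0, 1]
E = [escaping_energy(t, 200, 2.0 * rng.random() - 1.0, C_seq[t])
     for t in range(200)]                        # |Eg| shrinks as t -> t_max
```

Because Eg0 ∈ [−1, 1] and C ∈ [0, 1], |Eg| is bounded by 2 and decays to zero by the final iteration, driving the switch from exploration (|Eg| ≥ 1) to exploitation.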
Exploitation stage.
The prey escape behaviour and hawk hunting strategy are two primary components of this stage. In the exploitation stage, four strategies were followed:
* Soft besiege
* Hard besiege
* Soft besiege with progressive quick dives
* Hard besiege with progressive quick dives.
Soft besiege.
A soft besiege occurs when |Eg| ≥ 0.5 and r ≥ 0.5. The modelling of this behaviour is depicted in Eqs (14) and (15), where ΔS(t) indicates the difference between the position vector of the prey (rabbit) and the hawk's location at iteration t. The prey escaping strength, Jp = 2(1 − r5), varies randomly at every iteration, where r5 is a random number in [0, 1].
(14)(15)
Hard besiege.
If |Eg| < 0.5 and r ≥ 0.5, the hawks perform a hard besiege. The location update of the hawks is given in Eq (16).
(16)
According to BHHSHO, the SSOA performs the position update of the hawks, as given in Eq (17).
(17)
Soft besiege with progressive quick dives.
If |Eg| ≥ 0.5 and r < 0.5, the prey still has enough energy to escape, so the hawks construct a soft besiege before the surprise pounce. In this strategy, the hawks' locations are updated with team rapid dives based on Lévy flight to increase the exploitation ability. R specifies the problem dimension, LF shows the Lévy flight function, Q implies a random vector of size 1×R, and V specifies the dive used to improve exploitation.
(18)(19)(20)
Here, l and m are random numbers between zero and one, and γ is a constant set to 1.5. Eq (21) determines the location update of the hawks in soft besiege with progressive quick dives.
(21)
Hard besiege with progressive quick dives.
If r < 0.5 and |Eg| < 0.5, the prey cannot escape and the hawks construct a hard besiege. This strategy relies on the hard besiege given in Eq (22).
(22)
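The exploration / soft-besiege / hard-besiege logic above can be condensed into a bare-bones HHO-style loop on a toy sphere function. This sketch omits the rapid-dive branches and the SSOA update, so it illustrates the mechanics rather than the full BHHSHO algorithm:

```python
import numpy as np

def hho_minimize(f, dim=2, n_hawks=20, iters=200, lb=-5.0, ub=5.0, seed=3):
    """Bare-bones HHO sketch: exploration plus plain soft/hard besiege only."""
    rng = np.random.default_rng(seed)
    S = rng.uniform(lb, ub, (n_hawks, dim))           # hawk positions
    prey = min(S, key=f).copy()                       # best solution found so far
    for t in range(iters):
        for i in range(n_hawks):
            Eg0 = 2.0 * rng.random() - 1.0
            Eg = 2.0 * Eg0 * (1.0 - t / iters)        # escaping energy, Eq (11)
            r = rng.random()
            if abs(Eg) >= 1.0:                        # exploration: re-perch randomly
                S[i] = rng.uniform(lb, ub, dim)
            elif abs(Eg) >= 0.5 and r >= 0.5:         # soft besiege
                Jp = 2.0 * (1.0 - rng.random())       # random prey jump strength
                S[i] = (prey - S[i]) - Eg * np.abs(Jp * prey - S[i])
            else:                                     # hard besiege, Eq (16)
                S[i] = prey - Eg * np.abs(prey - S[i])
            S[i] = np.clip(S[i], lb, ub)
            if f(S[i]) < f(prey):                     # greedy best (prey) update
                prey = S[i].copy()
    return prey, f(prey)

sphere = lambda x: float(np.sum(x ** 2))
best, best_val = hho_minimize(sphere)
```

As |Eg| decays, hawks stop re-perching and collapse onto the prey position, which is exactly the exploration-to-exploitation shift the section describes.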
Fitness choice is a significant factor in managing the efficiency of the BHHSHO approach. The hyperparameter selection is encoded as a candidate solution whose effectiveness is measured by the fitness function. In this case, the BHHSHO methodology assumes accuracy as the primary criterion to design the FF, which is defined as:(23)(24)
FP and TP define false and true positive values.
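A minimal sketch of an accuracy-based fitness evaluation of candidate solutions (plain accuracy over labelled trials is used here; the paper's exact FF in Eqs (23)-(24) is expressed through TP/FP counts):

```python
def accuracy_fitness(y_true, y_pred):
    """Fraction of MI trials labelled correctly; higher fitness = better candidate."""
    correct = sum(int(t == p) for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# comparing two hypothetical candidate hyperparameter settings by their predictions
fit_a = accuracy_fitness([1, 0, 1, 1, 0], [1, 0, 1, 0, 0])   # 4 of 5 correct
fit_b = accuracy_fitness([1, 0, 1, 1, 0], [0, 1, 0, 1, 0])   # 2 of 5 correct
```

The optimizer simply prefers the candidate with the higher fitness, here the first setting.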
Convergence curve for BHHSHO optimization.
The convergence curves for the BHHSHO-DL technique are represented, demonstrating the effects of varying batch sizes, learning rates, and the number of layers. These curves emphasize how each parameter influences the model’s convergence behaviour, accentuating differences in speed and stability. The optimal configurations that result in efficient training and enhanced performance can be detected by analyzing these discrepancies. The training loss convergence curves for diverse batch sizes such as 8, 32, and 64 are depicted over a range of epochs in Fig 5. As illustrated, the loss decreases more rapidly with batch sizes of 32 and 64 compared to a batch size of 8, suggesting that larger batch sizes facilitate more stable training and faster convergence. This trend underscores the impact of batch size on the model’s learning efficiency throughout the training process.
[Figure omitted. See PDF.]
The training loss convergence curves for varying learning rates, namely 0.0001, 0.01, and 0.1, are presented over a series of epochs in Fig 6. The graph shows that a learning rate of 0.01 results in the most efficient loss reduction, while the lowest learning rate (0.0001) exhibits slower convergence. On the contrary, a learning rate of 0.1 depicts initial instability, representing that finding an optimal learning rate is significant for attaining effectual and stable training performance.
[Figure omitted. See PDF.]
The training loss convergence curves for models with different numbers of layers, namely 72, 96, and 121, are portrayed throughout several epochs in Fig 7. The graph depicts that the technique with 96 layers attains the most balanced and effectual reduction in loss, while the model with 121 layers experiences diminishing returns and slower convergence. On the contrary, the 72-layer model illustrates a relatively quick decrease in loss but may lack the depth required for capturing intricate patterns. This emphasizes the importance of optimizing the number of layers to balance performance and training efficiency.
[Figure omitted. See PDF.]
3.4. CAE-based classification process
Eventually, the classification procedure is implemented by utilizing the CAE model [30]. The CAE method is advantageous for classification tasks because it can learn effective, hierarchical feature representations from raw data. By encoding input data into a compressed latent space and decoding it back, CAEs capture crucial patterns and reduce dimensionality, improving the quality of the features supplied to classification models. This methodology enhances feature extraction and reduces noise and redundancy, providing more precise and robust classification outcomes. Furthermore, CAEs are effective at handling complex, high-dimensional data, making them superior to conventional techniques that might struggle with such challenges. Their unsupervised representation learning capacity gives them a crucial edge in attaining improved performance and generalization in classification tasks.
This model also outperforms dimensionality reduction and hierarchical feature learning, making it specifically effectual for EEG-based motor imagery (MI) classification. By capturing spatial and temporal patterns within the data, CAEs can automatically learn relevant features at various levels of abstraction, improving the accuracy of the classification. On the contrary, standard classifiers such as SVM depend heavily on predefined features, which may only partially capture the intrinsic relationships in EEG signals, resulting in suboptimal performance. This adaptive learning capability of CAEs allows for more robust and complex representations of the EEG data. Fig 8 specifies the structure of the CAE method.
[Figure omitted. See PDF.]
CAE combines the local convolutional connection with the AE, adding convolutional input reconstruction. Reconstructing the output values through the inverse convolution process is called the convolutional decoder, while the convolutional mapping from the input feature maps to the output is known as the convolutional encoder. Furthermore, the parameters of the encoding and decoding operations are computed through the typical unsupervised greedy AE training, where f(∙) and f′(∙) denote the convolutional encoder and decoder operations. The input feature map x ∈ Rn×l×l is obtained from the input layer or the prior layer; the feature maps measure l × l pixels, and there are n of them. The CAE operation consists of m convolution kernels, and the resulting layer outputs m feature maps. n denotes the number of input channels when the input feature maps are generated in the input layer, or the number of feature maps output by the prior layer when the input feature maps come from the prior layer. The convolution kernel size is d × d with d ≤ l, and b ∈ Rm and W = {wj, j = 1, 2, ⋯, m} characterize the parameters of the convolutional encoder to be learned; the convolutional decoder has its own corresponding parameters. First, the input images are encoded: each time, a d × d pixel patch xi, i = 1, 2, ⋯, p, is selected from the input images, and the weights wj of the jth convolutional kernel are applied for the convolution computation. Lastly, the neuron values oij, j = 1, 2, ⋯, m are computed in the output layer.
(25)
Where σ is a nonlinear activation function,(26)
Then, the output oij is passed to the convolutional decoder, and xi is reconstructed from oij as:
(27)
After each convolutional encoding and decoding pass, the reconstructed patches are generated: p patches of size d × d are obtained from the reconstruction. The MSE between each reconstructed image patch (i = 1, 2, ⋯, p) and the original input patch xi (i = 1, 2, ⋯, p) is used as the cost function. The reconstruction error is defined in Eq (29), and the cost function is defined in Eq (28).
J(W, b, W′, b′) = (1/p) Σ_{i=1}^{p} E_i  (28)
E_i = (1/2) ‖x_i − x̂_i‖²  (29)
Stochastic gradient descent (SGD) can be deployed to update the weights and minimize the reconstruction error, training the CAE layer. Finally, the trained parameters produce the output feature maps, which are transferred to the following layer. Fig 9 illustrates the flowchart of the CAE model.
4. Performance validation
This section investigates the performance of the BHHSHO-DL methodology on the BCI Competition (BCIC)-III [31] and BCIC-IV databases. These databases are preferred for MI classification because of their high-quality, diverse EEG recordings and well-defined experimental protocols. They present extensive and varied MI tasks, making them well suited for developing and benchmarking robust classification models, and their established use in the BCI community ensures reliability and comparability with existing research.
Table 1 and Fig 10 portray the classification outcomes of the BHHSHO-DL technique on the BCIC-III database. These values show that the BHHSHO-DL method delivers enhanced performance across all iterations, attaining an average precision of 97.70%, recall of 98.56%, accuracy of 98.15%, and F-score of 98.17%.
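The reported averages are standard confusion-matrix metrics. As a hedged illustration (the counts below are invented for a binary MI task, not the paper's data), precision, recall, accuracy, and F-score are computed as:

```python
# Illustrative binary confusion-matrix counts (not the paper's data):
# tp/fp/fn/tn = true/false positives and negatives.
tp, fp, fn, tn = 90, 2, 3, 85

precision = tp / (tp + fp)
recall    = tp / (tp + fn)
accuracy  = (tp + tn) / (tp + fp + fn + tn)
f_score   = 2 * precision * recall / (precision + recall)

print(f"prec={precision:.4f} rec={recall:.4f} "
      f"acc={accuracy:.4f} f1={f_score:.4f}")
```

For the multi-run averages in Table 1, such per-run values would simply be averaged over iterations.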
Table 2 and Fig 11 provide a detailed comparison of the BHHSHO-DL technique on the BCIC-III database [32–34]. These experiments show that the Adaptive PP-Bayesian, STFT-DL, optimized GA FKNN-LDA, and WTSE-SVM models reached ineffectual outcomes, while the CWTFB-TL, AORNDL-MIC, JFOFL-MICBCI, Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Bayesian Optimization (BO), SVM, and Decision Tree (DT) techniques attained closer performance. However, the BHHSHO-DL approach achieves the best result, with an accuracy of 98.15%.
The effectiveness of the BHHSHO-DL approach on the BCIC-III database is presented in Fig 12 in the form of training accuracy (TRAA) and validation accuracy (VALA) curves. The outcome offers a valuable view of the behaviour of the BHHSHO-DL approach across epochs, indicating its learning progress and generalizability. Notably, both TRAA and VALA improve consistently as the epoch count grows, confirming the adaptive nature of the BHHSHO-DL methodology in pattern detection. The increasing trend in VALA highlights the capability of the BHHSHO-DL methodology to fit the training data while still classifying unseen data accurately, indicating strong generalizability.
Fig 13 demonstrates the training loss (TRLA) and validation loss (VALL) results of the BHHSHO-DL technique on the BCIC-III dataset over different epoch counts. The progressive decrease in TRLA indicates that the BHHSHO-DL technique is refining its weights and reducing the classification error. The outcome shows a good fit of the BHHSHO-DL approach to the training data, underlining its ability to capture patterns. The BHHSHO-DL approach continually adjusts its parameters to reduce the differences between the real and predicted training classes.
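Curves such as those in Figs 12 and 13 are produced by logging per-epoch metrics during training. The following generic sketch (independent of the authors' model; the data and optimizer are a toy stand-in) records TRLA/VALL-style losses per epoch during gradient descent, which is how such decreasing loss curves are typically generated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task standing in for the classifier (illustrative only).
X = rng.standard_normal((200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.standard_normal(200)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

w = np.zeros(4)
lr, epochs = 0.05, 50
train_loss, val_loss = [], []   # TRLA / VALL style per-epoch logs

for _ in range(epochs):
    # One full-batch gradient step on the training MSE.
    grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
    w -= lr * grad
    # Log both losses after each epoch to draw the curves later.
    train_loss.append(float(np.mean((X_tr @ w - y_tr) ** 2)))
    val_loss.append(float(np.mean((X_va @ w - y_va) ** 2)))

print(train_loss[0], train_loss[-1])
```

Plotting the two logged lists against the epoch index yields exactly the kind of training/validation loss figure discussed above.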
A detailed accuracy outcome of the BHHSHO-DL method on the BCIC-IV database is given in Table 3 and Fig 14. These simulation outcomes indicate that the BHHSHO-DL technique provides increased performance over several runs. The BHHSHO-DL technique attains average accuracies of 83.05% for S-1, 88.99% for S-2, 86.35% for S-3, 93.41% for S-4, 87.59% for S-5, 86.55% for S-6, 88.78% for S-7, 90.82% for S-8, and 89.39% for S-9.
An extensive comparative outcome of the BHHSHO-DL method on the BCIC-IV database is given in Table 4 and Fig 15. The results show that the CSP and FBCSP MIRSR techniques obtained poorer outcomes, while the FDBN, AORNDL-MIC, and JFOFL-MICBCI methods performed notably better, and the GA, PSO, BO, SVM, and DT models attained slightly higher values. Compared with conventional GA and PSO, the BHHSHO algorithm shows a notable merit in balancing exploration and exploitation. This capability is significant for effectively navigating complex search spaces and finding optimal solutions, as shown by the superior convergence properties highlighted in Figs 5–7. The refined optimization behaviour of BHHSHO allows it to adapt more dynamically to the data landscape, resulting in improved performance metrics across diverse datasets, including BCIC-III and BCIC-IV. The BHHSHO-DL technique also delivers superior accuracy, with 92.16% for S-1, 91.86% for S-2, 89.61% for S-3, 96.47% for S-4, 90.77% for S-5, 90.10% for S-6, 92.30% for S-7, 94.14% for S-8, and 92.62% for S-9.
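Averaging the nine per-subject BCIC-IV accuracies listed above reproduces the overall BCIC-IV figure quoted in the conclusion (92.23%):

```python
# Per-subject BCIC-IV accuracies (%) reported for BHHSHO-DL, S-1..S-9.
acc = [92.16, 91.86, 89.61, 96.47, 90.77, 90.10, 92.30, 94.14, 92.62]

mean_acc = sum(acc) / len(acc)
print(round(mean_acc, 2))   # -> 92.23, the overall BCIC-IV accuracy
```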
The effectiveness of the BHHSHO-DL model on the BCIC-IV database is portrayed in Fig 16 through the TRAA and VALA curves. The outcome gives a useful account of the behaviour of the BHHSHO-DL technique across epochs, demonstrating its learning progress and generalizability. Noticeably, both TRAA and VALA improve continuously as the epoch count grows, reflecting the adaptive nature of the BHHSHO-DL technique in pattern detection. The improving trend in VALA underlines the ability of the BHHSHO-DL technique to fit the training data while still classifying unseen data precisely, showing strong generalizability.
Fig 17 displays the TRLA and VALL outcomes of the BHHSHO-DL method on the BCIC-IV dataset across different epoch counts. The gradual reduction in TRLA indicates that the BHHSHO-DL method is optimizing its weights and minimizing the classification error. The outcome reveals a good fit of the BHHSHO-DL method to the training data, underlining its ability to capture patterns. The BHHSHO-DL approach continually adjusts its parameters to reduce the differences between the predicted and real training classes. Hence, the BHHSHO-DL approach enhances the MI classification process.
5. Conclusion
This article presents a unique BHHSHO-DL technique for MI classification in BCI. The BHHSHO-DL technique mainly exploits a hyperparameter-tuned DL approach for identifying MI for BCI. To achieve this, the BHHSHO-DL method follows four main steps: WPD-based preprocessing, DenseNet-based feature extraction, BHHSHO-based parameter tuning, and CAE-based classification. The enhanced DenseNet model extracts complex and hierarchical feature patterns from the preprocessed data. Meanwhile, the BHHSHO-based hyperparameter tuning process elects the optimal parameter values of the enhanced DenseNet approach. Finally, classification is performed using the CAE model. The experimental evaluation of the BHHSHO-DL technique was conducted on benchmark databases; its performance validation showed superior accuracy values of 98.15% and 92.23% over other techniques on the BCIC-III and BCIC-IV datasets, respectively. Existing methods for MI classification face limitations including high computational complexity, difficulty generalizing across datasets, and problems with feature redundancy and overfitting. Some methods require more efficient preprocessing, affecting accuracy and robustness, while others suffer from precision loss or dataset variability. Future research should develop more effective approaches that balance computational demands with accuracy, enhance generalization through advanced feature extraction and data augmentation, and improve preprocessing methodologies to handle noisy data more effectively. Moreover, exploring novel hybrid optimization models and transfer learning (TL) methods could address existing challenges and enhance the model's overall performance.
Acknowledgments
This work was funded by the University of Jeddah, Jeddah, Saudi Arabia. Therefore, the authors thank the University of Jeddah for its technical and financial support.
References
1. Mohammadi E., Daneshmand P.G. and Khorzooghi S.M.S.M. Electroencephalography-based brain–computer interface motor imagery classification. Journal of Medical Signals and Sensors, 2022, 12(1), p.40. pmid:35265464
2. Sun B., Wu Z., Hu Y. and Li T. Golden subject is everyone: A subject transfer neural network for motor imagery-based brain computer interfaces. Neural Networks, 2022, 151, pp.111–120. pmid:35405471
3. Khanam T., Siuly S. and Wang H. An optimized artificial intelligence based technique for identifying motor imagery from EEGs for advanced brain computer interface technology. Neural Computing and Applications, 2023, 35(9), pp.6623–6634.
4. Ma J., Yang B., Qiu W., Li Y., Gao S. and Xia X. A large EEG dataset for studying cross-session variability in motor imagery brain-computer interface. Scientific Data, 2022, 9(1), p.531. pmid:36050394
5. Narayanan V., Nithya P. and Sathya M., 2023. Effective lung cancer detection using deep learning network. Journal of Cognitive Human-Computer Interaction, (2), pp.15–5.
6. Wang X., Yang R., Huang M., Yang Z. and Wan Z., 2021, March. A hybrid transfer learning approach for motor imagery classification in brain-computer interface. In 2021 IEEE 3rd Global Conference on Life Sciences and Technologies (LifeTech) (pp. 496–500). IEEE.
7. Tong J., Xing Z., Wei X., Yue C., Dong E., Du S., et al. Towards Improving Motor Imagery Brain–Computer Interface Using Multimodal Speech Imagery. Journal of Medical and Biological Engineering, 2023, 1–11.
8. Arpaia P., Esposito A., Moccaldi N., Natalizio A. and Parvis M. Online processing for motor imagery-based brain-computer interfaces relying on EEG. In 2023 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), 2023, pp.1–6.
9. Sadiq M.T., Yu X., Yuan Z., Aziz M.Z., Siuly S. and Ding W. Toward the development of versatile brain–computer interfaces. IEEE Transactions on Artificial Intelligence, 2021, 2(4), 314–328.
10. Dumitrescu C., Costea I.-M. and Semenescu A. Using Brain-Computer Interface to Control a Virtual Drone Using Non-Invasive Motor Imagery and Machine Learning. Applied Sciences, 2021, 11, 11876.
11. Zheng M. and Lin Y., 2024. A deep transfer learning network with two classifiers based on sample selection for motor imagery brain-computer interface. Biomedical Signal Processing and Control, 89, p.105786.
12. Medhi K., Hoque N., Dutta S.K. and Hussain M.I., 2022. An efficient EEG signal classification technique for Brain–Computer Interface using hybrid Deep Learning. Biomedical Signal Processing and Control, 78, p.104005.
13. Cano-Izquierdo J.M., Ibarrola J. and Almonacid M., 2023. Applying deep learning in brain computer interface to classify motor imagery. Journal of Intelligent & Fuzzy Systems, (Preprint), pp.1–14.
14. Yu S., Wang Z., Wang F., Chen K., Yao D., Xu P., et al., 2024. Multiclass classification of motor imagery tasks based on multi-branch convolutional neural network and temporal convolutional network model. Cerebral Cortex, 34(2), p.bhad511. pmid:38183186
15. Malibari A.A., Al-Wesabi F.N., Obayya M., Alkhonaini M.A., Hamza M.A., Motwakel A., et al., 2022. Arithmetic optimization with retinanet model for motor imagery classification on brain computer interface. Journal of Healthcare Engineering, 2022. pmid:35368960
16. Echtioui A., Zouch W., Ghorbel M., Mhiri C. and Hamam H., 2023. Classification of BCI Multiclass Motor Imagery Task Based on Artificial Neural Network. Clinical EEG and Neuroscience, p.15500594221148285. pmid:36604821
17. Wang X., Hersche M., Magno M. and Benini L., 2024. MI-BMInet: An efficient convolutional neural network for motor imagery brain–machine interfaces with EEG channel selection. IEEE Sensors Journal.
18. Chu C., Xiao Q., Chang L., Shen J., Zhang N., Du Y., et al., 2023. EEG temporal information-based 1-D convolutional neural network for motor imagery classification. Multimedia Tools and Applications, 82(29), pp.45747–45767.
19. Sharma R., Kim M. and Gupta A., 2022. Motor imagery classification in brain-machine interface with machine learning algorithms: Classical approach to multi-layer perceptron model. Biomedical Signal Processing and Control, 71, p.103101.
20. Roy A.M., 2022. An efficient multiscale CNN model with intrinsic feature integration for motor imagery EEG subject classification in brain-machine interfaces. Biomedical Signal Processing and Control, 74, p.103496.
21. Roy A.M., 2022. Adaptive transfer learning-based multiscale feature fused deep convolutional neural network for EEG MI multiclassification in brain–computer interface. Engineering Applications of Artificial Intelligence, 116, p.105347.
22. Kumari A., Edla D.R., Reddy R.R., Jannu S., Vidyarthi A., Alkhayyat A., et al., 2024. EEG-based motor imagery channel selection and classification using hybrid optimization and two-tier deep learning. Journal of Neuroscience Methods, 409, p.110215. pmid:38968976
23. Xie Y. and Oniga S., 2023. Classification of motor imagery EEG signals based on data augmentation and convolutional neural networks. Sensors, 23(4), p.1932. pmid:36850530
24. Echtioui A., Zouch W., Ghorbel M., Mhiri C. and Hamam H., 2024. Classification of BCI multiclass motor imagery task based on artificial neural network. Clinical EEG and Neuroscience, 55(4), pp.455–464. pmid:36604821
25. Alsuradi H., Khattak A., Fakhry A. and Eid M., 2024. Individual-finger motor imagery classification: a data-driven approach with Shapley-informed augmentation. Journal of Neural Engineering, 21(2), p.026013. pmid:38479013
26. Arı E. and Taçgın E., 2024. NF-EEG: A generalized CNN model for multi class EEG motor imagery classification without signal preprocessing for brain computer interfaces. Biomedical Signal Processing and Control, 92, p.106081.
27. Huang J.S., Liu W.S., Yao B., Wang Z.X., Chen S.F. and Sun W.F., 2021. Electroencephalogram-Based Motor Imagery Classification Using Deep Residual Convolutional Networks. Frontiers in Neuroscience, 15. pmid:34867174
28. Wang K., Jiang P., Meng J. and Jiang X., 2022. Attention-based DenseNet for pneumonia classification. IRBM, 43(5), pp.479–485.
29. Stateczny A., Praveena H.D., Krishnappa R.H., Chythanya K.R. and Babysarojam B.B., 2023. Optimized Deep Learning Model for Flood Detection Using Satellite Images. Remote Sensing, 15(20), p.5037.
30. Chen M., Shi X., Zhang Y., Wu D. and Guizani M., 2017. Deep feature learning for medical image analysis with convolutional autoencoder neural network. IEEE Transactions on Big Data, 7(4), pp.750–758.
31. Lemm S., Schafer C. and Curio G., 2004. BCI competition 2003–data set III: probabilistic modeling of sensorimotor μ rhythms for classification of imaginary hand movements. IEEE Transactions on Biomedical Engineering, 51(6), pp.1077–1080.
32. Yang E., Shankar K., Perumal E. and Seo C., 2023. Optimal Fuzzy Logic Enabled EEG Motor Imagery Classification for Brain Computer Interface. IEEE Access.
33. Lin R., Dong C., Zhou P., Ma P., Ma S., Chen X., et al., 2024. Motor imagery EEG task recognition using a nonlinear Granger causality feature extraction and an improved Salp swarm feature selection. Biomedical Signal Processing and Control, 88, p.105626.
34. Ganesh S., Kannadhasan S. and Jayachandran A., 2024. Multi class robust brain tumor with hybrid classification using DTA algorithm. Heliyon, 10(1).
Citation: Assiri FY, Ragab M (2024) Boosted Harris Hawks Shuffled Shepherd Optimization Augmented Deep Learning based motor imagery classification for brain computer interface. PLoS ONE 19(11): e0313261. https://doi.org/10.1371/journal.pone.0313261
About the Authors:
Fatmah Yousef Assiri
Roles: Conceptualization, Data curation, Investigation, Methodology, Project administration, Resources, Software, Supervision, Writing – original draft
Affiliation: Software Engineering Department, College of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia
Mahmoud Ragab
Roles: Data curation, Formal analysis, Funding acquisition, Project administration, Resources, Validation, Visualization, Writing – review & editing
E-mail: [email protected]
Affiliation: Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
ORCID: https://orcid.org/0000-0002-4427-0016
© 2024 Assiri, Ragab. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract
Motor imagery (MI) classification is commonly employed in building brain-computer interfaces (BCI) that control external tools as a substitute neuromuscular pathway. Effective MI classification in BCI improves communication and mobility for people with paralysis or motor impairment, providing a bridge between the brain’s intentions and external actions. Employing electroencephalography (EEG) or invasive neural recordings, machine learning (ML) methods are used to interpret patterns of brain activity linked with motor imagery tasks. These models frequently depend on techniques like support vector machines (SVM) or deep learning (DL) to distinguish among dissimilar MI classes, such as imagining left or right limb movements. This procedure allows individuals, particularly those with motor disabilities, to use their thoughts to command external devices like robotic limbs or computer interfaces. This article presents a Boosted Harris Hawks Shuffled Shepherd Optimization Augmented Deep Learning (BHHSHO-DL) technique based on Motor Imagery Classification for BCI. The BHHSHO-DL technique mainly exploits the hyperparameter-tuned DL approach for MI identification for BCI. Initially, the BHHSHO-DL technique performs data preprocessing utilizing the wavelet packet decomposition (WPD) model. Besides, the enhanced densely connected networks (DenseNet) model extracts the preprocessed data’s complex and hierarchical feature patterns. Meanwhile, the BHHSHO technique-based hyperparameter tuning process is accomplished to elect optimal parameter values of the enhanced DenseNet model. Finally, the classification procedure is implemented by utilizing the convolutional autoencoder (CAE) model. The simulation of the BHHSHO-DL methodology is performed on benchmark datasets; its performance validation portrayed superior accuracy values of 98.15% and 92.23% over other techniques on the BCIC-III and BCIC-IV datasets, respectively.