Abstract
The study introduces a hybrid computational framework that combines neuro-inspired information processing using spiking neural networks (SNNs) and quantum information processing using quantum kernels to develop quantum-enhanced machine learning models for spatio-temporal data, demonstrated through the classification of EEG data as a case study. In the proposed SNN-quantum computation (SNN-QC) framework, an SNN with spike-time information representation is employed to learn spatio-temporal interactions (EEG recorded from multiple channels over time). Frequency-based (rate-based) information, in the form of spike frequency state vectors, is extracted from the SNN and classified using a quantum classifier. In the latter part, we use the quantum kernel approach, utilising feature maps for classification tasks. The proposed SNN-QC is demonstrated on a benchmark EEG dataset to classify three distinct wrist movement tasks in six binary classification setups as a proof of concept. We introduce a novel high-order nonlinear feature map that demonstrates improved performance over state-of-the-art feature maps and several machine learning methods across most of the tasks studied. Furthermore, the role of hyperparameters for enhanced feature maps is also highlighted. The performance of SNN-QC is evaluated using statistical metrics and cross-validation techniques, demonstrating its efficacy across multiple binary classifiers. Quantum hardware validation is conducted using both a superconducting IBM-QPU and a high-fidelity noisy simulation that replicates a real QPU. Furthermore, the results demonstrate that the SNN-QC outperforms models that use statistical features rather than features extracted from the SNN, as the SNN accounts for the temporal interaction between the spatio-temporal input variables. Finally, we conclude that the SNN-QC offers a potential pathway for developing more accurate neuromorphic-quantum enhanced systems that are both energy-efficient and biologically inspired, and well-suited for dealing with spatio-temporal data.
Introduction
Neuromorphic systems based on spiking neural networks (SNN) have established themselves as a leading computational paradigm, especially for learning spatio-temporal streaming data, capturing complex interactions between spatially located input variables over time [1]. Many computationally efficient architectures based on SNN have been developed, one of them being the brain-inspired SNN NeuCube architecture [2]. Although efficient in terms of speed, accuracy, and low power consumption when processing temporal data, SNN systems require accurate classification methods for their final outputs. Research has explored using SNN for spatio-temporal feature extraction from spatio-temporal streaming data and then classifying the extracted feature vectors using machine learning (ML) methods, such as the Echo State Network (ESN) [3] or a dynamic evolving neuro-fuzzy system [4]. This study introduces, for the first time, the use of quantum classifiers to achieve quantum advantages in the classification of feature vectors extracted from an SNN model. Quantum computational models have shown promising results for vector-based datasets related to specific problems, but not for spatio-temporal data modelling [5–8]. Our main hypothesis is that combining SNN and quantum computational models will enhance the overall performance and the scope of applications of both computational paradigms.
Recognizing the advantages of both neuro-inspired information processing and quantum information processing for ML tasks, this study proposes an advanced hybrid network for designing a quantum-enhanced spiking neural network (SNN) model, illustrated on real-life datasets acquired from brain signals. The goal is to leverage the advanced specific information gathered from SNNs and utilize it further for quantum processing advantages. The dataset used in this study consists of complex and multidimensional electroencephalogram (EEG) input. The EEG data is inherently complex due to its high dimensionality and the intricate nature of spatio-temporal brain signals. EEG provides information on brain activity and has several crucial applications, including neuroscience, brain-computer interfaces (BCIs), medical diagnostics, neurofeedback, neurorehabilitation, EEG-driven biometric authentication, and so on [9–12]. Quantum machine learning (QML) may offer a promising paradigm for a more effective and accurate analysis of EEG datasets [13]. Therefore, our focus is on applying QML to solve classification problems in supervised learning tasks, particularly with real-world datasets that feature complex attributes.
Quantum algorithms for ML problems have been recognized as a fast-evolving domain at the intersection of quantum computing and artificial intelligence. The quantum framework leverages principles of quantum mechanics such as superposition, entanglement, and interference [14]. These principles help to perform computations that offer significant advantages over classical computers. The superposition of quantum bits (qubits) allows for concurrent quantum information processing, offering quantum parallelism [15]. These unique quantum characteristics help quantum algorithms achieve remarkable computational advantages [16, 17]. The integration of quantum computing with machine learning has garnered considerable interest, driving advancements in numerous applications and solidifying the practical value of quantum technology [18–21].
The motivation for QML comes from its potential for solving complex problems more efficiently, offering advantages in problems like optimization, classification, computer vision, pattern recognition, biological analysis, and many more [20–22]. QML models are also increasingly seen as promising candidates for utilising the scalability of quantum computers [23]. The objective of QML is to utilize quantum information processing to advance computational approaches and gain certain advantages, such as improved solutions or speed-ups [21, 23]. Currently, quantum computing is being explored across various fields of computational science to realize and capitalize on quantum benefits [22, 23]. While early demonstrations of quantum advantage over classical algorithms achieved higher accuracy on synthetic datasets [5], recent developments have focused on exploring and identifying opportunities for quantum advantage in real-world datasets, including neuronal data classification [6, 7, 24]. These datasets often present intricate, high-dimensional features that pose significant challenges for classical ML algorithms.
Quantum hardware continues to improve, and the field is moving rapidly from theoretical to practical applications, exploring quantum-enhanced models for different domains of science, engineering, and finance [25]. However, the development of robust quantum algorithms that could outperform classical algorithms remains a crucial challenge, requiring ongoing research in quantum algorithm design, error correction, and hardware scalability [8, 26]. Many quantum algorithms require far greater hardware resources than are available in near-term quantum computers, and such machines remain limited in the range of problems they can solve efficiently. They are particularly challenging to use for spatio-temporal data. Therefore, studies have been initiated in the direction of hybrid computational approaches that can leverage the strengths of both classical and quantum resources to gain computational advantages [27].
Quantum computers can be applied to execute specific tasks that exploit quantum information processing, whereas classical computers can be helpful for data pre-processing and optimization. One significant benefit of this method compared to classical approaches is the substantial decrease in model parameters, which can help reduce the overfitting often found in classical ML [20–22, 28]. Additionally, under certain conditions, quantum models can learn more quickly or achieve better accuracies for particular tasks. Previous studies on hybrid classical-quantum models emphasize the importance of quantum approximate optimization algorithms and quantum circuit learning, where the variational quantum circuit is essential as the quantum element, its parameters being updated by a classical computer [28–30]. This synergy is also helpful in developing efficient algorithms, particularly in the field of spatio-temporal data learning. The advantages of the hybrid model include reduced quantum resource requirements and improved scalability, albeit with certain challenges such as quantum state preparation and gate errors [31, 32].
The novel contributions of the present study are the following:
a novel hybrid spiking neural network-quantum kernel computation (SNN-QC) architecture is designed to classify spatio-temporal EEG data.
a new quantum feature map is introduced to overcome the limitations of existing feature maps by Suzuki et al. [8].
hyperparameter-tuned feature maps are employed for quantum-enhanced kernels, and the results are evaluated using cross-validation through multiple case studies and model comparisons.
the advantage of utilising an SNN for spatio-temporal frequency feature extraction from EEG data over CSP features is demonstrated.
The remainder of the paper is organized as follows: Sect. 2 provides background information on SNN models and quantum kernels. Section 3 provides the proposed SNN-QC framework, different encoding functions, EEG data, and implementation details. Section 4 presents results and analyses based on the framework’s performances, followed by a discussion and limitations in Sect. 5. Finally, Sect. 6 provides a conclusion with some proposals for future directions.
Background
SNN and SNNCube
A spiking neural network (SNN) is a structure that comprises spiking neurons and connections between them, where information is represented as sequences of spikes (binary units of 0 or 1 at a time). A class of SNN called brain-inspired SNN [1] incorporates brain principles of structure and learning and is best designed to process spatio-temporal brain datasets by converting raw inputs into trains of spikes. NeuCube [2] is an architecture which includes a 3D SNN structured according to the Talairach brain template [33]. This architecture offers more biologically realistic temporal processing capabilities within a computational model and is also considered energy-efficient due to its use of spikes – capturing discrete action potentials [1, 34]. Consequently, this direction of brain data modelling has shown advantages over other neural network architectures, as it provides event-based information processing ability where a neuron processes information upon detecting spikes [1, 34].
There are several approaches for utilising SNN for brain data modelling and analysis, but in this work, we have adopted the NeuCube brain cube architecture [2]. NeuCube is a 3D brain-inspired SNN architecture specifically designed for, but not restricted to, modelling and analysing spatio-temporal datasets related to brain activities and cognitive tasks. This model integrates spatial and temporal information in its learning process, potentially enhancing the model's effectiveness by harnessing critical spatio-temporal dynamics. By preserving brain spatio-temporal information, it helps identify the role of important features and brain areas, rather than relying solely on temporal information as in other computational paradigms. This architecture has three distinct modules: a spike encoder module, where an input signal is encoded into spikes; an SNNCube, where spike trains are learned in unsupervised learning mode; and a deSNN classifier for classification or regression tasks. Fig. 1 provides an overview of the general NeuCube architecture [2].
[See PDF for image]
Figure 1
An overview of the 3D NeuCube architecture having three modules: (i) a spike encoder; (ii) SNNCube for spatio-temporal learning of spike sequences; (iii) a deSNN classifier [2]
Spike encoding schemes are crucial in building SNN models by transforming analogue inputs into discrete spike events, thereby enabling information processing that mimics neuronal activities. Spike encoding methods can be divided into two major categories: rate coding and temporal coding [35]. In this work, we have applied a threshold-based representation approach – a subclass of temporal coding – following the study by Petro et al. [36] that suggests suitable spike encoders for temporal data. The spike trains are then mapped onto spatially defined brain areas within the SNNCube. This type of mapping is not applicable when the spatial locations of the spatio-temporal variables, such as climate or financial data, are unknown. In that case, other methods can be used for mapping the input variables into the SNNCube [37]. For brain data modelling, spatial mapping onto the SNNCube is defined using brain templates and specific mapping criteria. In NeuCube, neurons within the SNNCube are positioned according to the 3D coordinates defined by the Talairach template and a 10-10 international EEG coordinate system mapping protocol [2, 33].
Following this protocol, the encoded spike trains are mapped onto the SNNCube, with each EEG channel or variable mapped into a spatially corresponding neuron in the SNNCube. Thus, this workflow potentially helps to capture and preserve the spatio-temporal memory and connectivity of neurons within the SNNCube, leading to improved explainability of a brain model and data analysis [1, 38, 39]. The SNNCube is a 3D reservoir module within NeuCube architecture that allows spike trains to be placed into a higher-dimensional space before learning begins in an unsupervised mode. Furthermore, the learning of the reservoir incorporates spiking neurons based on the Leaky Integrate and Fire (LIF) model with recurrent connections. The input spike sequences from the data are processed through unsupervised learning using the Spike Timing Dependent Plasticity (STDP) rule. Trained SNNCube features are extracted as spike frequency state vectors to develop the proposed framework. Thus, we create a hybrid computational SNN-QC framework, where the initial learning of inputs is performed using the SNN process, followed by the quantum process.
Quantum kernel
Kernel methods aim to project the data from a lower-dimensional space to a higher-dimensional space in order to enhance data separability, revealing data patterns through linear or nonlinear transformations. In classical ML, nonlinear kernels typically work well with complex datasets; for instance, the radial basis function kernel is particularly useful for such datasets. Mathematically, a kernel calculates the similarity between data pairs in a feature space. A kernel function based on the inner product of a feature map $\phi$ is written as $k(\mathbf{x}_i, \mathbf{x}_j) = \langle \phi(\mathbf{x}_i), \phi(\mathbf{x}_j) \rangle$. Furthermore, kernels are used in ML algorithms like Support Vector Machines (SVM), enabling these methods to identify a suitable hyperplane for classifying data into their appropriate classes [40]. The concept of classical kernel methods can be extended to quantum computing algorithms for designing quantum kernels, where kernel values are estimated through measurements on quantum states, analogous to the dot product of quantum states in a quantum Hilbert space [41, 42]. Drawing a parallel, a quantum kernel can be estimated using quantum circuit simulation and used to form a quantum classifier, also referred to as a quantum support vector classifier (QSVC) [43, 44].
In the present landscape of quantum computing for ML applications, particularly with quantum kernel methods, quantum feature maps play a crucial role. As identified in previous studies, a feature map can be considered as the translation of classical data into quantum states [5, 43]. A suitable feature map should be expressive and capable of capturing the underlying data patterns [44, 45]. In the noisy intermediate-scale quantum (NISQ) era, where we do not have perfect hardware, it is ideal to limit circuit depth in order to achieve the maximum efficiency of a quantum model. An initial two-qubit feature map with an entangling layer up to second order, known as a ZZ-feature map, was proposed for achieving quantum advantage using synthetic datasets [5, 44]. Fig. 2 provides a quantum feature map circuit comprising data encoding functions $\phi$, single-qubit unitary gates, and CNOT gates. If the feature map circuit is repeated twice, the resulting quantum kernel becomes hard to compute classically. Thus, the experimental study designed by Havlíček et al. [5] suggests that this setup is useful to achieve a quantum advantage by implementing a complex transformation that is classically hard to simulate. This also implies that a single-layer quantum circuit would not provide sufficient complexity in estimating the inner product and, therefore, cannot provide the quantum advantage [5, 7].
[See PDF for image]
Figure 2
A circuit representation of a quantum feature map expanded up to the second order using $\phi$ as encoding functions, U as arbitrary single-qubit gates, and pairs of CNOT gates [8]
The quantum kernel formulation for gate-based computing is provided next. In gate-based computing, a quantum feature map can be represented by a quantum circuit. This circuit consists of an ensemble of unitary gates, including Hadamard gates used for qubit superposition and entanglement layers [5]. Pauli single-qubit gates are employed to construct a quantum kernel that utilizes encoding functions $\phi_S(\mathbf{x})$ to transform the input $\mathbf{x}$ into the quantum state space as follows:

$$U_{\Phi(\mathbf{x})} = \exp\!\left(i\,\alpha \sum_{S} \phi_S(\mathbf{x}) \prod_{k \in S} P_k\right) \qquad (1)$$

where $P_k$ are the Pauli gates and $\phi_S(\mathbf{x})$ represents the nonlinear data encoding functions of order-$|S|$ expansions, with $|S| \le 2$ [5, 23]. The factor $\alpha$ acts as a rotational factor that complements a qubit's phase rotation while encoding the data into quantum space. The Pauli operators and the rotational factor $\alpha$ can be regarded as hyperparameters for a quantum kernel that can play a crucial role in data transformation. Moreover, these hyperparameters emphasize the data encoding process to subsequently attain complex data distributions, and thus gain advantages in predicting the true class labels better. A general feature map is defined by the unitary operator with the Hadamard gate as $\mathcal{U}_{\Phi(\mathbf{x})} = U_{\Phi(\mathbf{x})} H^{\otimes n} U_{\Phi(\mathbf{x})} H^{\otimes n}$. Here, $\Phi(\mathbf{x})$ is the function used to transform the input data into a higher-dimensional feature space and is given by the set $\{\phi_S(\mathbf{x})\}$, where the $\phi_S$'s represent the user-defined data encoding functions. These encoding functions can be high-order nonlinear functions useful to encode the classical data into the quantum state space as $|\Phi(\mathbf{x})\rangle = \mathcal{U}_{\Phi(\mathbf{x})} |0\rangle^{\otimes n}$, with the circuit initializing at the state $|0\rangle^{\otimes n}$ [5, 8].

In the present quantum realm, achieving quantum advantage is not straightforward, especially when dealing with classical datasets derived from real-life applications. However, recent encouraging works have demonstrated quantum advantages by transitioning from earlier experimental studies based on synthetic datasets to real-life applications [5, 6, 8, 24, 43–46]. Building on this progress, the proposed study focuses on an advanced computational approach to highlight the application of neuronal data using a hybrid SNN-QC architecture in the following sections.
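As an illustration, the following minimal PennyLane sketch evaluates a kernel of this form on two qubits using the adjoint-overlap construction. The encoding function phi, the value of α, and the data are placeholders used for illustration only and do not correspond to the F1 map introduced in Sect. 3.

```python
import numpy as np
import pennylane as qml

n_qubits = 2
alpha = 1.0                                # rotational hyperparameter (placeholder value)
dev = qml.device("default.qubit", wires=n_qubits)

def phi(xi, xj):
    # Placeholder second-order encoding function (ZZ-style); not the paper's F1
    return (np.pi - xi) * (np.pi - xj)

def feature_map(x, reps=2):
    # Pauli feature map repeated twice, following Havlicek et al. [5]
    for _ in range(reps):
        for i in range(n_qubits):
            qml.Hadamard(wires=i)
            qml.RZ(2 * alpha * x[i], wires=i)
        qml.CNOT(wires=[0, 1])
        qml.RZ(2 * alpha * phi(x[0], x[1]), wires=1)
        qml.CNOT(wires=[0, 1])

@qml.qnode(dev)
def overlap(x1, x2):
    feature_map(x1)
    qml.adjoint(feature_map)(x2)           # apply the inverse map of the second input
    return qml.probs(wires=range(n_qubits))

def quantum_kernel(x1, x2):
    # |<Phi(x2)|Phi(x1)>|^2 equals the probability of measuring |0...0>
    return overlap(x1, x2)[0]

# Kernel matrix that can be passed to an SVM with kernel="precomputed"
X = np.random.default_rng(2).random((6, n_qubits))
K = qml.kernels.square_kernel_matrix(X, quantum_kernel)
```

The resulting kernel matrix can then be supplied to a support vector classifier with a precomputed kernel, which is the role played by the QSVC in the proposed framework.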
Methodology
SNN-QC
The proposed SNN-QC framework, illustrated in Fig. 3, presents an advanced computational approach. This framework introduces a novel direction for ML by integrating SNNs with quantum kernels, enabling the development of quantum-enhanced ML applications for the first time. The proposed framework aims to highlight two important aspects: (1) discovering quantum advantages through quantum kernels for handling complex and real-world datasets, and (2) utilising advanced biological learning systems such as SNNCube to acquire the most appropriate spatio-temporal information (spike frequency).
[See PDF for image]
Figure 3
A hybrid SNN-QC framework for developing quantum-enhanced classifiers with the following steps: (i) input EEG signals are transformed into spike trains; (ii) spikes are mapped onto a spatio-temporal filter (SNNCube) using known locations (i.e. EEG channels locations) and learned synchronously; (iii) an output module provides trained spiking features in the form of spike frequency state vectors; (iv) these spiking features are used as input vectors to quantum kernel classifiers
Spikes encoder
Since SNNs rely on transforming real-valued spatio-temporal sequences into spike sequences before training, this study uses a threshold-based spike encoder - a form of temporal contrast encoder [47]. This method generates a spike, only when a change in the input signal surpasses a specified threshold, recording the precise timing of each spike to indicate a change in the signal’s value (temporal change). The threshold is determined by comparing the absolute change between consecutive signals; if the change is significant, a positive or negative spike is produced, depending on the positive or negative direction of the change [36, 47].
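The following minimal Python sketch illustrates this encoding on a single channel. The threshold convention (mean plus a multiple of the standard deviation of the absolute differences) is one common choice assumed here for illustration; it is not necessarily the exact setting used in NeuCube.

```python
import numpy as np

def threshold_spike_encode(signal, k=0.5):
    """Threshold-based (temporal contrast) spike encoding.

    Emits a +1 / -1 spike whenever the change between consecutive samples
    exceeds a data-driven threshold, and 0 otherwise; the sign follows the
    direction of the change.
    """
    diff = np.diff(signal)                                   # sample-to-sample change
    threshold = np.mean(np.abs(diff)) + k * np.std(np.abs(diff))
    spikes = np.zeros_like(diff, dtype=int)
    spikes[diff > threshold] = 1                             # positive spike
    spikes[diff < -threshold] = -1                           # negative spike
    return spikes

# Example: encode one EEG channel (128 samples, i.e. 1 s at 128 Hz)
rng = np.random.default_rng(0)
channel = rng.standard_normal(128).cumsum()                  # toy signal
spike_train = threshold_spike_encode(channel)
```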
SNNCube learning
Building on the explanation of the SNNCube model in Sect. 2.1, we now provide additional details on how SNNCube learning is applied in the proposed work. Spike trains corresponding to known input variables (e.g. EEG channels) are then mapped onto the 3D SNNCube as a spatio-temporal filter. As illustrated in Fig. 3(ii), the spiking information from all channels is synchronously trained using an unsupervised STDP rule. The SNNCube accumulates spatio-temporal information and generates spike frequency state vectors across the input neurons [1, 2]. We refer to these state vectors as spiking features corresponding to the input variables. For every input neuron in the SNNCube, the number of its generated spikes (or the frequency of spiking within a time interval) is counted during the learning of each spatio-temporal sample belonging to a defined class. These counts form a feature vector that represents the sample and are extracted through an output module from the neurons that correspond to the input variables (e.g. EEG channels).
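As a minimal sketch of this step (assuming the per-neuron spiking activity of the trained SNNCube is available as a binary array; this is not the NeuCube implementation itself), the spike frequency state vector of a sample can be formed by counting the spikes of each input neuron:

```python
import numpy as np

def spike_frequency_features(cube_spike_activity):
    """Count the spikes emitted by each input neuron over one sample.

    cube_spike_activity: binary array of shape (n_time_steps, n_input_neurons),
    holding the spiking activity of the SNNCube neurons that correspond to the
    input variables (e.g. 14 EEG channels).
    Returns a 1-D spike frequency state vector (one value per channel).
    """
    return cube_spike_activity.sum(axis=0).astype(float)

# Toy example: 20 trials x 128 time steps x 14 input neurons -> 20 x 14 features
rng = np.random.default_rng(1)
activity = (rng.random((20, 128, 14)) > 0.9).astype(int)
features = np.stack([spike_frequency_features(trial) for trial in activity])
```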
SNNCube output
SNNCube trained frequencies are accumulated and extracted through the output module and used as new spiking feature state vectors. The outcome of spatio-temporal interactions produces a frequency value corresponding to each channel. The frequencies collected over each trial (sample) form a 14-dimensional spiking feature state vector. These feature vectors possess distinct characteristics compared to the raw signals and are subsequently renamed as AF3*, F7*, F3*, FC5*, T7*, P7*, O1*, O2*, P8*, T8*, FC6*, F4*, F8*, and AF4*. The derived features are now used as input features for preparing quantum state space using a quantum feature map.
Quantum feature map circuit
Quantum kernel classifiers are developed for different binary classification tasks. A quantum feature map uses different data encoding functions for data translation, analogous to classical feature maps in quantum Hilbert space. Data encoding functions can either be linear or nonlinear, depending on the problem requirements. For real-life applications, where datasets exhibit intrinsic complex patterns, a nonlinear data encoding function is often more effective. Different feature maps have demonstrated a superior ability to capture complex data patterns when compared to traditional neural networks and ML techniques, leading to analytical quantum advantage [5, 6, 8]. Building on this concept, we propose a novel quantum feature map (F1) defined in Eq. (2). The feature map F1 offers high-order nonlinearity in the process of encoding classical inputs into quantum state space. Additionally, we have analysed other state-of-the-art feature maps (F2)-(F6) defined in Eqs. (3)-(7) proposed by Suzuki et al. [8] and conducted comparative analyses as part of the proof of concept method outlined in Fig. 3. The methodology will follow the quantum feature map circuit design with different encoding functions defined in Eqs. (3)-(7) with two layers of circuit implementation over the prepared datasets.
[See PDF for Eqs. (2)–(7), defining the data encoding functions of feature maps F1–F6, respectively]
Mathematically, the proposed mapping function F1 can better address problems encountered by other functions, e.g. the sensitivity issues that arise with the data encoding function F5. F5 can be sensitive at the points where cos(x) becomes zero, while F1 faces no such issue. Quantum kernels induced by feature maps demonstrate notable efficacy in capturing complex data relationships and can substantially enhance generalization capabilities [5, 8, 48]. These characteristics underpin the pivotal role of quantum kernels in determining the quality of quantum models. Several factors influence quantum models; for example, proper tuning of the hyperparameters in a quantum kernel can yield quantum-enhanced kernels that are advantageous in ML tasks. In the proposed framework, selecting a suitable feature map is a crucial step for transforming data into quantum state space to gain quantum advantage.
Experiment setup
EEG data
The proposed SNN-QC is demonstrated on a case study for binary classification tasks using EEG data collected from a standard 14-channel Emotiv device (AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4) [49], available from NeuCube [50]. The data acquisition paradigm involved a single participant, with EEG samples collected from 14 channels at a sampling rate of 128 Hz. The dataset consists of 20 independent trials of 1-second duration for each of the subject's wrist movements — up, down, and held straight.
Implementation details
To maintain a low circuit depth for the proof of concept, the top two of the 14 spiking features are selected using the Boruta algorithm, which identifies the most important variables within the dataset. The Boruta algorithm can be viewed as an extension of the Random Forest technique, identifying the most meaningful features by iteratively eliminating less significant features through statistical analysis [51]. The Boruta feature selection is applied after the SNNCube output, aiming to identify the most relevant features to be used in the quantum circuit. This approach does not affect the spatio-temporal information learning or the extracted features. Such an approach for finding minimal spatio-temporal features from spatio-temporal data can also be used to extract predictive data markers, which needs to be explored in the future.
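A minimal sketch of this selection step, assuming the boruta Python package with a random forest estimator (the data below are synthetic placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy

# X: spike frequency state vectors (n_trials x 14 channels), y: binary labels
rng = np.random.default_rng(3)
X = rng.random((40, 14))
y = np.repeat([0, 1], 20)

forest = RandomForestClassifier(n_estimators=200, max_depth=5, random_state=0)
selector = BorutaPy(forest, n_estimators="auto", random_state=0)
selector.fit(X, y)

# Keep the confirmed features; in the study, the top two ranked features are retained
selected = np.where(selector.support_)[0]
X_selected = X[:, selected]
```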
Given that the EEG data has three distinct classes, the classification problem is divided into three binary pairs to develop three distinct quantum classifiers. This way, we can provide different data distribution inputs to models, demonstrating the proposed methodology’s capability. Three distinct binary classifiers are defined as follows: SNN12 for Classes 1 and 2 inputs, SNN13 for Classes 1 and 3 inputs, and SNN23 for Classes 2 and 3 inputs. Fig. 4 illustrates data inputs of binary spike frequency state vectors.
[See PDF for image]
Figure 4
Binary inputs of spike frequency state vectors are selected using the Boruta feature selection algorithm extracted by the SNNCube module for three distinct binary classifiers: (a) SNN12, (b) SNN13, and (c) SNN23
In addition, the raw EEG dataset is also processed using the Common Spatial Pattern (CSP) approach. CSP is a well-known tool for extracting meaningful features from brain signals, commonly used in motor imagery tasks [52]. It is a statistical approach that uses a linear transformation to project a multi-channel dataset into a lower-dimensional spatial subspace. CSP simultaneously diagonalizes the covariance matrices of both classes. Before applying CSP to the raw datasets, a band-pass filter with cut-off frequencies of 0.5–48 Hz has been applied. Two CSP components are extracted to align with the designed binary classification model and are further utilized in the comparative analysis. Fig. 5 illustrates CSP component 1 and component 2 obtained from raw EEG signals for different binary combinations. Another three distinct binary classifiers based on CSP components are defined as follows: CSP12 for Classes 1 and 2 inputs, CSP13 for Classes 1 and 3 inputs, and CSP23 for Classes 2 and 3 inputs.
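The following minimal sketch illustrates this pre-processing with SciPy filtering and the CSP implementation from MNE; the epoch array is a synthetic placeholder, and the exact CSP settings of the study are assumed rather than reproduced.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from mne.decoding import CSP

# Placeholder raw EEG epochs: (n_trials, 14 channels, 128 samples at 128 Hz)
rng = np.random.default_rng(4)
epochs = rng.standard_normal((40, 14, 128))
labels = np.repeat([0, 1], 20)

# Band-pass filter with 0.5-48 Hz cut-offs, as applied before CSP
b, a = butter(4, [0.5, 48.0], btype="bandpass", fs=128.0)
filtered = filtfilt(b, a, epochs, axis=-1)

# Two CSP components per binary pair, matching the two-feature quantum circuit inputs
csp = CSP(n_components=2, log=True)
csp_features = csp.fit_transform(filtered, labels)   # shape: (40, 2)
```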
[See PDF for image]
Figure 5
CSP components 1 and 2 are extracted from the raw EEG dataset for three distinct binary classifiers: (a) CSP12, (b) CSP13, and (c) CSP23
A single-subject EEG dataset with 20 trials for each of the three tasks provides only a small number of data points per class. However, this offers an opportunity to build a proof of concept, exploring the potential for developing a computational model with a smaller sample size and circuit depth. Fig. 4 and Fig. 5 provide the distribution of different input features across distinct wrist movement tasks, indicating the nature of the data complexity. This complexity can challenge a classifier model's ability to detect the task with higher accuracy, as these datasets do not follow any identifiable distributions. The experiments are performed with the PennyLane simulator, an open-source Python platform for quantum computation [53].
Evaluation criteria
The classification outcomes are validated using different statistical measurements, such as accuracy and the Matthews correlation coefficient (MCC), reported as the mean value over 5-fold cross-validation [54]. In 5-fold cross-validation, the dataset is divided into 5 equal folds (i.e. 20% of the data points in each fold), with each fold used once for testing while the remaining folds are used for training. Further, the data in each fold are shuffled and randomized to ensure the model's reliable performance and robustness.
MCC provides a reliable statistical metric that yields a high score only if the prediction obtains good scores across all four categories in the confusion matrix (TP, FN, TN, FP), while accounting for the proportion of both positive and negative elements in the data. The MCC score ranges from −1 to +1, where −1 indicates perfect misclassification, +1 represents perfect classification, and 0 is the expected value for a coin-tossing classifier. MCC provides suitable statistics for binary classification problems, offering a more accurate assessment compared to the F1-score and accuracy [54].
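As an illustration of this evaluation protocol, a minimal scikit-learn sketch is given below, with placeholder features and an RBF SVM standing in for any of the classifiers compared in this study:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.svm import SVC
from sklearn.metrics import make_scorer, matthews_corrcoef

# X: two selected spiking features per trial, y: binary labels (e.g. SNN12)
rng = np.random.default_rng(5)
X = rng.random((40, 2))
y = np.repeat([0, 1], 20)

# 5-fold cross-validation with shuffling, scored by accuracy and MCC
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scoring = {"accuracy": "accuracy", "mcc": make_scorer(matthews_corrcoef)}

scores = cross_validate(SVC(kernel="rbf"), X, y, cv=cv, scoring=scoring)
print(scores["test_accuracy"].mean(), scores["test_mcc"].mean())
```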
Furthermore, binary classification results from quantum classifiers are also compared with popular SVMs – linear (Lin) and radial basis function (RBF) kernels. Additionally, many other ML classifiers such as Gaussian Naïve Bayes (NB), Linear Discriminant Analysis (LDA), Decision Tree (DT), Random Forest (RF), and Multilayer Perceptron (MLP) are also compared to provide detailed validation of the proposed methodology. In the next section, we present and analyse the results.
Results and analysis
Here, we analyse the results in two directions: (1) the advantage of using SNN features for quantum classifiers following the SNN-QC framework, and (2) the use of CSP components directly to develop quantum models. This highlights the respective roles of the two feature extraction approaches when exposed to different quantum classifiers, as well as the role of the introduced feature map.
Quantum classifier for SNNCube features
The proposed SNN-QC framework is implemented on SNN-based features using different feature maps (F1-F6). The experiments for the binary classification tasks were conducted and analysed using different hyperparameter-tuned models. The performance of the hyperparameter-tuned quantum models is also evaluated to examine the impact of the mapping functions based on their expected ability to classify data.
Results shown in Fig. 6 can be analysed in two ways: (a) variations among quantum kernels, and (b) comparisons between quantum and parameter-optimized classical models for supervised learning tasks. Variations among quantum kernels can be useful in recognizing the individual role of a feature map. For classifier SNN12 (Fig. 6(a)), the testing accuracy indicates that quantum kernels F1, F2, and F4 each achieve an accuracy of 0.85, outperforming the other three quantum kernels. The lowest-performing quantum kernels are F3 and F6, with accuracies of 0.75 and 0.72, respectively. Among the classical models, RBF, with an accuracy of 0.85, performed on par with F1. Lin, LDA, and DT classifiers each achieved an accuracy of 0.82, while NB, with an accuracy of 0.72, was the worst-performing classifier. MCC scores support the classification accuracies and provide statistical robustness to the results; specifically, MCC scores for the quantum kernels mirror the accuracy trends, maintaining consistency and deviating in tandem when the accuracy changes. The introduced kernel F1 achieved the highest accuracy and the highest MCC score of 0.69 for SNN12, providing confidence in the proposed feature map (F1), supported by the other feature maps (F2 and F4) as well as the classical models (RBF). Similar trends can be seen in the standard deviations for all the models. As the datasets are sparse, the standard deviation is expected to be higher, since the cross-validation trials differ by a larger margin.
[See PDF for image]
Figure 6
The classification results for SNNCube features using the proposed hybrid SNN-QC framework for (a) SNN12 classifier, (b) SNN13 classifier, (c) SNN23 classifier, and (d) SNN123 - the average mean accuracy for all three pairs of SNN classifiers
In contrast to the SNN12 classifier, the results for the SNN13 classifier in Fig. 6(b) show that F1 achieves better performance than the rest of the quantum kernels (F2-F6), while maintaining similar or better accuracy and MCC scores relative to the classical models. The lowest-performing quantum kernels are F2 and F5, with accuracies of 0.70 and 0.75, respectively. For SNN13, F1, with an accuracy of 0.85, performed on par with popular classical models such as the Lin, RBF, LDA, DT, and MLP classifiers, and performed better than NB and RF. In particular, the MLP model, with an accuracy of 0.85, equally supports the advantage of F1. Supporting the accuracy trend, F1 achieved the highest MCC score of 0.72, compared to the best-performing classical models RBF, DT, and MLP with 0.70, 0.71, and 0.70, respectively. Moreover, the F1 kernel demonstrated improved performance compared to the F2–F6 kernels, indicating its advantage in the presented case study.
Results for SNN23 show a similar trend in Fig. 6(c), particularly in the quantum kernels' accuracy and MCC scores. The proposed feature map (F1) achieved a notable analytical advantage with an accuracy of 0.82 and an MCC score of 0.65, compared to F2 (0.72 accuracy, 0.47 MCC), F3 (0.60 accuracy, 0.18 MCC), and F5 (0.57 accuracy, 0.17 MCC). Additionally, F1 demonstrated superior performance over the classical models, except for RF, which provided an accuracy of 0.87 and an MCC score of 0.75. We also provide the average mean accuracy for the three pairs of binary classes in Fig. 6(d). It supports the argument that F1 achieved the highest accuracy of 0.84 and MCC score of 0.68, demonstrating its advantage over the other state-of-the-art feature maps (F2-F6). F2 followed with an accuracy of 0.75 and MCC of 0.52, while the lowest-performing quantum kernels were F3 (0.70 accuracy, 0.41 MCC) and F5 (0.71 accuracy, 0.45 MCC). F1 performed on par with RBF on the same statistical metrics, whereas RF performed closest to F1 with an accuracy of 0.83 and an MCC score of 0.67. Next, we present and analyse the classification results for the CSP inputs.
Quantum classifier for CSP features
CSP components (Fig. 5) exhibit different data patterns compared to the features obtained from the SNNCube (Fig. 4). Thus, they provide further scope for evaluating, to some extent, the generalization ability of quantum-enhanced models. In this case, CSP inputs are directly mapped onto the quantum feature space, and quantum kernels are constructed with different hyperparameters. Similar to the previous binary classifier set-up, three different CSP binary classifiers are formed and evaluated using the same cross-validation procedure.
Fig. 7 presents the quantum classification results for the CSP components. The overall trend suggests inferior classification outcomes compared to the spiking features. For the CSP12 classifier (Fig. 7(a)), F1 performs best, with an accuracy of 0.70 and an MCC of 0.40. Other kernels, such as F5 and F6, also performed relatively well, with accuracies of 0.65 and 0.62, respectively. Among the classical counterparts, NB, LDA, and RBF yielded accuracies of 0.62, 0.60, and 0.60, respectively, lower than the quantum kernels F1 and F5. CSP aims to provide linear separability between two tasks, yet the CSP components do not achieve good separability for CSP12, as shown in Fig. 5. With this in mind, it can be stated that F1 provided a suitable feature map, resulting in improved metrics in the higher-dimensional space.
[See PDF for image]
Figure 7
The classification results for CSP features using quantum kernels for (a) CSP12 classifier, (b) CSP13 classifier, (c) CSP23 classifier, and (d) CSP123 - the average mean accuracy for all three pairs of CSP classifiers
CSP13 yields better metrics than CSP12, predominantly because the CSP13 components have higher separability, as noticeable in Fig. 5. The better-performing quantum classifiers for CSP13 are F1 and F4, while RBF, LDA, and MLP lead among the classical models. The results, shown in Fig. 7(b), demonstrate that F1 achieves higher performance compared to the alternative models evaluated (F2-F6), with an accuracy of 0.87 and an MCC score of 0.78. Classical models such as RBF, LDA, and MLP, each with an accuracy of 0.85 and an MCC of 0.74, come closest to F1. For CSP23, the quantum kernels appear beneficial: F1 (accuracy of 0.75), F4 (0.70), and F6 (0.70) provide better accuracy and MCC scores, as shown in Fig. 7(c). Additionally, among the classical classifiers, NB, LDA, and MLP performed closest to F1 and F4, with an accuracy of 0.70 each. Similar to CSP12, the CSP23 components also exhibit lower separability, leading to lower performance across the models. Therefore, it is clear that the proposed feature map F1 led to a better quantum classifier model, comprehensively supported by multiple statistical metrics.
Finally, an average of the cross-validation mean values based on CSP components for all three pairs of binary classifiers is also provided in Fig. 7(d). F1, with an accuracy of 0.77 and an MCC of 0.53, again shows an analytical advantage over all the other quantum kernels (F2-F6) and the classical classifiers. Results from the optimized classical models have shown merit and appear competitive with F1, as demonstrated by strong statistical metrics; for instance, LDA achieved an accuracy of 0.71 and an MCC score of 0.39. Thus, F1 provided an analytical advantage when extended to different types of data distributions, whether SNN-based spiking features or CSP components, as demonstrated by our results in Fig. 6 and Fig. 7.
The results demonstrated that quantum classifiers using SNN-based features have outperformed those based on the conventional statistical feature selection method, CSP. This enhancement in feature extraction allows quantum kernels to achieve superior performance, making the proposed framework a promising approach for handling spatio-temporal datasets. We next discuss the influence of the hyperparameter.
Hyperparameter analysis
In quantum kernel computations, the data encoding process plays a crucial role and can depend on various factors, such as the single-qubit gates and the rotational factor α, which together form the hyperparameters. Fig. 8 highlights the role of the hyperparameter α in enhancing classification results across different quantum kernels. It is common to use different unitary gates during data encoding into the quantum feature space; therefore, here we emphasize the role of the rotational factor α, which is less often discussed. Different values of α have led to deviations in the kernels' performances, as shown in Fig. 8, for the three distinct SNNCube classifiers. Tuning α appears to improve quantum kernel performance by allowing a more complex and flexible decision hyperplane, thus providing greater flexibility in the model. The α-hyperparameter for the quantum kernels is tuned empirically to visualize its impact on the classification tasks. Further, the hyperparameters of the distinct classical kernels and ML classifiers are optimized by a cross-validated grid search method, and the optimized values are provided in the Supplementary file.
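A minimal sketch of such an α sweep using a precomputed kernel and 5-fold cross-validation is given below. The kernel here is an RBF-like stand-in parameterised by α so that the sweep stays self-contained; in the actual framework, a quantum kernel matrix (e.g. the PennyLane construction sketched in Sect. 2.2) would be used instead, and the listed α values are illustrative only.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

def kernel_matrix(X1, X2, alpha):
    # Stand-in kernel parameterised by alpha; replace with the quantum kernel of choice
    d = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-alpha * d)

rng = np.random.default_rng(6)
X, y = rng.random((40, 2)), np.repeat([0, 1], 20)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for alpha in (1.0, 2.0, 3.0):                       # illustrative candidate values
    K = kernel_matrix(X, X, alpha)
    acc = cross_val_score(SVC(kernel="precomputed"), K, y, cv=cv).mean()
    print(f"alpha={alpha}: mean CV accuracy {acc:.2f}")
```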
[See PDF for image]
Figure 8
The impact of tuning the rotational factor (α-hyperparameter) on the binary classification performance of spike frequency state vectors: (a) SNN12, (b) SNN13, and (c) SNN23 across various feature maps
The hyperparameter analysis considers several values of α (the largest being 3), and their impact across all quantum kernels is highlighted in Fig. 8. For SNN12 and SNN13, one value of α combined with entangling gates provided an improved solution, while a different combination of α and gates yielded better results for SNN23. The α-tuning is conducted under the same cross-validation criteria, ensuring consistency across quantum kernels for the SNN features. These findings demonstrate the importance of appropriate hyperparameter tuning in developing quantum-enhanced classifiers for complex datasets. However, hyperparameter tuning can be sensitive and experimentally challenging for other complex problems, as well as for large data sizes, and requires special attention. We evaluate the proposed study's performance on quantum hardware next.
Quantum hardware validation
Here, we present the quantum hardware validation of the proposed framework using IBM's quantum hardware, accessible remotely through IBM Quantum's open plan, as well as a high-fidelity noisy simulation that replicates real hardware properties. These demonstrations establish that the algorithm can be seamlessly implemented on quantum processing units (QPUs) once fault-tolerant quantum machines are available. The validations are demonstrated for the SNN12 classifier using the feature map F1, and the test accuracy is observed. The quantum kernel task has been implemented using the currently available 127-qubit IBM Brisbane QPU (hardware), and the real-time calibration data are adapted for the noisy simulation [55]. Details of the hardware, comprising processor type, version, T1 and T2 times, and readout error rates, are presented in Tables 3 and 4 in the Supplementary file.
Prior to hardware execution, basic error mitigation steps are adopted. The primary strategy involved leveraging Qiskit’s transpile function, which enables noise-aware compilation tailored to the target quantum hardware [56]. The transpile function has circuit, backend, and optimization_level as inputs. The circuit specifies the quantum circuit to be transpiled. The backend provides details about the target quantum processor, including its connectivity, and noise parameters associated with individual qubits. The optimization_level determines the extent and types of compiler optimizations applied during the transpilation process [56, 57]. This process uses real-time calibration values to map the circuit onto physical qubits with the lowest gate and readout errors. Thus, it reduces the impact of gate errors and decoherence, which can lead to a reliable execution on the hardware.
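A minimal sketch of this noise-aware compilation step is shown below; it assumes an IBM Quantum account is configured, and the two-qubit circuit is a placeholder standing in for the kernel circuit used in the study.

```python
from qiskit import QuantumCircuit, transpile
from qiskit_ibm_runtime import QiskitRuntimeService

# Placeholder two-qubit circuit; in the study this is the F1 feature map kernel circuit
circuit = QuantumCircuit(2)
circuit.h([0, 1])
circuit.cx(0, 1)
circuit.measure_all()

# Requires stored IBM Quantum credentials; "ibm_brisbane" is the backend referenced above
service = QiskitRuntimeService()
backend = service.backend("ibm_brisbane")

# Noise-aware compilation: optimization_level=3 lets the transpiler select physical
# qubits and routings with the lowest reported gate and readout errors
transpiled = transpile(circuit, backend=backend, optimization_level=3)
```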
The hardware experiment for SNN12, using a two-qubit circuit, provided a test accuracy of 0.87 in a single trial. It is worth mentioning that the single-trial result with a noiseless simulator showed the same performance as the hardware; however, the average mean cross-validation accuracy for the same classification was 0.85 using the noiseless simulator. In general, a quantum hardware result is expected to be inferior to a noiseless simulation due to the presence of hardware noise. Nonetheless, the hardware result supports the classification performance for a single-trial execution. The results suggest that small circuit implementations can achieve near-perfect outcomes, as they rely primarily on single-qubit and two-qubit gate fidelities. To further validate the hardware performance of the classifier, noisy simulations are employed for multi-trial evaluations.
Current quantum hardware is inherently susceptible to noise arising from gate errors and decoherence. While noiseless simulations can estimate ideal performance, they often fail to reflect real hardware behaviour. To bridge this gap, we performed further analysis using a high-fidelity noisy AerSimulator, which offers multiple simulation methods and configurable options to emulate hardware results [56]. In its default mode, the AerSimulator reproduces the execution characteristics of real quantum hardware, enabling realistic performance assessment and quantification of the performance gap due to noise and decoherence [56, 58]. This noisy simulation approach is widely adopted to approximate hardware behavior by incorporating device-specific properties [58–60].
To achieve the highest possible fidelity to real hardware, we constructed a high-fidelity noise model using real-time calibration data of the IBM-Brisbane hardware. This approach ensures that the simulated environment deviates minimally from the noise characteristics of the hardware, such as gate errors [56, 58].
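A minimal sketch of this noisy-simulation setup is given below; it assumes access to the ibm_brisbane calibration data through an IBM Quantum account, and again uses a placeholder two-qubit circuit rather than the study's kernel circuit.

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator
from qiskit_ibm_runtime import QiskitRuntimeService

# Build a noisy simulator from the real backend's current calibration data, so that
# gate errors, readout errors, and T1/T2 times mirror the ibm_brisbane device
service = QiskitRuntimeService()
backend = service.backend("ibm_brisbane")
noisy_sim = AerSimulator.from_backend(backend)

# Placeholder two-qubit circuit standing in for the transpiled kernel circuit
circuit = QuantumCircuit(2)
circuit.h([0, 1])
circuit.cx(0, 1)
circuit.measure_all()

transpiled = transpile(circuit, backend=noisy_sim, optimization_level=3)
counts = noisy_sim.run(transpiled, shots=4096).result().get_counts()
```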
The hardware validation of the classifier was assessed through high-fidelity noisy simulations, analysing its performance across an increasing number of trials. The outcomes, averaged over multiple runs, are presented in Fig. 9, which demonstrates consistent classifier behaviour under realistic noise conditions. Variations across trials indicate that, for the specific low-depth circuit investigated, the noise from the simulated hardware remains minor. For a single trial, the classifier performance remains similar across the hardware, the noisy simulator, and the noiseless simulation. Furthermore, Fig. 9 illustrates the effect of noise as the number of trials increases, highlighting its significance in current hardware validation. Consequently, reliable and noise-resilient performance can be achieved by averaging results over multiple runs with shallow circuit depths and limited data sizes, as shown in Fig. 9. The effect of simulated hardware noise was observed; more broadly, such an analysis reflects an important step in quantum hardware implementation.
[See PDF for image]
Figure 9
Quantum noisy simulation results mimicking real-time hardware properties of the SNN12 classifier with feature map F1, averaged over multiple trials. The simulated accuracy (± standard deviation) demonstrates strong resilience, showing only slight degradation in the noisy environment compared to the noiseless performance
Furthermore, the classifier exhibited only minor deviations across several trials; this stability is attributable to the small circuit depth and data size. Similar findings have been reported in recent studies, where low-depth quantum kernels maintained robust performance under noise and were shown to be effective for exploring generalization and error behaviour in quantum supervised learning tasks with classical data inputs [61, 62]. These observations collectively support the use of low-depth quantum circuits with small sample sizes and reinforce the noise-resilient nature of the proposed implementation on quantum hardware.
Discussion and limitations
The proposed hybrid SNN-QC framework provided evidence of quantum analytical advantages in multiple EEG wrist-movement classification tasks. The framework utilises a small, single-subject EEG dataset as a case study. This smaller dataset is chosen to serve as a proof of concept for the novel SNN-QC framework, allowing us to validate its feasibility and performance on spatio-temporal data. Furthermore, the use of SNN-based feature extraction with quantum kernels introduces a novel hybrid paradigm that extends beyond empirical performance gains.
This work serves as a foundational step, demonstrating the feasibility and synergy of combining neuromorphic and quantum computing for real-world applications and opening a new branch of QML applications. We believe the proposed study potentially offers long-term benefits in scalability and energy efficiency, particularly as quantum hardware matures, paving the way for more advanced neuromorphic-quantum systems offering: (1) enhanced classification performance through the proposed feature map F1, in multiple scenarios; (2) a framework incorporating spatio-temporal interactions using the NeuCube, an SNN model that provided superior features compared to statistical CSP components; (3) explainability of the spatio-temporal data provided by the NeuCube SNN, offering a better understanding of the data in contrast to black-box approaches; (4) potential for minimizing energy demand by using neuromorphic computing platforms over classical ones.
The SNN-QC, when implemented with the novel feature map F1, demonstrated consistently better performance across diverse tasks compared to existing feature maps and conventional ML models. While F1 did not outperform every baseline model in all tasks, it showed a robust and generalizable advantage across multiple tasks with the limited data size used in this study. As shown in Fig. 6(d) and Fig. 7(d), F1 provided superior performance compared to both the quantum and the classical models.
The hyperparameter tuning played an important role in enhancing classification performances across distinct quantum kernels. To ensure a fair comparison, we tuned identical hyperparameter settings across all feature maps (F1–F6). Under this consistent setup, hyperparameter-tuned F1 demonstrated better overall performance across different tasks, implying F1’s generalization capability to some extent when exposed to different tasks, as shown in Fig. 6(d) and Fig. 7(d). Additionally, the hyperparameter sensitivity of F1 has been discussed in a related study elsewhere [48], supporting the generalization to other datasets.
The performance of the quantum classifiers also varied depending on the underlying data patterns in the different case studies. Both the SNN-based and the CSP inputs exhibit different data patterns, which makes them challenging to classify accurately. A feature map possesses the ability to mimic certain data patterns, subsequently enhancing model performance [8, 48]. A quantum kernel's good performance implies that the feature-map-led kernels could have captured the data pattern better than the classical models [8, 48]. Feature map F1 led to a superior classifier for both SNN-based and CSP inputs, offering a clear quantum advantage as well as the ability to generalize, to some extent, when exposed to different data patterns.
The SNN-QC framework also offers interpretability and visualization mechanisms, which are particularly useful for understanding network connectivity and brain regions, as exemplified by the EEG case study data in Fig. 10(a). Fig. 10(b) illustrates the visualization of the dynamic interaction between the 14 variables over time, leading to improved SNNCube characteristics [63]. The challenge in classifying spatio-temporal data lies not only in achieving high classification accuracy, but also in capturing informative and explainable spatio-temporal patterns that could lead to higher classification accuracy and a better understanding of the data. In previous studies [64–66], connections learned in the SNNCube after training on EEG data are represented as spatio-temporal fuzzy rules. The SNNCube offers notable benefits by preserving spatio-temporal information through a 3D brain-inspired reservoir, which is well-suited to capturing relevant details during the processing of EEG data [38, 39]. Next, the limitations and scope of the present work are discussed in detail.
[See PDF for image]
Figure 10
(a) The connections learned in the SNNCube on the EEG case study data; (b) a dynamic interaction network extracted from the SNNCube, explaining the interaction of the input variables over time [63]
I. Dataset and Generalization: The functionality of the proposed framework is illustrated on a small, yet real-world, spatio-temporal EEG dataset. This study utilizes a small, single-subject EEG dataset as a case study, chosen to serve as a proof of concept for the novel SNN-QC framework and allowing us to validate its feasibility and performance on spatio-temporal data. While the results demonstrate strong merits through the performance analyses, we acknowledge that the generalizability of these findings is constrained by the size and scope of the dataset. Therefore, this work is not intended to provide broad, generalized conclusions but rather to establish a robust foundation as a proof of concept for future research. A clear direction for future work will involve pursuing broader validation on larger and more diverse datasets, which will not only assess the framework's scalability and generalizability but also help unlock the potential of quantum encoding functions and feature optimizations for real-world applications.
II. Quantum Feature Map: We have presented a novel feature map (F1) which has provided analytical advantage in many different tasks. The design of the proposed feature map (F1), like that of many existing quantum feature maps [8], remains primarily heuristic. This presents a limitation for the generalizability of data embedding in quantum information theory, as developing an effective quantum feature map is both non-trivial and challenging. In practice, this often involves generating multiple candidate maps and empirically evaluating them through extensive performance analyses [5, 8]. In this work, F1 was developed and selected based on its strong empirical performance across multiple classification tasks. However, we acknowledge that a rigorous theoretical grounding and a formal analysis of its generalization properties remain open research directions.
III. Circuit Depth and Scalability: SNN-QC used a small-depth circuit as the proof of concept, aligning with the foundational studies on the implementation of quantum kernels [5, 8]. The current study employs low-depth circuits; however, the framework is modular and can be extended to deeper circuits and larger qubit implementations in the future. Quantum kernel methods demonstrate flexibility and generalizability to circuits with greater depth [6]. However, increasing circuit depth may result in exponentially concentrated measurement outcomes [6, 61, 62]. Moreover, circuit depth and generalization under the anticipated improvements in quantum hardware are important areas of ongoing research.
IV. Hardware Noise, Error Mitigation, and Reproducibility: The proposed framework is validated on both quantum hardware and high-fidelity noisy simulations. Basic error mitigation strategies, such as noise-aware transpilation and device-specific calibration, are used to ensure reliable execution on the hardware. Trial iterations demonstrated stability and merit in the implementation, even under noisy conditions. Confidence in the results is further supported by the close agreement between the single-trial hardware execution and the multi-trial noisy simulations. This consistency highlights the reproducibility of the proposed approach within the constraints of limited data size and circuit depth. Therefore, a more robust validation incorporating larger qubit counts, deeper circuits, and comprehensive error analyses remains an essential direction for future research to establish generalizability under realistic noise and scaling conditions for QML applications.
V. Architecture Novelty and Broader Implication: The proposed hybrid SNN–QC architecture uses an advanced SNN approach to extract spatio-temporal features that are subsequently classified via a quantum kernel. The architecture leverages SNNs not only as the third generation of neural networks, advancing classical models, but also as unique models for capturing informative spatio-temporal patterns. This combination of SNN learning of spatio-temporal patterns with quantum kernel learning differs from prior hybrid classical–quantum pipelines [28, 32] and constitutes a novel hybrid approach for spatio-temporal data. Moreover, we believe the proposed study offers long-term benefits in scalability and energy efficiency, particularly as quantum hardware matures, paving the way for more advanced neuromorphic-quantum systems.
VI. Comparative Analysis: To the best of our knowledge, this study presents the first hybrid neuromorphic-quantum model and its application for spatio-temporal data classification, exemplified by EEG data. Our primary goal was to establish a robust proof of concept, supported by detailed validation within the constraints of the current setup for multiple classification tasks, validated through multiple quantum feature maps and baseline classical classifiers. While direct comparisons with other hybrid classical-quantum approaches are beyond the scope of the present study, we recognize their value and are keen to incorporate such comparisons in future work to further benchmark and generalize the SNN-QC framework.
Conclusion and future direction
The study introduced a novel hybrid computational framework, SNN-QC, which combines spiking neural networks with quantum kernels for the first time to develop quantum-enhanced ML models for spatio-temporal data. The framework was illustrated as a proof of concept on a small single-subject EEG dataset, demonstrating its feasibility and performance. The SNN-QC results were demonstrated using six different feature maps, including the novel nonlinear feature map (F1). For validation, the performance of F1 was compared with five state-of-the-art feature maps (F2–F6), as well as with Support Vector Machines (linear and RBF), Naïve Bayes, Linear Discriminant Analysis, Decision Tree, Random Forest, and MLP models. Classification accuracy and MCC scores were calculated using 5-fold cross-validation with data shuffling and randomization.
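Purely as an illustration of this evaluation protocol (5-fold cross-validation with shuffling, reporting accuracy and MCC), the sketch below assumes a precomputed quantum kernel Gram matrix K and label vector y; the function and variable names are hypothetical and not drawn from the study's code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, matthews_corrcoef

def evaluate_precomputed_kernel(K, y, n_splits=5, seed=0):
    """5-fold CV over a precomputed (quantum) kernel matrix, averaging accuracy and MCC."""
    y = np.asarray(y)
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    accs, mccs = [], []
    for train_idx, test_idx in cv.split(K, y):
        clf = SVC(kernel="precomputed")
        # Train on the train-train block of the Gram matrix
        clf.fit(K[np.ix_(train_idx, train_idx)], y[train_idx])
        # Predict using the test-train block
        y_pred = clf.predict(K[np.ix_(test_idx, train_idx)])
        accs.append(accuracy_score(y[test_idx], y_pred))
        mccs.append(matthews_corrcoef(y[test_idx], y_pred))
    return float(np.mean(accs)), float(np.mean(mccs))
```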
The results demonstrated that using an SNN to extract spatio-temporal frequency features from the EEG data, prior to applying a quantum classifier, enhanced classification performance compared with not using an SNN (i.e., using CSP features). Furthermore, F1 outperformed the current state-of-the-art feature maps (F2–F6) and various ML baseline models in most of the tasks studied. The detailed analysis demonstrated the robustness of the experimental results, based on the statistical metrics derived from the models. Additionally, a hyperparameter analysis was included; based on the improved kernel performance, tuning the hyperparameter (α) is recommended when developing quantum-enhanced kernels for practical applications. Furthermore, the use of the SNNCube as a knowledge discovery tool was emphasized, given its ability to extract learned features from spatio-temporal input variables, to serve as a spatio-temporal feature extraction tool, and to improve the explainability of spatio-temporal data.
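As a hedged illustration of the recommended hyperparameter tuning, the sketch below treats α as a data-scaling factor of the embedding (x → αx) and selects it by cross-validated MCC of a precomputed-kernel SVM; the exact role of α in F1 is defined in the main text, and the names and candidate values here are illustrative only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.metrics import make_scorer, matthews_corrcoef

def gram_matrix(X, alpha, kernel_fn):
    """Kernel Gram matrix after scaling the inputs by alpha (x -> alpha * x)."""
    Xs = alpha * np.asarray(X, dtype=float)
    n = len(Xs)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = kernel_fn(Xs[i], Xs[j])
    return K

def select_alpha(X, y, kernel_fn, alphas=(0.5, 1.0, 2.0, np.pi)):
    """Pick the alpha maximising mean cross-validated MCC of a precomputed-kernel SVM."""
    mcc_scorer = make_scorer(matthews_corrcoef)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = {}
    for a in alphas:
        K = gram_matrix(X, a, kernel_fn)
        scores[a] = cross_val_score(
            SVC(kernel="precomputed"), K, np.asarray(y), cv=cv, scoring=mcc_scorer
        ).mean()
    best = max(scores, key=scores.get)
    return best, scores
```

Here kernel_fn can be any pairwise kernel evaluator, for example the quantum_kernel function from the earlier PennyLane sketch.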
The quantum hardware validations were performed to demonstrate the feasibility of the proposed algorithm on real quantum devices under the available hardware noise constraints. The hardware validations of the SNN12 classifier with F1 were conducted with a basic error mitigation strategy in two settings: a single trial on a real QPU and multiple trials using a high-fidelity noisy simulation. For the single trial, the hardware, the noisy simulator, and a noiseless simulator exhibited similar performance, likely owing to the small circuit depth and data size. Additionally, results from repeated noisy simulations suggest that the hardware implementations are noise-resilient, with only slight variations across trials. A comprehensive hardware error analysis nevertheless remains an essential future direction to ensure the generalization, reproducibility, and robustness of QML models for real-world applications.
Although demonstrated in an EEG case study, this approach is designed for broader applications. The framework offers a unique direction by combining neuro-inspired information processing with quantum information processing to develop quantum-enhanced models for practical applications. The SNN-QC used an advanced neuromorphic model to extract spike frequency state vectors, and these vectors, used as features, played a crucial role in the quantum classifiers. The SNN-QC demonstrated improved performance compared with models that use statistical features rather than features extracted from an SNN, as an SNN accounts for the temporal interaction between the spatio-temporal input variables.
Overall, a clear direction for future work is broader validation across larger and more diverse spatio-temporal datasets to fully assess the framework's scalability and generalizability. Such datasets include brain EEG/MEG/fMRI data and multi-sensory streaming data for pollution or seismic activity prediction; the framework can further be extended to quantum computation for bio-inspired real-world applications, including big data analytics, cognitive and health sciences, finance, and climate modelling.
Identifying suitable feature maps, or data encoding processes in general, remains a challenging task for achieving model generalization, and therefore requires in-depth investigation to fully realize quantum advantages. Finally, we believe that the proposed SNN-QC framework, being the first in this field, together with its positive case study results, will inspire new research into integrating neuromorphic and quantum computation, in both software and hardware developments.
Acknowledgements
The authors acknowledge the partial support provided by Ulster University Vice-Chancellor Research Scholarship for RJ. GP and SB acknowledge the partial support from the UKRI Strength in Places Project (81801): Smart Nano-Manufacturing Corridor. NK acknowledges the George Moor Professor Chair position (01.03.2020 - 01.03.2024).
Authors’ information
[Additional information] All correspondence can be directed to the corresponding author.
Author contributions
R.J. developed the framework and data processing, and designed and executed the experiments. N.K., G.P., and S.B. supervised the work, providing insights into data preparation and results analysis. R.J. prepared the original manuscript, including the generation of figures and tables. All authors reviewed and edited the manuscript.
Funding information
Not applicable.
Data availability
The NeuCube software environment and the EEG case study dataset are kindly made available by Auckland University of Technology at: https://kedri.aut.ac.nz/neucube.
Declarations
Competing interests
The authors declare no competing interests.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Kasabov, NK. Time-space, spiking neural networks and brain-inspired artificial intelligence; 2019; Berlin, Springer: [DOI: https://dx.doi.org/10.1007/978-3-662-57715-8]
2. Kasabov, NK. Neucube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data. Neural Netw; 2014; 52, pp. 62-76. [DOI: https://dx.doi.org/10.1016/j.neunet.2014.01.006]
3. Koprinkova-Hristova, P; Penkov, D; Nedelcheva, S; Yordanov, S; Kasabov, N. On-line learning, classification and interpretation of brain signals using 3d snn and esn. 2023 international joint conference on neural networks (IJCNN); 2023; New York, IEEE Press: pp. 1-6.
4. Hassan, IYA; Kasabov, NK. Neuden: a framework for the integration of neuromorphic evolving spiking neural networks with dynamic evolving neuro-fuzzy systems for predictive and explainable modelling of streaming data. Evolv Syst; 2025; 16,
5. Havlíček, V; Córcoles, AD; Temme, K; Harrow, AW; Kandala, A; Chow, JM; Gambetta, JM. Supervised learning with quantum-enhanced feature spaces. Nature; 2019; 567,
6. Vasques, X; Paik, H; Cif, L. Application of quantum machine learning using quantum kernel algorithms on multiclass neuron m-type classification. Sci Rep; 2023; 13,
7. Tomono, T; Natsubori, S. Performance of quantum kernel on initial learning process. EPJ Quantum Technol; 2022; 9,
8. Suzuki, Y; Yano, H; Gao, Q; Uno, S; Tanaka, T; Akiyama, M; Yamamoto, N. Analysis and synthesis of feature map for kernel-based quantum classifier. Quantum Mach Intell; 2020; 2, pp. 1-9. [DOI: https://dx.doi.org/10.1007/s42484-020-00020-y]
9. Prasad, G; Herman, P; Coyle, D; McDonough, S; Crosbie, J. Applying a brain-computer interface to support motor imagery practice in people with stroke for upper limb recovery: a feasibility study. J NeuroEng Rehabil; 2010; 7, pp. 1-17. [DOI: https://dx.doi.org/10.1186/1743-0003-7-60]
10. Rathee, D; Chowdhury, A; Meena, YK; Dutta, A; McDonough, S; Prasad, G. Brain–machine interface-driven post-stroke upper-limb functional recovery correlates with beta-band mediated cortical networks. IEEE Trans Neural Syst Rehabil Eng; 2019; 27,
11. Gorur, K; Olmez, E; Ozer, Z; Cetin, O. EEG-driven biometric authentication for investigation of Fourier synchrosqueezed transform-ICA robust framework. Arab J Sci Eng; 2023; 48,
12. Ozturk, H; Eraslan, B; Gorur, K. Investigation of t-SNE and dynamic time warping within a unified framework for resting-state and minor analysis visual task-related EEG alpha frequency in biometric authentication: A detailed analysis. Digit Signal Process; 2025; 160, 105042. [DOI: https://dx.doi.org/10.1016/j.dsp.2025.105042]
13. Gandhi, V; Prasad, G; Coyle, DH; Behera, L; McGinnity, TM. Quantum neural network-based eeg filtering for a brain-computer interface. IEEE Trans Neural Netw Learn Syst; 2014; 25,
14. Nielsen, MA; Chuang, IL. Quantum computation and quantum information; 2002; Cambridge, Cambridge University Press:
15. Deutsch, D; Richard, J. Rapid solution of problems by quantum computation. Proc R Soc Lond A; 1992; 439, pp. 553-558. [DOI: https://dx.doi.org/10.1098/rspa.1992.0167]
16. Deutsch, D. Quantum theory, the church–Turing principle and the universal quantum computer. Proc R Soc Lond A; 1985; 400, pp. 97-117. [DOI: https://dx.doi.org/10.1098/rspa.1985.0070]
17. Shor, PW. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J Comput; 1997; 26,
18. Biamonte, J; Wittek, P; Pancotti, N; Rebentrost, P; Wiebe, N; Lloyd, S. Quantum machine learning. Nature; 2017; 549, pp. 195-202. [DOI: https://dx.doi.org/10.1038/nature23474]
19. Schuld, SIM; Petruccione, F. An introduction to quantum machine learning. Contemp Phys; 2015; 56,
20. Peral-García, D; Cruz-Benito, J; García-Peñalvo, FJ. Systematic literature review: quantum machine learning and its applications. Comput Sci Rev; 2024; 51, 100619. [DOI: https://dx.doi.org/10.1016/j.cosrev.2024.100619]
21. Blekos, K; Brand, D; Ceschini, A; Chou, C-H; Li, R-H; Pandya, K; Summer, A. A review on quantum approximate optimization algorithm and its variants. Phys Rep; 2024; 1068, pp. 1-66. [DOI: https://dx.doi.org/10.1016/j.physrep.2024.03.002]
22. Zhang, Y; Ni, Q. Recent advances in quantum machine learning. Quantum Eng; 2020; 2,
23. Liu, Y; Arunachalam, S; Temme, K. A rigorous and robust quantum speed-up in supervised machine learning. Nat Phys; 2021; 17,
24. Moradi, S; Brandner, C; Spielvogel, C; Krajnc, D; Hillmich, S; Wille, R; Drexler, W; Papp, L. Clinical data classification with noisy intermediate scale quantum computers. Sci Rep; 2022; 12,
25. Preskill, J. Quantum computing in the nisq era and beyond. Quantum; 2018; 2, 79. [DOI: https://dx.doi.org/10.22331/q-2018-08-06-79]
26. Gebhart, V; Santagati, R; Gentile, AA; Gauger, EM; Craig, D; Ares, N; Banchi, L; Marquardt, F; Pezzè, L; Bonato, C. Learning quantum systems. Nat Rev Phys; 2023; 5,
27. Peruzzo, A; McClean, J; Shadbolt, P; Yung, M-H; Zhou, X-Q; Love, PJ; Aspuru-Guzik, A; O’brien, JL. A variational eigenvalue solver on a photonic quantum processor. Nat Commun; 2014; 5,
28. Chen, SY-C; Wei, T-C; Zhang, C; Yu, H; Yoo, S. Quantum convolutional neural networks for high energy physics data analysis. Phys Rev Res; 2022; 4,
29. Mitarai, K; Negoro, M; Kitagawa, M; Fujii, K. Quantum circuit learning. Phys Rev A; 2018; 98,
30. Schuld, M; Bocharov, A; Svore, KM; Wiebe, N. Circuit-centric quantum classifiers. Phys Rev A; 2020; 101,
31. Mari, A; Bromley, TR; Izaac, J; Schuld, M; Killoran, N. Transfer learning in hybrid classical-quantum neural networks. Quantum; 2020; 4, 340. [DOI: https://dx.doi.org/10.22331/q-2020-10-09-340]
32. Chen, SY-C; Huang, C-M; Hsing, C-W; Kao, Y-J. An end-to-end trainable hybrid classical-quantum classifier. Mach Learn: Sci Technol; 2021; 2,
33. Koessler, L; Maillard, L; Benhadid, A; Vignal, JP; Felblinger, J; Vespignani, H; Braun, M. Automated cortical projection of eeg sensors: anatomical correlation via the international 10–10 system. NeuroImage; 2009; 46,
34. Taherkhani, A; Belatreche, A; Li, Y; Cosma, G; Maguire, LP; McGinnity, TM. A review of learning in biologically plausible spiking neural networks. Neural Netw; 2020; 122, pp. 253-272. [DOI: https://dx.doi.org/10.1016/j.neunet.2019.09.036]
35. Auge, D; Hille, J; Mueller, E; Knoll, A. A survey of encoding techniques for signal processing in spiking neural networks. Neural Process Lett; 2021; 53,
36. Petro, B; Kasabov, N; Kiss, RM. Selection and optimization of temporal spike encoding methods for spiking neural networks. IEEE Trans Neural Netw Learn Syst; 2019; 31,
37. Tu, E; Kasabov, N; Yang, J. Mapping temporal variables into the neucube for improved pattern recognition, predictive modeling, and understanding of stream data. IEEE Trans Neural Netw Learn Syst; 2016; 28,
38. Saeedinia, SA; Jahed-Motlagh, MR; Tafakhori, A; Kasabov, NK. Diagnostic biomarker discovery from brain eeg data using lstm, reservoir-snn, and neucube methods in a pilot study comparing epilepsy and migraine. Sci Rep; 2024; 14,
39. Kasabov, N; Scott, NM; Tu, E; Marks, S; Sengupta, N; Capecci, E; Othman, M; Doborjeh, MG; Murli, N; Hartono, R et al. Evolving spatio-temporal data machines based on the neucube neuromorphic framework: design methodology and selected applications. Neural Netw; 2016; 78, pp. 1-14. [DOI: https://dx.doi.org/10.1016/j.neunet.2015.09.011]
40. Cortes, C; Vapnik, V. Support-vector networks. Mach Learn; 1995; 20, pp. 273-297.
41. Lloyd S, Schuld M, Ijaz A, Izaac J, Killoran N. Quantum embeddings for machine learning. 2020. arXiv preprint. arXiv:2001.03622.
42. Schuld M. Supervised quantum machine learning models are kernel methods. 2021. arXiv preprint. arXiv:2101.11020.
43. Schuld, M; Killoran, N. Quantum machine learning in feature Hilbert spaces. Phys Rev Lett; 2019; 122,
44. Park J-E, Quanz B, Wood S, Higgins H, Harishankar R. Practical application improvement to quantum svm: theory to practice. 2020. arXiv preprint. arXiv:2012.07725.
45. Mengoni, R; Di Pierro, A. Kernel methods in quantum machine learning. Quantum Mach Intell; 2019; 1,
46. Dutta, SS; Sandeep, S; Nandhini, D; Amutha, S. Hybrid quantum neural networks: harnessing dressed quantum circuits for enhanced tsunami prediction via earthquake data fusion. EPJ Quantum Technol; 2025; 12,
47. Chicca, E; Whatley, AM; Lichtsteiner, P; Dante, V; Delbruck, T; Del Giudice, P; Douglas, RJ; Indiveri, G. A multichip pulse-based neuromorphic infrastructure and its application to a model of orientation selectivity. IEEE Trans Circuits Syst I, Regul Pap; 2007; 54,
48. Jha, RK; Kasabov, N; Coyle, D; Bhattacharyya, S; Prasad, G. Performance analysis of quantum-enhanced kernel classifiers based on feature maps: A case study on EEG-BCI data. International conference on neural information processing; 2024; Berlin, Springer: pp. 371-383.
49. Emotiv: Emotiv Website. 2013. http://www.emotiv.com.
50. NeuCube: NeuCube software. 2023. https://kedri.aut.ac.nz/neucube.
51. Kursa, MB; Jankowski, A; Rudnicki, WR. Boruta–a system for feature selection. Fundam Inform; 2010; 101,
52. Ramoser, H; Muller-Gerking, J; Pfurtscheller, G. Optimal spatial filtering of single trial eeg during imagined hand movement. IEEE Trans Rehabil Eng; 2000; 8,
53. Bergholm V, Izaac J, Schuld M, Gogolin C, Ahmed S, Ajith V, Alam MS, Alonso-Linaje G, AkashNarayanan B, Asadi A, et al. Pennylane: Automatic differentiation of hybrid quantum-classical computations. 2018. arXiv preprint. arXiv:1811.04968.
54. Chicco, D; Jurman, G. The advantages of the matthews correlation coefficient (mcc) over f1 score and accuracy in binary classification evaluation. BMC Genom; 2020; 21, pp. 1-13. [DOI: https://dx.doi.org/10.1186/s12864-019-6413-7]
55. IBM Quantum. IBM Quantum Platform; 2024; https://quantum.ibm.com/
56. IBM Quantum. Quantum documentation team: AerSimulator; 2025; https://qiskit.github.io/qiskit-aer/stubs/qiskit_aer.AerSimulator.html
57. Pivoluska, M; Plesch, M. Implementation of quantum compression on IBM quantum computers. Sci Rep; 2022; 12,
58. Russo, V; Mari, A; Shammah, N; LaRose, R; Zeng, WJ. Testing platform-independent quantum error mitigation on noisy quantum computers. IEEE Trans Quantum Eng; 2023; 4, pp. 1-18. [DOI: https://dx.doi.org/10.1109/TQE.2023.3305232]
59. Incudini, M; Grossi, M; Ceschini, A; Mandarino, A; Panella, M; Vallecorsa, S; Windridge, D. Resource saving via ensemble techniques for quantum neural networks. Quantum Mach Intell; 2023; 5,
60. Bravo-Montes, JA; Bastante, M; Botella, G; del Barrio, A; García-Herrero, F. A methodology to select and adjust quantum noise models through emulators: benchmarking against real backends. EPJ Quantum Technol; 2024; 11,
61. Thanasilp, S; Wang, S; Cerezo, M; Holmes, Z. Exponential concentration in quantum kernel methods. Nat Commun; 2024; 15,
62. Larocca M, Thanasilp S, Wang S, Sharma K, Biamonte J, Coles PJ, Cincio L, McClean JR, Holmes Z, Cerezo M. Barren plateaus in variational quantum computing. Nat Rev Phys. 2025;1–16.
63. Kasabov, NK. Stam-snn: Spatio-temporal associative memory in brain-inspired spiking neural networks: concepts and perspectives. Recent advances in intelligent engineering: volume dedicated to Imre J. Rudas’ seventy-fifth birthday; 2024; Berlin, Springer: pp. 1-12.
64. Kumarasinghe, K; Kasabov, N; Taylor, D. Brain-inspired spiking neural networks for decoding and understanding muscle activity and kinematics from electroencephalography signals during hand movements. Sci Rep; 2021; 11,
65. Kasabov NK, Tan Y, Doborjeh M, Tu E, Yang J, Goh W, Lee J. Transfer learning of fuzzy spatio-temporal rules in a brain-inspired spiking neural network architecture: a case study on spatio-temporal brain data. IEEE Trans Fuzzy Syst. 2023.
66. Jha RK. From quantum computing to quantum-inspired computation for neuromorphic advancement–a survey. Authorea Preprints. 2023. techrxiv preprint. https://doi.org/10.36227/techrxiv.24053250.v1
© The Author(s) 2025. This work is published under the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).