We consider the problem of computing a sparse binary representation of an image. Given an image and an overcomplete, non-orthonormal basis, we aim to find a sparse binary vector indicating the minimal set of basis vectors that, when added together, best reconstruct the given input. We formulate this problem with an L2 loss on the reconstruction error and an L0 loss on the binary vector enforcing sparsity. First, we solve the sparse representation QUBOs both on a D-Wave quantum annealer with Pegasus chip connectivity and on the Intel Loihi 2 spiking neuromorphic processor using a stochastic Non-equilibrium Boltzmann Machine (NEBM). Second, we evolve the solution quality on the respective machines using Quantum Evolution Monte Carlo with reverse annealing on the quantum annealer and iterated warm starting on Loihi 2. We demonstrate that both quantum annealing and neuromorphic computing are suitable for solving binary sparse coding QUBOs.
Introduction
The computation of a sparse binary reconstruction of an image is of importance whenever a full image cannot be directly observed, in which case it has to be reconstructed from a sample or projection using compressive sensing. Such scenarios occur, amongst others, in radioastronomy, molecular imaging, and image compression1,2.
Mathematically, we are given an overcomplete and non-orthonormal basis of vectors D = {D1, …, Dp}, and we aim to infer a sparse representation of a given image x. Here, an overcomplete set is defined as one that contains more functions than needed for a basis. All basis vectors and the image x are assumed to have the same dimension. The task is to find the minimal set of non-zero activation coefficients that accurately reconstructs x, where a = (a1, …, ap) is a binary vector of length p with ai ∈ {0, 1}. We can express the computation of a sparse binary representation of the image x using the basis D as the minimization of the energy function
E(a) = ∥x − Da∥₂² + λ∥a∥₀,    (1)
where ∥ ⋅ ∥2 is the Euclidean norm and ∥ ⋅ ∥0 denotes the number of nonzero elements. The operation Da denotes the addition of all basis vectors in D that correspond to an entry of 1 in a. The parameter λ > 0 is a Lasso-type penalization parameter controlling the sparseness of the solution3. A large value of λ results in a sparser solution to Eq. (1), while smaller values yield denser solutions. The parameter λ therefore allows one to balance the reconstruction error (the L2 norm) against the number of non-zero activation coefficients (the L0 norm). Since Eq. (1) belongs to the class of 0-1 integer programming problems, finding a sparse representation is NP-hard. The energy function of Eq. (1) is non-convex and typically contains multiple local minima.

We investigate two types of computing models to solve the sparse representation problem given in Eq. (1). The first is a D-Wave quantum annealer with Pegasus chip connectivity, and the second is the Intel Loihi 2 spiking neuromorphic processor.
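As a concrete illustration, the energy of Eq. (1) is cheap to evaluate for any candidate binary vector a; the following minimal Python sketch (with a hypothetical two-element dictionary, not data from our experiments) spells out the two terms:

```python
import numpy as np

def sparse_coding_energy(x, D, a, lam):
    """E(a) = ||x - Da||_2^2 + lam * ||a||_0, as in Eq. (1), for binary a.

    D is an n x p matrix whose columns are the basis vectors D1..Dp; the
    binary vector a selects which columns are summed into the reconstruction.
    """
    residual = x - D @ a                 # reconstruction error x - Da
    l2 = float(residual @ residual)      # squared Euclidean norm
    l0 = int(np.count_nonzero(a))        # number of active coefficients
    return l2 + lam * l0

# Toy example (values are illustrative, not taken from the experiments):
x = np.array([1.0, 0.0])
D = np.array([[1.0, 0.5],
              [0.0, 0.5]])              # two basis vectors as columns
print(sparse_coding_energy(x, D, np.array([1, 0]), lam=0.1))  # -> 0.1
```

Selecting only the first basis vector reconstructs x exactly, so only the λ∥a∥₀ term contributes.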
This article is a significant extension of ref. 14, in which the unsupervised dictionary learning approach to generate sparse coding QUBO models was introduced for the purpose of sampling those problems on Loihi 1. Among other contributions, this article investigates post-hoc normalization techniques to speed up computations on Loihi 1, optimal parameter selection, reconstruction errors, and unsupervised dictionary learning for the Loihi 1 approaches and their classical counterparts. The article is structured as follows. We introduce the quantum annealing and neuromorphic computing technology we investigate in Section 4. Experimental results are presented in Section 2. The article concludes with a discussion in Section 3. Data generated by this study is publicly available15.
Literature review
The goal of any sparse coding method is to accurately recover an unknown sparse vector from a few noisy linear measurements. Unfortunately, this estimation problem is NP-hard in general (when formulated as 0-1 integer optimization), and it is therefore usually approached with an approximation method such as (C)LASSO (Constrained Least Absolute Shrinkage and Selection Operator) or orthogonal matching pursuit (OMP)16, in which the penalty term in Eq. (1) is replaced by an L1 (∥a∥₁) constraint. Because the coefficients are then allowed to be any positive real number, the problem becomes convex and a global minimum can be found, trading accuracy for reduced computational complexity.
A comparison of quantum computing (using a D-Wave quantum annealer) and neuromorphic computing (using the Intel Loihi spiking processor) has been investigated in refs. 17,18. The binary solutions from the D-Wave quantum annealer and the rate-coded solutions to the L1 problem from Loihi 1 (motivated by ref. 19) were examined with respect to various metrics (power consumption, reconstruction and classification accuracy, etc.); both the quantum annealing and neuromorphic processors were found to be reasonably effective at solving their respective underlying problems, although exact state-of-the-art classical algorithms outperform both. This line of research is continued in ref. 20, which compares sparse coding solutions generated classically by a greedy orthogonal matching pursuit (OMP) algorithm with spike-based solutions obtained with the Loihi 1 neuromorphic processor.
A quantum-inspired algorithm for sparse coding is presented in refs. 21,22, which formulate a general sparse coding problem as a quadratic unconstrained binary optimization (QUBO) problem suitable for approximation on current hardware such as D-Wave quantum annealers. The QUBO formulations of refs. 21,22 were shown to be competitive with the standard (C)LASSO and OMP methods. Similar research on regularized linear regression for sparse signal reconstruction is presented in ref. 23, which also derives a QUBO representation and compares solutions obtained with the D-Wave Advantage System 4.1 to OMP and conventional (C)LASSO methods; the reported recovery success rates of the quantum annealer are comparable to the conventional methods.
Two related problems, binary compressive sensing and binary compressive sensing with matrix uncertainty, are investigated in ref. 24; this study presents an Ising model formulation that has a ground state that is a sparse solution for the binary compressive sensing problem, though no experimental results on D-Wave devices are presented.
Results
Figure 1 shows the reference fMNIST image to be reconstructed. Figure 2 shows the combined best image reconstruction of Fig. 1 from sampling the 16 QUBOs using the 4 different methods. We observe that Loihi 2 provides a better first solution and improves slightly with warm starting, whereas standard forward annealing is initially very poor but achieves better average final solutions when reverse-annealing QEMC is used.
[See PDF for image]
Fig. 1
Reference fMNIST27 image that will be reconstructed using binary sparse coding.
[See PDF for image]
Fig. 2
Binary sparse coding image reconstructions of the original image shown in Fig. 1. Top left: The best (lowest energy) reconstruction found using D-Wave with standard forward annealing (with no parallel QA and an annealing time of 20 microseconds).
Bottom left: reverse quantum annealing chain of Monte Carlo-like iterations (QEMC). Top right: Single execution of neuromorphic computing on Loihi 2. Bottom right: Iterated warm-starting neuromorphic computing with Loihi 2. Combined, the bottom row of plots shows that the two iterative approaches yield better solution quality than the non-iterative methods shown in the top row. All experimentally computed reconstructions correspond to the best mean energy and best mean sparsity across the 16 QUBO models (for the best parameter combination found for each device or technique).
Figure 3 presents the iterated warm-started learning on Loihi 2 for all 16 of the 64-variable QUBO models. Interestingly, this process of warm starting the simulation using the state found during the previous computation seems to converge fairly quickly to a local minimum for most of the QUBO problems. In these plots, each iteration shows the minimum energy obtained from a set of 300 samples gathered during the 6000 time step simulation (at intervals of 20 time steps). Notably, 4 of the plots show high variability of the learning curve for those QUBOs, whereas for most of the other QUBOs there is very little variability. Figure 3 also plots the minimum energies computed using CPLEX. These results are the first that we are aware of for iterated warm starting on a spiking neuromorphic processor.
[See PDF for image]
Fig. 3
Iterated warm starting results on Loihi 2 for 100 iterations for each of the 16 QUBO problems (each QUBO contains 64 variables).
Each panel shows the results for one of the 16 QUBO problems. The minimum energy sample from the converged simulation is plotted at each iteration step.
Figure 4 shows the QEMC, or iterated reverse annealing, progression when sampling all of the 16 sparse coding QUBOs using 100 iterations and varying anneal fractions at which the pause occurred. At each QEMC step, the minimum energy (i.e., the objective function evaluation) sample is plotted. Figure 4 shows that there are some anneal fractions (s) that produced the desired monotonic decrease of minimum computed energy at each QEMC step, whereas other anneal fractions did not consistently improve on solution quality. Specifically, the anneal fraction of s = 0.44 clearly introduced too much of the transverse field Hamiltonian into each simulation, resulting in a loss of information on the best previous obtained solution. On the other hand, the choice s = 0.6 did not utilize enough of the quantum fluctuations provided by the transverse field Hamiltonian to search for lower energy states, although the improvement was always monotonic. The anneal fractions of s = 0.5 and 0.55 showed the best tradeoff in our experiments, allowing for reasonably good exploration of the problem search space. Figure 4 shows that the RA QEMC method was able to find the optimal solution for some, but not all of the QUBO problems. This is examined in more detail in Table 1. Figure 4 plots the minimum energy computed by simulated annealing as black horizontal dashed lines, and Table 1 shows that those solutions are also optimal as verified by CPLEX. When executing the QEMC simulations of Fig. 4, the initial state was chosen to be the best sample found from an initial standard linear schedule forward anneal; that initialization is then repeated with independent anneals for each run of QEMC, meaning that the initial states all have slightly different objective values. If there were multiple solutions with the same objective function cost, one was chosen arbitrarily as the initial vector.
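The QEMC control loop described above can be sketched in a few lines. The sketch below substitutes a classical bit-flip sampler for the reverse anneal so that it is runnable anywhere; on hardware, the inner sampling step would instead be a D-Wave reverse anneal seeded with the current best state (shown in the docstring), with the pause fraction s playing the exploration role discussed above.

```python
import random

def qemc(qubo, n, n_iters=100, samples_per_iter=1000, seed=0):
    """Skeleton of Quantum Evolution Monte Carlo: each iteration re-anneals
    around the best state found so far and keeps the minimum-energy sample,
    so the energy trace is monotonically non-increasing by construction.

    `sample_around` is a classical bit-flip stand-in for one reverse anneal.
    On hardware, the inner loop would instead be a single call such as
      DWaveSampler().sample_qubo(Q, num_reads=1000, initial_state=best,
          reinitialize_state=True,
          anneal_schedule=[[0, 1], [t1, s], [t2, s], [100, 1]])
    with the pause fraction s (e.g., 0.5) controlling how much of the
    initial state is retained.
    """
    rng = random.Random(seed)

    def energy(state):  # QUBO given as {(i, j): weight} with i <= j
        return sum(w * state[i] * state[j] for (i, j), w in qubo.items())

    def sample_around(state):  # flip a few random bits near the seed state
        new = list(state)
        for i in rng.sample(range(n), k=max(1, n // 8)):
            new[i] ^= 1
        return new

    best = [rng.randint(0, 1) for _ in range(n)]  # forward-anneal stand-in
    best_e = energy(best)
    trace = [best_e]
    for _ in range(n_iters):
        for _ in range(samples_per_iter):
            cand = sample_around(best)
            e = energy(cand)
            if e < best_e:  # keep only improvements
                best, best_e = cand, e
        trace.append(best_e)
    return best, best_e, trace
```

Because only improving samples replace the seed state, the returned trace mirrors the monotone convergence curves of Fig. 4 for well-chosen anneal fractions.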
[See PDF for image]
Fig. 4
Quantum Evolution Monte Carlo with reverse annealing convergence.
Each sub-plot shows the QEMC convergence for each of the 16 separate, 64-variable QUBOs that are the set of patch sub-problems. The anneal fraction (and thereby the transverse field proportion relative to the diagonal problem Hamiltonian) at which the reverse anneal chain is paused is varied, and the resulting minimum energy found in the 1000 samples at each Monte Carlo step is plotted. All quantum annealing computations used an annealing time of 100 microseconds.
Table 1. Solution quality summary for the different solvers used.
QUBO index | Ground state energy (CPLEX) | Optimal solution sparsity (CPLEX) | SA min. energy | SA min. energy count | QA QEMC min. energy | QA QEMC min. energy count | Loihi 1 min. energy | Loihi 2 iterated warm starting min. energy |
|---|---|---|---|---|---|---|---|---|
0 | −29.016 | 6/64 | −29.016 | 826/1000 | −29.016 | 394186/400,000 | −26.308 | −28.0741 |
1 | −23.4522 | 12/64 | −23.4522 | 25/1000 | −23.2522 | 47038/400,000 | −10.624 | −23.2522 |
2 | −24.1331 | 9/64 | −24.1331 | 44/1000 | −23.9252 | 20165/400,000 | −15.784 | −23.4876 |
3 | −29.5586 | 8/64 | −29.5586 | 117/1000 | −29.5586 | 572222/400,000 | −29.206 | −29.4979 |
4 | −22.1755 | 10/64 | −22.1755 | 95/1000 | −21.6989 | 14738/400,000 | −15.369 | −21.8805 |
5 | −15.6869 | 10/64 | −15.6869 | 26/1000 | −15.6869 | 14816/400,000 | −13.188 | −14.9114 |
6 | −16.8891 | 9/64 | −16.8891 | 123/1000 | −16.8891 | 10487/400,000 | −12.744 | −15.8988 |
7 | −23.9319 | 9/64 | −23.9319 | 103/1000 | −23.9319 | 342411/400,000 | −15.602 | −23.6264 |
8 | −17.416 | 10/64 | −17.416 | 19/1000 | −17.3972 | 11835/400,000 | −13.432 | −16.7931 |
9 | −20.2117 | 7/64 | −20.2117 | 45/1000 | −19.591 | 16110/400,000 | −16.036 | −18.6999 |
10 | −21.703 | 7/64 | −21.703 | 41/1000 | −20.7899 | 49428/400,000 | −18.182 | −20.2913 |
11 | −18.2355 | 11/64 | −18.2355 | 13/1000 | −17.2995 | 13251/400,000 | −8.9072 | −16.4587 |
12 | −27.2012 | 10/64 | −27.2012 | 97/1000 | −26.7890 | 24604/400,000 | −21.610 | −26.621 |
13 | −21.0627 | 13/64 | −21.0627 | 66/1000 | −20.6227 | 59717/400,000 | −8.747 | −20.2434 |
14 | −21.254 | 11/64 | −21.254 | 29/1000 | −21.2437 | 90453/400,000 | −8.771 | −20.5983 |
15 | −27.9347 | 5/64 | −27.9347 | 266/1000 | −27.9347 | 14957/400,000 | −26.593 | −27.9347 |
Optimal solutions and the best solutions found by simulated annealing, quantum annealing, and Loihi 2 for each of the 16 QUBO problems (each QUBO contains 64 variables). Objective function evaluations (i.e., energies) are rounded to a precision of 4 decimal places. The optimal solution sparsity is a fraction out of 64, showing how many of the variables are in the 1 state. The RA QEMC ground state energy success proportion is out of all samples (including parallel QA samples) across all parameter choices that were tested (specifically, the 4 different anneal fractions at which the pause was set).
The ordering of the sub-plots (in terms of the QUBO problem being solved) is consistent between Figs. 3 and 4, facilitating a comparison. Notably, all of the iterated reverse annealing plots of Fig. 4 show very clear and consistent convergence behavior (e.g., for s = 0.5), whereas the iterated warm starting on Loihi 2 does not show equally clear convergence behavior as a function of the iteration index.
Table 1 also reports the sparsity of the optimal bitstring computed by CPLEX. Simulated annealing always found the optimal solution. Loihi 1 found the optimal solution for 0 out of the 16 QUBOs, so its optimal solution count is not shown. The Loihi 1 results are shown for comparison against Loihi 2 and are taken from the simulations in ref. 14, where the minimum energies are taken from the distribution of 2000 samples computed across a varied set of device parameters. The minimum energies reported for both iterated reverse quantum annealing and iterated warm starting on Loihi 2 are the minimum energies found over the entire simulation (which, for converged simulations, is the last point found after the 100 iterations). The reported minimum energies from Loihi 2 are taken from the distribution of 100 samples obtained from the iterated warm starting procedure.
Table 1 verifies that the trained sparse coding QUBOs indeed have solution vectors that are reasonably sparse. However, although unlikely for our types of QUBO models, there could exist multiple variable assignments that give the same optimal objective function value (i.e., degenerate ground states). Moreover, as CPLEX only returns one optimal solution, the reported CPLEX results do not capture the case of more than one optimal solution with different sparsity. We do not report compute time in Table 1 because it is relatively stable across different problem instances: the mean QPU time to obtain 1000 samples on the D-Wave quantum processor is 1.3 seconds, and the mean wall-clock time to obtain 300 samples on the Loihi 2 processor (300 samples taken during a single simulation sweep, measured every 20 time steps over the 6000 time step simulation) is 6.7 seconds for the randomly initialized first iteration and 5.4 seconds for each subsequent warm-started iteration.
Binary sparse coding on 128 variables with QEMC
Figure 5 shows the reverse annealing QEMC protocol applied to 128-variable binary sparse coding QUBOs. These 128-variable QUBOs are generated using the same fMNIST image as the 64-variable QUBO reconstruction, but in this case the image is partitioned into 4 QUBO models instead of 16. The disjoint minor embeddings used for the 128-variable RA QEMC procedure are shown in Fig. 8, where only 2 of the all-to-all random minor embeddings could be tiled. In Fig. 5, we see that QEMC fails to reach the solutions that simulated annealing finds. This is notable because it shows very different performance compared to the 64-variable QUBO problems. However, these results are consistent with the properties of minor embeddings with necessarily longer chains (in particular, higher chain break rates), especially considering the coefficient precision requirements of these binary sparse coding QUBO problems.
[See PDF for image]
Fig. 5
Quantum Evolution Monte Carlo with reverse annealing for 128 variable QUBOs.
Each line is plotting the Monte Carlo simulation result minimum energy found (out of the 1000 samples measured at each iteration) for different anneal fractions s at which the anneal is (symmetrically) paused at during reverse annealing.
Both Figs. 4 and 5 show that iterating reverse annealing, applied to these fully connected QUBOs, significantly improves the found solutions. In particular, when a good anneal fraction is used for the reverse anneal pause, the solution quality monotonically improves. This is significant to see for such large minor embedded optimization problems. The effectiveness of iterated reverse annealing, or QEMC, has not been empirically demonstrated for minor embedded systems of this size, as far as we are aware.
Like Fig. 4, Fig. 5 plots the best solution found when executing simulated annealing on the problem QUBOs; and as in Fig. 4, the simulated annealing solutions are also the optimal solutions found by CPLEX. In other words, the dashed horizontal black line in Fig. 5 denotes the optimal energy, or cost value, of these 128-variable QUBOs.
The best anneal fraction, which trades off exploration of new solutions against retaining memory of the initial spin state, is usually either 0.5 or 0.55; this holds for both Figs. 4 and 5. Notably, using a reverse-annealing pause at s = 0.4 causes the Monte Carlo chain not to converge, but rather to oscillate approximately at the energy of the state it started from.
Discussion
In this article we have demonstrated that moderately sized binary sparse coding QUBO models can be sampled using both quantum annealing and neuromorphic computing technologies. These binary sparse coding QUBO problems were constructed using an unsupervised and un-normalized dictionary learning approach. We found that the iterated reverse quantum annealing technique, implemented on a D-Wave quantum annealer, performed better than standard forward annealing. Similarly, iterated warm starting on Loihi 2 improved the solution quality of sampling the QUBOs, although this energy improvement was marginal. We also found that the Loihi 2 sampling was better than the sampling done on Loihi 1 in ref. 14. Notably, a consistent trend for the Loihi processors is that the generated samples tend to be quite sparse: more sparse than sub-optimal samples from standard simulated annealing or quantum annealing.
Amongst others, the following reasons might have impeded the forward and reverse annealing sampling from reaching the global minimum in our experiments. The first, and almost certainly the most important, is the coefficient precision limit that can be programmed on current D-Wave devices. The sparse coding QUBOs have quadratic terms which are defined with high precision and are nearly all very close to zero (see Fig. 6). It is very likely that such sparse coding QUBO coefficients require higher precision than what the D-Wave hardware supports. Note that the precision limits on the hardware are not a well defined bit precision, but rather are influenced by a number of factors on the analog hardware, such as asymmetric digital-to-analog converter quantization, making the exact precision limits hard to know for all problem types and simulation settings25. The next obstacle is the use of a minor embedding. A minor embedding is necessary for implementing general QUBO problems on D-Wave hardware; however, such embeddings contain ferromagnetic chains which dominate the energy scale programmed on the chip, thereby preventing a larger range of the logical problem coefficients from being used effectively in the computation. Additionally, although majority-vote unembedding is used, chain breaks present in the samples likely indicate sub-optimal solution quality. Analog control errors when the hardware represents the diagonal Hamiltonian on the physical machine can also introduce errors into the simulation. Finally, interaction with the environment causes the system to decohere and introduces noise into the simulation.
[See PDF for image]
Fig. 6
Binary sparse coding QUBO model coefficient matrices.
Colormaps of the full QUBO symmetric matrix for a (randomly chosen and representative) sparse coding QUBO with 64 variables (left) and 128 variables (right). Each heatmap corresponds to the coefficient for each of the linear terms (on the diagonal) and for each of the quadratic terms (on the off-diagonals). The axis labels correspond to the matrix row and column indices.
For future research, it would be interesting to formulate sparse coding QUBO problems that match a desired connectivity graph. This would be particularly useful for generating hardware-compatible QUBOs for D-Wave hardware graphs, as it would allow one to considerably scale up the size of QUBO problems that can be solved (up to the size of the entire hardware graph, which has over 5000 qubits). In particular, probing the scaling of solution quality for future generations of both neuromorphic and quantum annealing processors would be of interest for comparison against existing classical optimization algorithms. In general, creating significantly larger sparse coding QUBOs using un-normalized dictionary learning14, and then evaluating methods to find good solutions to those larger QUBOs, would be of interest for future studies.
Methods
In this section we present the hardware used in this research. After some mathematical considerations on the problem under investigation (Section Binary Sparse Coding QUBOs), we give details on Intel’s Loihi 2 neuromorphic chip (Section Implementation on Loihi), followed by implementation details for the D-Wave quantum annealer (Section Quantum Annealing Implementation). Section Simulated Annealing Implementation describes the classical simulated annealing implementation which we compare against, and Section Exact QUBO Solutions Using CPLEX discusses the exact solutions obtained with IBM’s CPLEX solver26, which is exact and deterministic.
Binary sparse coding QUBOs
For both Intel’s Loihi 2 neuromorphic chip as well as the D-Wave quantum annealer, the problem being solved has to be given as a QUBO problem. In this formulation, the observable states of any variable are 0 and 1. During the solution process, both technologies can be described as effectively treating each spin/neuron in some probabilistic combination of 0 and 1, in which the variable is both active and non-active at the same time. After the quantum annealing process or the firing process in the neural net is complete, each variable will return to one of the classical states 0 or 1.
We start by reformulating Eq. (1) in QUBO form. First, we expand out the objective function,
E(a) = ∥x − Da∥₂² + λ∥a∥₀ = x ⋅ x − 2∑i ai (Di ⋅ x) + ∑i ∑j ai aj (Di ⋅ Dj) + λ∑i ai,    (2)

where a = (a1, …, ap) and we have used ∥a∥₀ = ∑i ai for binary ai. Next, we take the derivative with respect to one component of a,

∂E/∂ai = λ + Di ⋅ Di − 2 Di ⋅ x + 2∑j≠i aj (Di ⋅ Dj).    (3)
As expected, multiplying out Eq. (1) yields a quadratic form in a, and taking the derivative yields an expression that is linear in the components of a. Hence, we can recast our objective function as a QUBO problem. For this, we define the following two transformations:

hi = λ + Di ⋅ Di − 2 Di ⋅ x,    (4)

Qij = 2 Di ⋅ Dj  for i < j.    (5)

Using Eqs. (4) and (5), where the self-interaction term Di ⋅ Di ai² satisfies ai² = ai for binary variables and hence can be absorbed into the linear term, we can rewrite Eq. (1), up to the constant x ⋅ x, as the following QUBO,

E(a) = ∑i hi ai + ∑i<j Qij ai aj,    (6)
which is now in suitable form to be solved on either the Loihi 2 neuromorphic chip or the D-Wave quantum annealer. Figure 6 shows a visual representation of both a 64 and a 128 variable sparse coding QUBO as a symmetric square matrix, where the diagonal terms are the linear terms of Eq. (6) and the off-diagonal terms are the quadratic variable interactions. The coefficients for each term are encoded using a colormap. Figure 6 shows that the linear terms are typically strongly positively or negatively weighted compared to the off-diagonal terms.

The binary sparse coding QUBOs are constructed using the un-normalized dictionary learning introduced in ref. 14. In particular, the dictionary learning is performed using classical simulated annealing with the settings described in Section “Simulated Annealing Implementation”. As done previously14, a single image from the standard fashion MNIST (fMNIST) data set27 is used: the 28 × 28 image is divided into sixteen 7 × 7 patches, and a dictionary size of 64 variables is used for each of the 16 patches. We also evaluate a version where the dictionary size is increased to 128 variables.
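For illustration, the QUBO matrix of Eq. (6) can be assembled directly from a dictionary and an image patch. The following sketch is our own minimal Python rendering of the transformations in Eqs. (4) and (5), with illustrative function names:

```python
import numpy as np

def sparse_coding_qubo(x, D, lam):
    """Assemble the QUBO of Eq. (6): linear terms lam + Di.Di - 2 Di.x on
    the diagonal, couplers 2 Di.Dj on the upper off-diagonal (the constant
    x.x is dropped since it does not affect the argmin)."""
    G = D.T @ D                          # Gram matrix of the dictionary
    lin = lam + np.diag(G) - 2.0 * (D.T @ x)
    Q = 2.0 * np.triu(G, k=1)            # quadratic couplers, i < j only
    return Q + np.diag(lin)

def qubo_energy(Q, a):
    # For binary a, a_i^2 = a_i, so the diagonal acts as the linear term.
    return float(a @ Q @ a)
```

By construction, qubo_energy(Q, a) + x ⋅ x equals the energy of Eq. (1) for every binary a.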
Unsupervised dictionary learning
Sparse coding optimization can be viewed as a two-step process. First, a dictionary of basis vectors is learned in an unsupervised manner using a local Hebbian rule. When solving the convex (C)LASSO problem, it is typically necessary to re-normalize the columns of the dictionary D after each learning epoch. This normalization is essential for convergence in (C)LASSO because the entries of the sparse vector a can take on any real value.
Here, we use a learning procedure presented in ref. 14 that enables the algorithm to determine the optimal feature norms based on a specified target average sparsity level, s ∈ (0, 1). The dictionary is initialized with features drawn from a normal distribution, ensuring their norms are randomly below 1, along with a small sparsity penalty parameter λ. After solving the binary sparse coding problem for each sample in the training set, the dictionary is updated. If the average sparsity across the training epoch exceeds the target s, the penalty parameter λ is increased for the next epoch until the average reconstruction error and sparsity converge.
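A minimal sketch of this λ feedback might look as follows; the multiplicative update and its rate are illustrative assumptions on our part, not the exact rule of ref. 14:

```python
def adapt_sparsity_penalty(lam, epoch_activity, target_s, rate=1.1):
    """One epoch of lambda feedback: `epoch_activity` is the measured
    average fraction of active coefficients over the training epoch.
    If it exceeds the target s, raise the penalty; otherwise relax it.
    The multiplicative rule and its rate are illustrative assumptions,
    not the exact update of ref. 14."""
    if epoch_activity > target_s:
        return lam * rate   # too dense: penalize activations more
    return lam / rate       # sparser than the target: allow more detail

# Sketch of the surrounding training loop (measurements are hypothetical):
lam = 0.01
for epoch_activity in [0.40, 0.30, 0.18, 0.15]:
    lam = adapt_sparsity_penalty(lam, epoch_activity, target_s=0.2)
```

The loop terminates in practice when the average reconstruction error and sparsity converge, as described above.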
Loihi 2 neuromorphic chip implementation
The Intel Loihi 2 neuromorphic processor is designed to replicate core elements of biological neural structure and processing, thereby targeting specific classes of computational tasks. Compared to its predecessor, Loihi 1, Loihi 2 offers several enhancements, including improved manufacturing processes, more connectivity, faster processing, and lower energy consumption. Each Loihi 2 chip contains roughly one million neurons and approximately 120 million synapses.
Loihi hardware employs binary spike events, wherein neurons communicate through digital pulses that can be either “on” or “off” (1 or 0). This binary scheme mirrors the all-or-nothing firing patterns observed in biological neurons, though with lower granularity.
Whereas the Loihi 1 architecture has been used with the Stochastic Constraint Integrate and Fire (SCIF) neuron model14,28 to solve QUBO problems, the expanded capabilities of Loihi 2 support more sophisticated models. One such model is the Non-equilibrium Boltzmann Machine (NEBM) neuron model, available in Intel’s lava software package and specifically designed for optimization tasks such as QUBO. Drawing on principles of statistical mechanics and thermodynamics, particularly the concept of a non-equilibrium steady state29, the NEBM model departs from traditional approaches by operating under continuously driven conditions rather than settling into thermal equilibrium.
In contrast to equilibrium-based Boltzmann machines, which treat neurons as binary units obeying a Boltzmann distribution, NEBM introduces dynamics influenced by external driving forces. This design encourages more extensive exploration of the solution space and offers a mechanism to escape local minima. Additionally, NEBM does not conserve energy in the traditional sense; it instead dissipates energy due to the open, driven nature of the system. This dissipation is essential for creating an effective annealing process, wherein the system is gradually “cooled” to stabilize into low-energy configurations that correspond to high-quality solutions. Unlike strictly equilibrium-based models, NEBM inherently incorporates time-dependent behavior, allowing for dynamic update rules that can be tuned for faster convergence or more thorough exploration.
When applying NEBM to a QUBO problem, each variable in the QUBO formulation is mapped onto a corresponding neuron, and the pairwise interactions defined by the QUBO matrix translate directly into synaptic connections. In a manner analogous to simulated annealing, NEBM orchestrates a progressive reduction of external driving influences, guiding the network toward low-energy states. The non-equilibrium nature of the model ensures that the system remains driven by both internal neuronal processes and external inputs throughout its evolution. Ultimately, this paradigm is well-suited for deployment on neuromorphic platforms such as Loihi 2, where spiking neural networks can efficiently implement these dynamic, non-equilibrium processes to solve complex optimization problems.
State variables and parameters
Each NEBM neuron i maintains at least the following state variables:
vi(t): The membrane potential at time step t.
ai(t): The binary {0, 1} spike output at time t.
T(t): The system temperature (or inverse temperature β(t) ≡ 1/T(t)), which follows an annealing schedule.
ri(t): A refractory state or counter to modulate firing behavior over time.
α, ρ, κ, γ, θ: Hyperparameters controlling decay factors, refractory penalties, noise scales, and thresholds.
Membrane potential update
The discrete-time update for the membrane potential is:
vi(t + 1) = α vi(t) − ∑j Qij aj(t) − hi + γ ηi(t; T),    (7)

where:

α ∈ [0, 1) is a decay constant ensuring vi does not grow unbounded.

Qij is the weight from neuron j to i, derived from the QUBO matrix Q.

hi is the bias term for neuron i, from the vector h.

γ ηi(t; T) represents noise injection from a particular distribution, scaled by γ. The temperature T(t) typically controls the noise variance or amplitude.
Spiking/stochastic “Boltzmann-like” activation
After updating the membrane potential, the neuron generates a binary spike output ai(t + 1), where vi(t + 1) denotes the updated membrane potential of neuron i at time t + 1. Although a deterministic threshold is available, we chose a biologically inspired and exploration-friendly approach that computes the spike output stochastically:

P(ai(t + 1) = 1) = σ(vi(t + 1) / T(t)),    (8)
where σ( ⋅ ) is the logistic function and T(t) is the temperature. The higher the membrane potential (relative to the temperature), the greater the probability of spiking.

Refractory and reset dynamics
A refractory mechanism helps mimic annealing and prevents excessive firing. Our approach is:
ri(t + 1) = ρ ri(t) + ai(t + 1),    (9)

vi(t + 1) ← vi(t + 1) − κ ri(t + 1),    (10)

where:

ρ ∈ (0, 1) is a decay factor for the refractory state ri.

κ is a refractory scaling term that reduces the membrane potential based on recent spiking.
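Putting Eqs. (7)-(10) together, one NEBM time step for all neurons can be sketched as below. The Gaussian noise model and the sign conventions are simplifying assumptions on our part; the lava implementation on Loihi 2 differs in detail:

```python
import numpy as np

def nebm_step(v, r, a, Q, h, T, rng,
              alpha=0.9, rho=0.8, kappa=1.0, gamma=1.0):
    """One simplified NEBM update for all neurons at once (Eqs. (7)-(10)).

    Gaussian noise and these sign conventions are illustrative assumptions;
    the lava implementation on Loihi 2 differs in detail.
    """
    # Eq. (7): leaky integration of the negated local QUBO field,
    # plus temperature-scaled noise.
    v = alpha * v - (Q @ a + h) + gamma * rng.normal(0.0, T, size=v.shape)
    # Eq. (8): stochastic, Boltzmann-like firing probability.
    p_spike = 1.0 / (1.0 + np.exp(-v / max(T, 1e-9)))
    a = (rng.random(v.shape) < p_spike).astype(int)
    # Eq. (9): refractory trace accumulates recent spikes.
    r = rho * r + a
    # Eq. (10): refractory penalty pushes the potential back down.
    v = v - kappa * r
    return v, r, a
```

Iterating nebm_step while lowering T according to the annealing schedule of the next subsection yields a simulated-annealing-like search over the binary states a.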
Annealing schedule
To help the network converge on low-energy solutions, the temperature T decreases over time:
T(t + 1) = T(t) − ΔT,  T(0) = T0,    (11)

where T0 is the initial temperature and ΔT is a constant decrement (for a linear schedule). As T decreases, the system becomes more likely to settle into low-energy (potentially optimal) configurations rather than continuing to hop among many states.

Implementation on Loihi
This section guides through several implementation details on Intel’s Loihi 2 spiking neuromorphic chip.
The exact hyperparameters used to run Loihi 2 are as follows. The neuron model is set to
An additional technique that we introduce is iterated warm starting on Loihi 2: we perform a standard run, take the best solution found in that run, and use it to initialize the next run. This is repeated for a total of 100 iterations; the initial run is initialized with a random binary vector. As far as we are aware, this is the first description and assessment of iterated warm starting on a spiking neuromorphic processor.
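The NEBM dynamics of Eqs. (7)–(11) combined with the iterated warm-start loop can be sketched in plain Python. This is a CPU illustration under our own assumed update rule, hyperparameter values, and function names, not Lava/Loihi code:

```python
import numpy as np

def nebm_run(Q, h, a0, steps=500, alpha=0.5, rho=0.9, kappa=1.0,
             gamma=0.2, T0=2.0, dT=None, rng=None):
    """One stochastic NEBM anneal; returns the best binary vector found
    and its QUBO energy E(a) = a^T Q a + h^T a. All constants are
    illustrative assumptions, not the values used on hardware."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(h)
    dT = dT if dT is not None else T0 / steps   # linear cooling, Eq. (11)
    v = np.zeros(n)                             # membrane potentials
    r = np.zeros(n)                             # refractory states
    a = a0.copy()
    energy = lambda x: x @ Q @ x + h @ x
    best_a, best_e = a.copy(), energy(a)
    T = T0
    for _ in range(steps):
        noise = gamma * rng.normal(size=n)      # noise injection, Eq. (7)
        v = alpha * v - (Q @ a + h) + noise     # membrane update, Eq. (7)
        r = rho * r + a                         # refractory accumulation, Eq. (9)
        v = v - kappa * r                       # refractory penalty, Eq. (10)
        # Boltzmann-like stochastic spiking, Eq. (8) (clipped for stability).
        p = 1.0 / (1.0 + np.exp(-np.clip(v / T, -60.0, 60.0)))
        a = (rng.random(n) < p).astype(float)
        e = energy(a)
        if e < best_e:
            best_a, best_e = a.copy(), e
        T = max(T - dT, 1e-6)                   # keep the temperature positive
    return best_a, best_e

def iterated_warm_start(Q, h, runs=100, **kwargs):
    """Iterated warm starting: seed each run with the best solution so far;
    the first run starts from a random binary vector."""
    rng = np.random.default_rng(0)
    a = rng.integers(0, 2, size=len(h)).astype(float)
    best_e = np.inf
    for _ in range(runs):
        a, e = nebm_run(Q, h, a, rng=rng, **kwargs)
        best_e = min(best_e, e)
    return a, best_e
```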
Despite Loihi 2’s advancements, several factors can lead to non-ideal performance when solving QUBO problems with the NEBM model. First, hardware constraints such as limited on-chip memory and communication bandwidth can restrict the size or connectivity of the problem that can be efficiently mapped, which may result in suboptimal partitioning or approximation of the underlying QUBO. Second, the discrete nature of spiking dynamics and the complexity of tuning neuron and synapse parameters mean that finding the right hyperparameters (e.g., annealing schedules, refractory settings) often involves extensive trial and error, leading to potential inefficiencies. Third, real-time spike communication overhead, particularly if signals must be routed across multiple cores or chips, can slow the effective convergence rate. Fourth, noise and variability inherent in spiking hardware may occasionally cause the system to settle in unfavorable local minima, especially for large-scale or high-dimensional problems. Finally, while non-equilibrium approaches can explore more of the solution space, they also rely on carefully calibrated driving forces, and any inaccuracies or mismatches in these driving inputs can impede overall performance. These factors, taken together, highlight that while Loihi 2 shows promise for neuromorphic optimization, further refinements in hardware design, software tooling, and parameter-tuning methodologies are critical to unlocking its full potential.
Quantum annealing implementation
This section gives some background on the D-Wave quantum device we employ to minimize the sparse coding QUBO of Eq. (6).
Quantum annealing is a type of analog quantum computation that generally has the goal of computing low energy states in a physical system that correspond to low energy states of a classical spin system (i.e., diagonal Hamiltonian)4,5,30, 31–32. A review of quantum annealing can be found in ref. 33. D-Wave manufactures quantum annealers using superconducting flux qubits6,7, several generations of which have been utilized in a large number of studies of different optimization problems and physical spin systems34, 35, 36, 37–38.
The D-Wave quantum annealer is designed to minimize QUBO problems, given by functions of the form of Eq. (6). In Eq. (6), the linear terms and the quadratic couplers are chosen by the user to define the problem under investigation. The variables are binary and unknown. The aim is to find a configuration of a1, …, ap which minimizes Eq. (6).
Quantum annealing is based on adiabatic quantum computation4,5,30, 31–32, where the system is initialized in the ground state of a Hamiltonian that is easy to prepare; in the case of D-Wave, this is the transverse field Hamiltonian. The system is then slowly transitioned to a (diagonal) Hamiltonian of interest, whose ground state is unknown (and likely hard to find). In ideal adiabatic settings, namely if the transition is sufficiently slow, the system remains in the ground state throughout, ending in the ground state of the final Hamiltonian. Therefore, this can be used as a method of computing ground states of spin systems (Ising models). The system is described as
H(t) = A(t) Hinit + B(t) HIsing    (12)
where A(t) and B(t) define the anneal schedule, Hinit is the transverse field Hamiltonian, and HIsing is the problem Ising model that we wish to find the optimal solution of. Ising models can easily be converted into QUBOs, and vice versa, and since many optimization problems can be formulated as QUBOs, quantum annealing can be used to sample solutions of a large number of optimization problems39,40. The physical spin system that the programmable quantum annealers created by D-Wave implement is defined specifically as
H(s) = − (A(s)/2) ∑i σx(i) + (B(s)/2) HIsing    (13)
where s ∈ [0, 1] is a variable called the anneal fraction which defines a weighting of the transverse field and problem Hamiltonians (by A(s) and B(s)) at each point in time during the anneal, σx(i) is the Pauli x matrix acting on qubit i, and HIsing is the problem Ising model that we wish to sample low energy states from. In our case, HIsing is the QUBO of Eq. (6) converted to an Ising model such that all of the variable states are spins. The user can program a variety of additional features on the device, including different anneal schedules and annealing times. The other D-Wave device feature we use is reverse quantum annealing, see Section Quantum Evolution Monte Carlo with Reverse Annealing. Reverse quantum annealing uses a modified anneal schedule where we initialize the system in a classical spin configuration (each variable is either spin up or spin down). Then, we program an anneal schedule that varies the anneal fraction s such that some proportion of the transverse field Hamiltonian is introduced into the system, up until the system state is measured (at which point, the anneal fraction must be s = 1).
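The Ising–QUBO equivalence mentioned above is a change of variables ai = (1 + si)/2. A minimal sketch of the conversion (helper names are ours), which can be verified by checking that the two energies agree on every configuration:

```python
import numpy as np

def qubo_to_ising(Q):
    """Convert E(a) = a^T Q a with a_i in {0, 1} (diagonal of Q = linear terms)
    to an Ising model E(s) = h.s + sum_{i<j} J_ij s_i s_j + offset with
    s_i in {-1, +1}, via the substitution a_i = (1 + s_i) / 2."""
    Q = np.triu(Q + np.tril(Q, -1).T)   # fold into upper-triangular form
    J = np.triu(Q, 1) / 4.0             # quadratic couplers
    h = (np.diag(Q) / 2.0
         + (np.sum(np.triu(Q, 1), axis=1) + np.sum(np.triu(Q, 1), axis=0)) / 4.0)
    offset = np.sum(np.diag(Q)) / 2.0 + np.sum(np.triu(Q, 1)) / 4.0
    return h, J, offset

def qubo_energy(Q, a):
    return a @ np.triu(Q + np.tril(Q, -1).T) @ a

def ising_energy(h, J, offset, s):
    return h @ s + s @ J @ s + offset
```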
The workflow for using the D-Wave quantum annealer is described in the following sections. All quantum annealing results presented in this article were computed on the D-Wave
Fig. 7
Calibrated anneal schedule for the D-Wave quantum annealing processor used in this study.
The exact A(s) and B(s) values are plotted in units of GHz as a function of the anneal fraction s ∈ [0, 1].
Parallel quantum annealing minor embeddings
D-Wave
Normally, the connectivity graph of the logical qubits in Eq. (6) does not match that of the hardware qubits on the D-Wave QPU. In this case, a minor embedding of the problem onto the hardware architecture is necessary. The idea is to represent each logical variable state by a connected path of qubits on the hardware graph, where the couplers between those qubits are set to strong ferromagnetic values so that their spin states are incentivized to be the same (specifically, disagreeing spins incur an energy penalty)37,43,45, 46, 47–48. These paths of physical qubits are called chains, and each chain is intended to encode the logical variable state of one variable in the original QUBO problem.
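To illustrate the chain mechanism on a toy example (the couplings, chain strength, and helper names below are our own illustrative choices, not an Ocean embedding): logical spin 2 of a 3-spin Ising triangle is represented by the two-qubit chain {2, 3} bound by a ferromagnetic coupler, and a brute-force check confirms the embedded ground state agrees with the logical one.

```python
import itertools

# Logical Ising triangle: biases h on 3 spins, couplers J on the 3 edges.
h = {0: 0.5, 1: -0.3, 2: 0.2}
J = {(0, 1): 1.0, (0, 2): -0.8, (1, 2): 0.6}

def logical_energy(s):
    return (sum(h[i] * s[i] for i in h)
            + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

# Embedded problem: logical spin 2 becomes the chain {2, 3}. Edge (1, 2) is
# rerouted to physical qubit 3, and a ferromagnetic coupler -chain_strength
# between qubits 2 and 3 penalizes "broken" chains (s2 != s3).
chain_strength = 2.0
h_emb = {0: 0.5, 1: -0.3, 2: 0.2, 3: 0.0}
J_emb = {(0, 1): 1.0, (0, 2): -0.8, (1, 3): 0.6, (2, 3): -chain_strength}

def embedded_energy(s):
    return (sum(h_emb[i] * s[i] for i in h_emb)
            + sum(Jij * s[i] * s[j] for (i, j), Jij in J_emb.items()))

def ground_state(energy, n):
    return min(itertools.product([-1, 1], repeat=n), key=energy)

g_log = ground_state(logical_energy, 3)
g_emb = ground_state(embedded_energy, 4)
assert g_emb[2] == g_emb[3]     # the chain is intact in the ground state
assert g_emb[:3] == g_log       # unembedded solution matches the logical one
```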
In order to make more effective use of the large hardware graph of D-Wave’s
Fig. 8
Parallel quantum annealing fully connected minor embeddings on the hardware graph of
Seven disjoint minor embeddings of size 64 variables with all-to-all connectivity (left), and two disjoint minor embeddings of size 128 variables with all-to-all connectivity (right) on the Pegasus P16 hardware graph of the
Choice of D-wave hardware parameters
With newer generations of the D-Wave quantum annealers, more features have been added that give the user greater control over the anneal process. The specific parameters we used are listed in this section.
One necessary consequence of the minor embeddings is the presence of chains, that is, the representation of a logical qubit by a set of physical hardware qubits on the chip. However, after annealing, it is not guaranteed that all the physical qubits in a chain take the same value (either zero or one), although they technically represent the same logical qubit. Such a chain is called “broken”. To arrive at a value for the logical qubit in Eq. (6), we used the majority vote chain break resolution algorithm37.
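Majority vote chain break resolution assigns each logical variable the value held by the majority of its chain’s physical qubits; a minimal sketch (the helper names and example data are ours):

```python
from collections import Counter

def majority_vote(sample, embedding):
    """Resolve broken chains: sample maps physical qubit -> 0/1,
    embedding maps logical variable -> list of physical qubits."""
    resolved = {}
    for var, chain in embedding.items():
        counts = Counter(sample[q] for q in chain)
        # The most common value wins; ties fall back to the value first
        # encountered along the chain.
        resolved[var] = counts.most_common(1)[0][0]
    return resolved

# Example: logical variable 1's chain is broken (its qubits disagree 2-to-1).
embedding = {0: [10, 11], 1: [12, 13, 14]}
sample = {10: 1, 11: 1, 12: 0, 13: 1, 14: 0}
print(majority_vote(sample, embedding))   # {0: 1, 1: 0}
```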
We employ the D-Wave annealer with an annealing time of 100 microseconds, and we query 1000 samples per D-Wave backend call. To compute the chain strength, we employ the uniform torque compensation feature with a UTC prefactor of 0.6. The UTC computation, given a problem QUBO, attempts to compute a chain strength which will minimize chain breaks while not too greatly disrupting the maximum energy scale programmed on the device53.
For all experiments involving reverse annealing, the schedule used is {[0, 1], [10, s], [90, s], [100, 1]}, where each tuple defines a point in time (in microseconds, from the start to the end of the anneal process) and an anneal fraction s. The anneal fraction is the normalized time used to control how the quantum Hamiltonian is moved from the initial superposition of states to the problem QUBO (which is a classical Hamiltonian) during the anneal54. The anneal schedule is constructed by linear interpolation between those four points. The reverse anneal schedule we use is symmetric: it has decreasing and increasing ramps of duration 10 microseconds on either side of a pause of duration 80 microseconds. We vary the anneal fraction s at which the pause occurs.
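For illustration, the piecewise-linear schedule defined by those four points can be evaluated as follows (a plain sketch of s(t); the function name and code are ours, not the D-Wave API):

```python
def anneal_fraction(t, s_pause, schedule_points=None):
    """Piecewise-linear reverse anneal schedule {[0,1],[10,s],[90,s],[100,1]}.
    t is the time within the anneal, in microseconds."""
    pts = schedule_points or [(0.0, 1.0), (10.0, s_pause),
                              (90.0, s_pause), (100.0, 1.0)]
    for (t0, s0), (t1, s1) in zip(pts, pts[1:]):
        if t0 <= t <= t1:
            # Linear interpolation within the current segment.
            return s0 + (s1 - s0) * (t - t0) / (t1 - t0)
    raise ValueError("t lies outside the schedule")

# Ramp down to s = 0.45, hold for 80 us, then ramp back to s = 1 for readout.
assert anneal_fraction(0, 0.45) == 1.0
assert abs(anneal_fraction(50, 0.45) - 0.45) < 1e-12   # mid-pause
assert abs(anneal_fraction(100, 0.45) - 1.0) < 1e-12   # measurement at s = 1
```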
Moreover, we called the D-Wave quantum annealer with flag reduce_intersample_correlation enabled for all experiments, which adds a pause in between each anneal so as to reduce self correlations between the anneals (those correlations may exist in time due to the spin bath polarization effect, see ref. 55). Both parameters readout_thermalization and programming_thermalization were set to 0 microseconds. The reverse annealing specific parameter reinitialize_state was enabled for all reverse annealing executions, causing the annealer to reapply the initial classical state after each anneal readout cycle56.
Quantum evolution Monte Carlo with reverse annealing
We use the D-Wave quantum annealer to compute a Monte Carlo chain of reverse quantum anneal sequences: the best solution from an initial set of anneal-readout cycles is encoded as the initial state of the next anneal, and the best solution found at each iteration is then used as the initial classical spin state of the subsequent (reverse) anneals. This process works on the logical problem, meaning after unembedding of all chained qubits.
To be precise, a sequence of reverse anneals is chained together in a Monte Carlo like process, where each subsequent round of reverse anneals is initialized with a classical state defined by the best solution found in the previous round. This chain of reverse anneals is initialized with the best solution found from a 100 microsecond forward anneal with 1000 samples (qubit measurements) and the standard linear anneal schedule (forward anneal denotes a standard annealing process, initialized in a uniform superposition rather than in a specific configuration of spins). Because of the use of parallel quantum annealing, there is a multiplier on the total number of samples obtained at each QEMC step: for the 64 variable QUBO problems we actually obtain 7000 samples per 1000 sample anneal-readout cycle, and for the 128 variable QUBO problems we obtain 2000 samples per 1000 sample anneal-readout cycle. Each reverse annealing step in the chain uses the parameter
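The QEMC loop can be sketched as follows; `sample_forward` and `sample_reverse` below are hypothetical stand-ins for the annealer calls, and the toy Ising-ring objective is purely illustrative:

```python
import random

def sample_forward(num_reads, n=8):
    """Stand-in for a forward anneal: returns random spin configurations."""
    return [[random.choice([-1, 1]) for _ in range(n)] for _ in range(num_reads)]

def sample_reverse(initial_state, num_reads):
    """Stand-in for reverse anneals seeded at initial_state: returns
    configurations near the seed (here, single random spin flips)."""
    out = []
    for _ in range(num_reads):
        s = list(initial_state)
        s[random.randrange(len(s))] *= -1
        out.append(s)
    return out

def energy(s):
    # Toy ferromagnetic Ising ring as a placeholder objective.
    return -sum(s[i] * s[(i + 1) % len(s)] for i in range(len(s)))

def qemc(num_steps=20, num_reads=100):
    """Chain of (mock) reverse anneals, each seeded with the best solution
    found so far; the chain is seeded by a (mock) forward anneal."""
    best = min(sample_forward(num_reads), key=energy)
    for _ in range(num_steps):
        candidate = min(sample_reverse(best, num_reads), key=energy)
        if energy(candidate) <= energy(best):   # keep the best-so-far seed
            best = candidate
    return best, energy(best)
```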
Simulated annealing implementation
Simulated annealing is a classical heuristic algorithm which samples combinatorial optimization problems quite effectively65. The QUBO models described in Section 4.1 can directly be sampled using a standard simulated annealing implementation, in this case we use the
Specifically, we use 1000 reads per QUBO, while all other settings are set to their default values. This means that the annealing schedule is set to
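A generic simulated annealing QUBO sampler can be sketched as follows; this is our simplified stand-in, not the dwave-neal implementation, and the geometric schedule and single-flip move set are illustrative choices:

```python
import math, random

def simulated_annealing_qubo(Q, h, num_sweeps=1000, T0=10.0, T1=0.1, seed=None):
    """Single-spin-flip Metropolis sampling of E(a) = a^T Q a + h^T a with
    a geometric temperature schedule from T0 down to T1."""
    rng = random.Random(seed)
    n = len(h)
    a = [rng.randint(0, 1) for _ in range(n)]

    def energy(a):
        return (sum(Q[i][j] * a[i] * a[j] for i in range(n) for j in range(n))
                + sum(h[i] * a[i] for i in range(n)))

    e = energy(a)
    best_a, best_e = a[:], e
    for sweep in range(num_sweeps):
        T = T0 * (T1 / T0) ** (sweep / max(num_sweeps - 1, 1))
        for i in range(n):
            a[i] ^= 1                          # propose flipping bit i
            e_new = energy(a)
            if e_new <= e or rng.random() < math.exp(-(e_new - e) / T):
                e = e_new                      # accept the move
                if e < best_e:
                    best_a, best_e = a[:], e
            else:
                a[i] ^= 1                      # reject: flip the bit back
    return best_a, best_e
```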
Exact QUBO solutions using CPLEX
In order to determine how well all of the evaluated heuristic methods worked at sampling the optimal solution(s) of the generated QUBO models, we used the classical CPLEX26 solver, which can solve the generated QUBOs exactly. CPLEX is a commercial mathematical optimization software package. To this end, we first transform each QUBO into a Mixed Integer Quadratic Programming (MIQP) problem, and solve those problems exactly using the CPLEX solver26. CPLEX deterministically finds and then verifies the optimal solution of a combinatorial optimization problem. Software such as CPLEX is used either to generate good heuristic solutions within a compute time budget, or to spend as much compute time as is required to verify optimality of a solution; in our case we use the latter (i.e., no time limit) in order to compute the optimal energy (ground state energy) for these QUBO problems. CPLEX, although not as performant as newer optimization software such as Gurobi, is a very standard classical benchmark for (general) classical optimization techniques. The CPLEX solver does not provide information on whether the QUBOs have multiple optimal solutions or only one; however, it is expected that in general these QUBOs will not have degenerate ground states. We leave further investigations on the ground state degeneracy of these binary sparse coding QUBO problems to future research.
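As a sanity check for small instances, the optimal energy reported by an exact solver can be reproduced by exhaustive enumeration (feasible only up to roughly 25 variables; the helper below is our own illustration, not part of CPLEX):

```python
from itertools import product

def exact_qubo_minimum(Q, h):
    """Enumerate all 2^n binary vectors and return one minimizer of
    E(a) = a^T Q a + h^T a together with the minimum energy."""
    n = len(h)
    best_e, best_a = float("inf"), None
    for a in product((0, 1), repeat=n):
        e = (sum(Q[i][j] * a[i] * a[j] for i in range(n) for j in range(n))
             + sum(h[i] * a[i] for i in range(n)))
        if e < best_e:
            best_e, best_a = e, a
    return best_a, best_e
```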
Acknowledgements
This work was supported by the U.S. Department of Energy through Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (under the contract No. 89233218CNA000001). The research presented in this article was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under the project numbers 20220656ER and 20240032DR. This research used resources (D-Wave Leap Cloud access) provided by the Los Alamos National Laboratory Institutional Computing Program, which is supported by the U.S. Department of Energy National Nuclear Security Administration under Contract No. 89233218CNA000001. Research presented in this article was supported by the NNSA’s Advanced Simulation and Computing Beyond Moore’s Law Program at Los Alamos National Laboratory. We gratefully acknowledge support from the Advanced Scientific Computing Research (ASCR) program office in the Department of Energy’s (DOE) Office of Science, award #77902. This research used resources provided by the Darwin testbed at Los Alamos National Laboratory (LANL) which is funded by the Computational Systems and Software Environments subprogram of LANL’s Advanced Simulation and Computing program (NNSA/DOE). This paper has been assigned LANL report number LA-UR-23-26361.
Author contributions
G.H., K.H., and E.P. wrote the main manuscript text. All authors reviewed the manuscript text. E.P. and K.H. executed all experiments and simulations and prepared all of the plots. All authors contributed to the experimental design and methodologies presented in the manuscript.
Data availability
Data is publicly available on Zenodo at doi: 10.5281/zenodo.14735999.
Competing interests
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Ting, M., Raich, R. & Hero, A. Sparse image reconstruction using sparse priors. In International Conference on Image Processing, Atlanta, GA, USA, 1261–1264 (2006).
2. Mohideen, R; Peter, P; Weickert, J. A systematic evaluation of coding strategies for sparse binary images. Signal Process. Image Commun.; 2021; 99, 116424. [DOI: https://dx.doi.org/10.1016/j.image.2021.116424]
3. Tibshirani, R. Regression Shrinkage and Selection Via the Lasso. J. R. Stat. Soc. B Met.; 1996; 58, pp. 267-288. [DOI: https://dx.doi.org/10.1111/j.2517-6161.1996.tb02080.x]
4. Kadowaki, T; Nishimori, H. Quantum annealing in the transverse Ising model. Phys. Rev. E; 1998; 58, pp. 5355-5363. [DOI: https://dx.doi.org/10.1103/PhysRevE.58.5355]
5. Farhi, E., Goldstone, J., Gutmann, S. & Sipser, M. Quantum computation by adiabatic evolution. ArXiv:quant-ph/0001106. (2000).
6. Johnson, MW et al. Quantum annealing with manufactured spins. Nature; 2011; 473, pp. 194-198. [DOI: https://dx.doi.org/10.1038/nature10012] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21562559]
7. Lanting, T. et al. Entanglement in a quantum annealing processor. Phys. Rev. X 4, https://doi.org/10.1103/physrevx.4.021041 (2014).
8. King, A. D. et al. Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets. Nat. Commun.12, 1113 (2021).
9. Tasseff, B et al. On the emerging potential of quantum annealing hardware for combinatorial optimization. J. Heuristics.; 2024; 30, pp. 325-358. [DOI: https://dx.doi.org/10.1007/s10732-024-09530-5]
10. King, A. D. et al. Computational supremacy in quantum simulation 2403.00910 (2024).
11. Bauza, H. M. & Lidar, D. A. Scaling advantage in approximate optimization with quantum annealing 2401.07184 (2024).
12. Davies, M et al. Advancing neuromorphic computing with Loihi: a survey of results and Outlook. Proc. IEEE; 2021; 109, pp. 911-934. [DOI: https://dx.doi.org/10.1109/JPROC.2021.3067593]
13. Schuman, CD et al. Opportunities for neuromorphic computing algorithms and applications. Nat. Comput. Sci.; 2022; 2, pp. 10-19. [DOI: https://dx.doi.org/10.1038/s43588-021-00184-y] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38177712]
14. Henke, K., Pelofske, E., Hahn, G. & Kenyon, G. Sampling binary sparse coding QUBO models using a spiking neuromorphic processor. In Proceedings of the 2023 International Conference on Neuromorphic Systems, ICONS ’23, https://doi.org/10.1145/3589737.3606003 (ACM, 2023).
15. Pelofske, E. Dataset for comparing quantum annealing and spiking neuromorphic computing for sampling binary sparse coding QUBO problems https://doi.org/10.5281/zenodo.14735999 (2025).
16. Tropp, JA; Gilbert, AC. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory; 2007; 53, pp. 4655-4666. [DOI: https://dx.doi.org/10.1109/TIT.2007.909108]
17. Henke, K., Kenyon, G. T. & Migliori, B. Machine Learning in a Post Moore’s Law World: Quantum vs. Neuromorphic Substrates. In 2020 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI), 74–77 (2020).
18. Henke, K., Migliori, B. & Kenyon, G. T. Alien vs. Predator: Brain Inspired Sparse Coding Optimization on Neuromorphic and Quantum Devices. In 2020 International Conference on Rebooting Computing (ICRC), 26–33 (2020).
19. Rozell, CJ; Johnson, DH; Baraniuk, RG; Olshausen, BA. Sparse coding via thresholding and local competition in neural circuits. Neural Comput.; 2008; 20, pp. 2526-2563. [DOI: https://dx.doi.org/10.1162/neco.2008.03-07-486] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18439138]
20. Henke, K; Kenyon, GT; Migliori, B. Fast Post-Hoc normalization for brain inspired sparse coding on a neuromorphic device. IEEE Trans. Parallel Distrib. Syst.; 2022; 33, pp. 302-309. [DOI: https://dx.doi.org/10.1109/TPDS.2021.3068777]
21. Nguyen, N. T. & Kenyon, G. T. Image classification using quantum inference on the d-wave 2x. In 2018 IEEE International Conference on Rebooting Computing (ICRC), 1-7, https://doi.org/10.1109/ICRC.2018.8638596 (IEEE, 2018).
22. Romano, Y. et al. Quantum Sparse Coding (2022).
23. Ide, N. & Ohzeki, M. Sparse Signal Reconstruction with QUBO Formulation in l0-regularized Linear Regression (2022).
24. Ayanzadeh, R., Mousavi, S., Halem, M. & Finin, T. Quantum annealing based binary compressive sensing with matrix uncertainty (2019).
25. D-Wave Systems. D-Wave Documentation: Error Sources for Problem Representation https://docs.dwavesys.com/docs/latest/c_qpu_ice.html (2024).
26. IBM ILOG CPLEX. V12.10.0: User’s manual for CPLEX. Int. Bus. Mach. Corp.; 2019; 46, 157.
27. Xiao, H., Rasul, K. & Vollgraf, R. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms http://arxiv.org/abs/1708.07747 (2017).
28. Aimone, JB et al. A review of non-cognitive applications for neuromorphic computing. Neuromorphic Comput. Eng.; 2022; 2, 032003. [DOI: https://dx.doi.org/10.1088/2634-4386/ac889c]
29. Guéry-Odelin, D; Muga, JG; Ruiz-Montero, MJ; Trizac, E. Nonequilibrium solutions of the Boltzmann equation under the action of an external force. Phys. Rev. Lett.; 2014; 112, 180602. [DOI: https://dx.doi.org/10.1103/PhysRevLett.112.180602] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24856683]
30. Santoro, GE; Tosatti, E. Optimization using quantum mechanics: quantum annealing through adiabatic evolution. J. Phys. A Math. Gen.; 2006; 39, R393. [DOI: https://dx.doi.org/10.1088/0305-4470/39/36/R01]
31. Morita, S; Nishimori, H. Mathematical foundation of quantum annealing. J. Math. Phys.; 2008; 49, 125210. [DOI: https://dx.doi.org/10.1063/1.2995837]
32. Das, A; Chakrabarti, BK. Colloquium: quantum annealing and analog quantum computation. Rev. Mod. Phys.; 2008; 80, 1061. [DOI: https://dx.doi.org/10.1103/RevModPhys.80.1061]
33. Hauke, P; Katzgraber, HG; Lechner, W; Nishimori, H; Oliver, WD. Perspectives of quantum annealing: methods and implementations. Rep. Prog. Phys.; 2020; 83, 054401. [DOI: https://dx.doi.org/10.1088/1361-6633/ab85b8] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32235066]
34. King, AD et al. Coherent quantum annealing in a programmable 2,000 qubit Ising chain. Nat. Phys.; 2022; 18, pp. 1324-1328. [DOI: https://dx.doi.org/10.1038/s41567-022-01741-6]
35. Harris, R et al. Phase transitions in a programmable quantum spin glass simulator. Science; 2018; 361, pp. 162-165. [DOI: https://dx.doi.org/10.1126/science.aat2025] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30002250]
36. Boixo, S; Albash, T; Spedalieri, FM; Chancellor, N; Lidar, DA. Experimental signature of programmable quantum annealing. Nat. Commun.; 2013; 4, 2067. [DOI: https://dx.doi.org/10.1038/ncomms3067] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/23811779]
37. Venturelli, D. et al. Quantum optimization of fully connected spin glasses. Phys. Rev. X 5, https://doi.org/10.1103/physrevx.5.031040 (2015).
38. Boixo, S et al. Evidence for quantum annealing with more than one hundred qubits. Nat. Phys.; 2014; 10, pp. 218-224. [DOI: https://dx.doi.org/10.1038/nphys2900]
39. Lucas, A. Ising formulations of many NP problems. Front. Phys.; 2014; 2, pp. 1-27. [DOI: https://dx.doi.org/10.3389/fphy.2014.00005]
40. Yarkoni, S., Raponi, E., Schmitt, S. & Bäck, T. Quantum annealing for industry applications: introduction and review. Rep. Prog. Phys.85, 104001 (2022).
41. D-Wave Systems. D-Wave Documentation: QPU-Specific Characteristics https://docs.dwavesys.com/docs/latest/doc_physical_properties.html (2024).
42. Boothby, K., Bunyk, P., Raymond, J. & Roy, A. Next-generation topology of D-wave quantum processors (2020).
43. Zbinden, S., Bärtschi, A., Djidjev, H. & Eidenbenz, S. Embedding algorithms for quantum annealers with chimera and pegasus connection topologies. In International Conference on High Performance Computing, 187–206 (2020).
44. Dattani, N., Szalay, S. & Chancellor, N. Pegasus: the second connectivity graph for large-scale quantum annealing hardware (2019).
45. Choi, V. Minor-embedding in adiabatic quantum computation: I. the parameter setting problem. Quantum Inf. Process.; 2008; 7, pp. 193-209. [DOI: https://dx.doi.org/10.1007/s11128-008-0082-9]
46. Choi, V. Minor-embedding in adiabatic quantum computation: II. minor-universal graph design. Quantum Inf. Process.; 2011; 10, pp. 343-353. [DOI: https://dx.doi.org/10.1007/s11128-010-0200-3]
47. Vinci, W; Albash, T; Paz-Silva, G; Hen, I; Lidar, DA. Quantum annealing correction with minor embedding. Phys. Rev. A; 2015; 92, 042310. [DOI: https://dx.doi.org/10.1103/PhysRevA.92.042310]
48. D-Wave Systems. D-Wave Ocean Software Documentation: Minorminer https://docs.ocean.dwavesys.com/projects/minorminer/en/latest/ (2017).
49. Pelofske, E; Hahn, G; Djidjev, HN. Parallel quantum annealing. Sci. Rep.; 2022; 12, pp. 1-11. [DOI: https://dx.doi.org/10.1038/s41598-022-08394-8]
50. Pelofske, E., Hahn, G. & Djidjev, H. N. Solving Larger Maximum Clique Problems Using Parallel Quantum Annealing. Quantum Inf. Process. 22, https://doi.org/10.1007/s11128-023-03962-x (2023).
51. Albash, T; Vinci, W; Mishra, A; Warburton, PA; Lidar, DA. Consistency tests of classical and quantum models for a quantum annealer. Phys. Rev. A; 2015; 91, 042314. [DOI: https://dx.doi.org/10.1103/PhysRevA.91.042314]
52. D-Wave Systems. D-Wave Tiling https://dwave-systemdocs.readthedocs.io/en/samplers/reference/composites/tiling.html (2024).
53. D-Wave Systems. D-Wave Ocean Software Documentation: Uniform Torque Compensation https://docs.ocean.dwavesys.com/projects/system/en/stable/reference/generated/dwave.embedding.chain_strength.uniform_torque_compensation.html (2018).
54. D-Wave Systems. D-Wave Ocean Software Documentation: Annealing Implementation and Controls https://docs.dwavesys.com/docs/latest/c_qpu_annealing.html (2023).
55. Lanting, T. et al. Probing environmental spin polarization with superconducting flux qubits 2003.14244 (2020).
56. D-Wave Systems. D-Wave Ocean Software Documentation: Solver Parameters https://docs.dwavesys.com/docs/latest/c_solver_parameters.html.
57. Yamashiro, Y; Ohkuwa, M; Nishimori, H; Lidar, DA. Dynamics of reverse annealing for the fully connected p-spin model. Phys. Rev. A; 2019; 100, 052321. [DOI: https://dx.doi.org/10.1103/PhysRevA.100.052321]
58. Bando, Y; Yip, K-W; Chen, H; Lidar, DA; Nishimori, H. Breakdown of the weak-coupling limit in quantum annealing. Phys. Rev. Appl.; 2022; 17, 054033. [DOI: https://dx.doi.org/10.1103/PhysRevApplied.17.054033]
59. Arai, S; Ohzeki, M; Tanaka, K. Mean field analysis of reverse annealing for code-division multiple-access multiuser detection. Phys. Rev. Res.; 2021; 3, 033006. [DOI: https://dx.doi.org/10.1103/PhysRevResearch.3.033006]
60. Pelofske, E; Hahn, G; Djidjev, H. Initial state encoding via reverse quantum annealing and H-gain features. IEEE Trans. Quantum Eng.; 2023; 4, pp. 1-21. [DOI: https://dx.doi.org/10.1109/TQE.2023.3319586]
61. King, A. D. et al. Quantum annealing simulation of out-of-equilibrium magnetization in a spin-chain compound. PRX Quantum 2 (2021).
62. King, AD et al. Observation of topological phenomena in a programmable lattice of 1,800 qubits. Nature; 2018; 560, pp. 456-460. [DOI: https://dx.doi.org/10.1038/s41586-018-0410-x] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30135527]
63. Kairys, P et al. Simulating the Shastry-Sutherland Ising model using quantum annealing. PRX Quantum; 2020; 1, 020320. [DOI: https://dx.doi.org/10.1103/PRXQuantum.1.020320]
64. Lopez-Bezanilla, A. et al. Kagome qubit ice (2023).
65. Kirkpatrick, S; Gelatt Jr, CD; Vecchi, MP. Optimization by simulated annealing. Science; 1983; 220, pp. 671-680. [DOI: https://dx.doi.org/10.1126/science.220.4598.671] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/17813860]
66. D-Wave Systems. D-Wave Simulated Annealing Github https://github.com/dwavesystems/dwave-neal (2024).
67. D-Wave Systems. D-Wave Simulated Annealing Github Sampler https://github.com/dwavesystems/dwave-neal/blob/master/neal/sampler.py (2024).
Copyright Nature Publishing Group Dec 2025