J. Phys. Soc. Jpn. 92, 124002 (2023) [6 Pages]
FULL PAPERS

Hybrid Optimization Method Using Simulated-Annealing-Based Ising Machine and Quantum Annealer

Affiliations

1Department of Applied Physics and Physico-Informatics, Keio University, Yokohama 223-8522, Japan
2Department of Computer Science and Communications Engineering, Waseda University, Shinjuku, Tokyo 169-8555, Japan
3Human Biology-Microbiome-Quantum Research Center (WPI-Bio2Q), Keio University, Minato, Tokyo 108-8345, Japan
4Green Computing System Research Organization, Waseda University, Shinjuku, Tokyo 162-0042, Japan
5International Research Frontiers Initiative, Tokyo Institute of Technology, Minato, Tokyo 108-0023, Japan

Ising machines have been developed as fast and highly accurate solvers for combinatorial optimization problems. They are classified based on their internal algorithms; examples include simulated-annealing-based Ising machines (non-quantum-type Ising machines) and quantum-annealing-based Ising machines (quantum annealers). Herein, we investigate the performance of a hybrid optimization method that capitalizes on the advantages of both types, utilizing a non-quantum-type Ising machine to enhance the performance of a quantum annealer. In this method, the non-quantum-type Ising machine initially solves an original Ising model multiple times during preprocessing. Subsequently, reduced-size sub-Ising models, generated by spin fixing, are solved by a quantum annealer. The performance of the method is evaluated via simulations using simulated annealing (SA) as a non-quantum-type Ising machine and D-Wave Advantage as a quantum annealer. Additionally, we investigate the parameter dependence of the hybrid optimization method. The method outperforms both the preprocessing SA and the quantum annealer alone on fully connected random Ising models.

©2023 The Author(s)
This article is published by the Physical Society of Japan under the terms of the Creative Commons Attribution 4.0 License. Any further distribution of this work must maintain attribution to the author(s) and the title of the article, journal citation, and DOI.
1. Introduction

Ising machines have gained immense attention as accurate and efficient solvers for combinatorial optimization problems.1–7) These problems seek the optimal combination of decision variables that minimizes or maximizes an objective function while satisfying a set of constraints. Combinatorial optimization is applied in diverse fields such as machine learning,8–11) materials science,12–18) portfolio optimization,19,20) protein folding,21) traffic optimization,22–26) quantum compilers,27) and black-box optimization.12,28,29) To tackle combinatorial optimization problems using Ising machines, the problems are mapped onto a mathematical model from statistical mechanics called an Ising model, or its equivalent model called a quadratic unconstrained binary optimization (QUBO) model.30) QUBO is also referred to as the unconstrained binary quadratic programming (UBQP) problem.31) Ising-model formulations of various combinatorial optimization problems have also been compiled.32,33) Ising machines search for the ground state of an Ising model or a QUBO model, which represents the optimal solution to a problem.

An Ising model is defined on an undirected graph \(G=(V, E)\), where V and E are the sets of vertices and edges, respectively. It consists of spins, magnetic fields, and interactions. The Hamiltonian (or energy function) H of the Ising model is defined as \begin{equation} H = - \sum_{i\in V}h_{i}\sigma_{i} - \sum_{(i,j)\in E} J_{ij}\sigma_{i}\sigma_{j}, \end{equation} (1) where \(\sigma_{i}\) is the spin on the vertex \(i\in V\) and assumes a value of either −1 or +1. The coefficients of the RHS in Eq. (1), \(h_{i}\) and \(J_{ij}\), are the magnetic field on the vertex \(i\in V\) and interaction on the edge \((i, j)\in E\), respectively.
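As a concrete illustration of Eq. (1), the Hamiltonian can be evaluated directly from \(h_{i}\), \(J_{ij}\), and a spin configuration. The following Python sketch (the function name and data layout are our own, not part of the paper) computes H for an Ising model given as an array of fields and a dictionary of edge interactions:

```python
import numpy as np

def ising_energy(h, J, sigma):
    """Evaluate Eq. (1): H = -sum_i h_i sigma_i - sum_{(i,j)} J_ij sigma_i sigma_j.

    h     : length-n array of magnetic fields h_i
    J     : dict mapping edges (i, j) with i < j to interactions J_ij
    sigma : length-n array of spins, each -1 or +1
    """
    field_term = -np.dot(h, sigma)
    interaction_term = -sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items())
    return field_term + interaction_term
```

For instance, with \(h=(1,-1)\), \(J_{01}=2\), and \(\sigma=(+1,+1)\), the energy is \(-2\).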

Various internal algorithms operate Ising machines.34) This study focuses on quantum-annealing-based Ising machines (quantum annealers) and simulated-annealing-based Ising machines (non-quantum-type Ising machines). Quantum annealing (QA) is a heuristic algorithm that introduces quantum fluctuations.35) QA can have an advantage over classical methods through quantum tunneling driven by quantum fluctuations. Non-quantum-type Ising machines implement simulated annealing (SA), a popular algorithm for solving combinatorial optimization problems.36–38) The current advantage of non-quantum-type Ising machines over quantum annealers is that they can find lower-energy states of larger-scale Ising models. Previous studies have compiled the specifications of various Ising machines.39,40)

Several methods have been proposed to enhance the performance of a quantum annealer.41–48) These methods involve two steps. First, smaller subproblems are generated through variable fixing or problem decomposition, using classical techniques or a quantum annealer. Second, these subproblems are solved using a quantum annealer. In some studies, solving subproblems through this approach has produced better results than directly solving the original problem using a quantum annealer.44,48) However, to the best of our knowledge, no studies have clearly investigated hybrid methods that utilize a non-quantum-type Ising machine to enhance the performance of a quantum annealer.

In this study, we investigated the performance of a hybrid optimization method that combines the advantages of a non-quantum-type Ising machine and a quantum annealer. Furthermore, we analyzed how various parameters influence the performance of the hybrid optimization method. The rest of this paper is organized as follows. Section 2 introduces the hybrid optimization method. Section 3 describes simulations using SA as a non-quantum-type Ising machine and D-Wave Advantage 4.1 (D-Wave Advantage) as a quantum annealer, with a fully connected Ising model that can be embedded in D-Wave Advantage. The simulations evaluate the performance of the hybrid method compared with existing methods and investigate the influence of the parameters on the performance. Finally, Sect. 4 concludes our study and provides directions for further research.

2. Hybrid Optimization Method

Here, we describe in detail our hybrid optimization method inspired by the previous study.46) Figure 1 depicts the method, which comprises four steps.


Figure 1. (Color online) The scheme of the hybrid optimization method. The procedure consists of four steps: sampling, conversion, execution, and evaluation. Gray, blue, and red solid circles indicate spins with undetermined values, −1, and +1, respectively. Black arrows represent the flow. The sub-Ising model size (m), \(N_{\text{I}}\), \(N_{\text{S}}\), \(N_{\text{E}}\), and \(N_{\text{L}}\) are the method parameters.

In the sampling step, \(N_{\text{I}}\) solutions are sampled using a non-quantum-type Ising machine and registered in the solution pool. Because a non-quantum-type Ising machine shows a stochastic behavior, solutions with different spin states can be sampled. The solution pool may contain duplicate solutions. This step ends by temporarily selecting the solution with the lowest energy in the solution pool as the best solution \(X_{\text{best}}\).
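The sampling step can be sketched as follows; `solve_once` is a placeholder we introduce for one stochastic run of a non-quantum-type Ising machine, returning a (spin state, energy) pair:

```python
def sampling_step(solve_once, n_i):
    """Sampling step sketch: run a stochastic solver N_I times and keep every sample.

    solve_once() stands in for one run of a non-quantum-type Ising machine;
    it must return a (spin_state, energy) pair.
    """
    pool = [solve_once() for _ in range(n_i)]   # duplicates are allowed in the pool
    x_best = min(pool, key=lambda sol: sol[1])  # tentative best (lowest-energy) solution
    return pool, x_best
```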

In the conversion step, \(N_{\text{S}}\) solutions \((X_{1}, X_{2},\ldots, X_{N_{\text{S}}})\) are randomly selected from the solution pool. To obtain a sub-Ising model, spins are fixed using an approach called “sample persistence”43,44) applied to the \(N_{\text{S}}\) solutions. Duplicate solutions may be included among the \(N_{\text{S}}\) random selections. Sample persistence assumes that spins taking the same value across independently obtained samples are candidates to be fixed at that persistent value. The remaining spins form a group of spins that are difficult to solve.

Further, the sub-Ising models are obtained as follows. Let n and m be the number of spins in the original Ising model and that in the sub-Ising model, respectively. In the original Ising model, which is represented by Eq. (1), let \(S = \{\sigma_{1},\sigma_{2},\ldots,\sigma_{n}\}\) be the set of spins. First, \(d_{i}\) is calculated for each spin of the \(N_{\text{S}}\) solutions to determine the spins that are not fixed at −1 or +1 in the sub-Ising model. The value of \(d_{i}\) is given as \begin{equation} d_{i} = \left|\sum^{N_{\text{S}}}_{k=1}\sigma_{i, k}\right|, \end{equation} (2) where \(\sigma_{i, k}\) is the i-th spin in the k-th solution. If \(N_{\text{S}}\) is even (odd), the minimum value of \(d_{i}\) is 0 (1), corresponding to the most unstable spin. Meanwhile, the maximum value of \(d_{i}\) is \(N_{\text{S}}\), corresponding to a persistent spin. To collect the most unstable spins, the m spins with the smallest \(d_{i}\) are selected for the sub-Ising model. Let \(S'\) be the set of spins in the sub-Ising model.
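The selection of the m least persistent spins via Eq. (2) can be sketched in Python (the function name is ours; `solutions` stacks the \(N_{\text{S}}\) sampled spin states row-wise):

```python
import numpy as np

def select_unstable_spins(solutions, m):
    """Compute d_i = |sum_k sigma_{i,k}| over the N_S sampled solutions (Eq. (2))
    and return the indices of the m least persistent (smallest-d_i) spins.

    solutions : (N_S, n) array of spin states, entries -1 or +1
    """
    d = np.abs(solutions.sum(axis=0))        # d_i for each spin i
    return np.argsort(d, kind="stable")[:m]  # the m most unstable spins -> S'
```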

Next, randomly select a tentative solution from the \(N_{\text{S}}\) solutions. Let \(\tilde{\sigma} = (\tilde{\sigma}_{1},\tilde{\sigma}_{2},\ldots,\tilde{\sigma}_{n})\) be the spin state of the tentative solution, where \(\tilde{\sigma}_{i}\) is −1 or +1. Then, the Hamiltonian of the sub-Ising model is generated from \(S'\) and \(\tilde{\sigma}\) as \begin{equation} H_{\text{sub-Ising model}} = - \sum_{\substack{i\\ \sigma_{i}{\in} S'}}L_{i}\sigma_{i} - \sum_{\substack{(i,j)\\ \sigma_{i},\sigma_{j}{\in} S'}} J_{ij}\sigma_{i}\sigma_{j} + C, \end{equation} (3) where \begin{align} L_{i} = h_{i} + \sum_{\substack{j\\ \sigma_{j}{\notin} S'}} J_{ij}\tilde{\sigma}_{j}, \end{align} (4) \begin{align} C = - \sum_{\substack{i\\ \sigma_{i}{\notin} S'}} h_{i} \tilde{\sigma}_{i} - \sum_{\substack{(i,j)\\ \sigma_{i},\sigma_{j}{\notin} S'}} J_{ij}\tilde{\sigma}_{i}\tilde{\sigma}_{j}. \end{align} (5)
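Equations (3)–(5) amount to absorbing the fixed spins of the tentative solution into effective fields \(L_{i}\) and a constant C. A minimal sketch, assuming an array/dictionary layout of our own choosing (not prescribed by the paper):

```python
import numpy as np

def build_sub_ising(h, J, tilde_sigma, free_idx):
    """Build the sub-Ising model of Eqs. (3)-(5).

    h           : length-n magnetic fields
    J           : dict {(i, j): J_ij} with i < j
    tilde_sigma : tentative solution (length-n, entries -1/+1)
    free_idx    : indices of the unfixed spins S'
    Returns (L, J_sub, C): effective fields on S', interactions inside S',
    and the constant C contributed by the fixed spins.
    """
    free = set(free_idx)
    L = {i: h[i] for i in free}
    J_sub = {}
    C = -sum(h[i] * tilde_sigma[i] for i in range(len(h)) if i not in free)
    for (i, j), Jij in J.items():
        if i in free and j in free:
            J_sub[(i, j)] = Jij                         # interaction kept in the sub-model
        elif i in free:
            L[i] += Jij * tilde_sigma[j]                # Eq. (4): fixed spin j shifts the field on i
        elif j in free:
            L[j] += Jij * tilde_sigma[i]
        else:
            C -= Jij * tilde_sigma[i] * tilde_sigma[j]  # Eq. (5): both spins fixed
    return L, J_sub, C
```

By construction, evaluating the sub-Ising Hamiltonian at the tentative values of the free spins reproduces the energy of the full tentative solution, which is a useful sanity check.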

In the execution step, a quantum annealer searches for the lower-energy states of the sub-Ising model. The spins contained in \(S'\) of the tentative solution are updated with the spin state of the sub-Ising model, which becomes the new solution. Then, the conversion step and the execution step are repeated \(N_{\text{E}}\) times to obtain new \(N_{\text{E}}\) solutions. These new solutions are subsequently added to the solution pool, constructing an expanded solution pool.

In the evaluation step, \(N_{\text{I}}\) low-energy solutions are collected from the expanded solution pool to create a new solution pool. The lowest-energy solution in the new solution pool becomes the new \(X_{\text{best}}\). Then, the flow from solution pool creation to the evaluation step is iterated using the new solution pool. This is repeated until \(X_{\text{best}}\) remains the same for \(N_{\text{L}}\) consecutive iterations. Finally, the hybrid optimization method outputs the lowest-energy solution in the solution pool, \(X_{\text{best}}\), at the end of the iteration.
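Putting the four steps together, the overall iteration can be sketched as follows; the callables are placeholders for the sampling and conversion/execution steps, and the exact bookkeeping of the \(N_{\text{L}}\) counter is our reading of the text, not code from the paper:

```python
def hybrid_loop(sample_pool, convert_and_execute, n_i, n_e, n_l):
    """Skeleton of the hybrid iteration: conversion/execution repeated N_E times,
    pool truncated to the N_I lowest-energy solutions (evaluation step), loop
    terminating after X_best is unchanged N_L consecutive times.

    sample_pool()               -> list of (state, energy) pairs (sampling step)
    convert_and_execute(pool)   -> one new (state, energy) pair
    """
    pool = sample_pool()                                         # N_I initial solutions
    x_best = min(pool, key=lambda s: s[1])
    stall = 0
    while stall < n_l:
        pool += [convert_and_execute(pool) for _ in range(n_e)]  # N_E new solutions
        pool = sorted(pool, key=lambda s: s[1])[:n_i]            # evaluation step
        new_best = pool[0]
        stall = stall + 1 if new_best[1] >= x_best[1] else 0     # N_L termination counter
        x_best = min(x_best, new_best, key=lambda s: s[1])
    return x_best
```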

The differences between the algorithm described in our study and the one in the previous study46) are as follows:

Variable fixation method:

The algorithm presented in the previous study46) targeted a QUBO, where the variables take values of either 0 or 1. In this study, we modified the approach to develop a fixed-spin determination algorithm suitable for the Ising model, whose spins take values of either −1 or +1.

Loop initialization point:

In the previous study,46) the loop returned, after the evaluation step, to a point before the sampling step, which was performed by tabu search. In other words, that method used the solutions in the solution pool as initial inputs for tabu search. In contrast, we assume that the preprocessing is conducted with a non-quantum-type Ising machine. Under the SA conditions used in our study, the obtained solutions are not influenced by the initial states. Consequently, if we adopted SA in the same flow as the previous algorithm, the information carried by the solutions in the solution pool would be lost. Therefore, in our study, we moved the loop initialization point to after the sampling step.

Termination condition:

In the previous study,46) the termination condition was set when the average Hamming distance between pairs of solutions in the solution pool was less than or equal to the subproblem size. In our study, we investigate the influences of changes to this termination condition on the solution accuracy of the hybrid optimization method. Thus, to clarify the effect of parameters, we set the termination condition to when \(X_{\text{best}}\) remains unchanged for \(N_{\text{L}}\) consecutive iterations.

3. Experimental Evaluations
3.1 Setup of the experiment

To demonstrate the effectiveness of our hybrid optimization method, we considered a fully connected random Ising model, represented as a complete graph. This choice was motivated by the fact that, on a complete graph, spin fixing yields sub-Ising models with a uniform edge density regardless of which spins are fixed. Other Ising models may lead to varied graph structures or isolated spins in the sub-Ising models, making it difficult to evaluate the hybrid method consistently because the hardware's influence on D-Wave Advantage would vary. To mitigate this, we adopted a complete graph.

In addition, when the magnetic field coefficients \(h_{i}\) are all zero, Ising models display a twofold degeneracy for all states, rendering the sub-Ising model generation of the hybrid method ineffective. As a countermeasure, previous studies proposed fixing one randomly chosen spin to either −1 or +1.43,44) In this study, however, we set the magnetic field coefficients \(h_{i}\neq 0\) and assumed no trivial degeneracy. The coefficients of the magnetic fields and interactions were randomly drawn from a Gaussian distribution with a mean of zero and a standard deviation of unity. Notably, zero was excluded for both the interactions and magnetic fields.
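Instance generation as described above can be sketched as follows; redrawing exact zeros is one way to "exclude zero", and the function name and RNG seeding are our own:

```python
import numpy as np

def random_ising_instance(n, rng=None):
    """Fully connected random Ising instance as in Sect. 3: h_i and J_ij drawn
    from N(0, 1), with exact zeros redrawn (a sketch of 'zero was excluded')."""
    rng = np.random.default_rng(rng)

    def nonzero_normal(size):
        x = rng.standard_normal(size)
        while np.any(x == 0.0):  # exact zeros are measure-zero but excluded anyway
            x[x == 0.0] = rng.standard_normal(np.count_nonzero(x == 0.0))
        return x

    h = nonzero_normal(n)
    iu = np.triu_indices(n, k=1)  # complete graph: all pairs i < j
    J = dict(zip(zip(*iu), nonzero_normal(len(iu[0]))))
    return h, J
```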

In this study, we performed simulations using SA as a non-quantum-type Ising machine and D-Wave Advantage as a quantum annealer. Most non-quantum-type Ising machines employ SA as the basis of their internal algorithms.2,3,7,49–51) The SA parameters included a randomly set initial state and an initial temperature \(T_{\text{initial}}\) of \(\lceil 2v_{\text{max}}\rceil\). Here, \(v_{\text{max}}\) is the maximum value among the \(v_{i}\) defined by \(v_{i}=|h_{i}+\sum_{j}J_{ij}|\), the absolute value of the sum of the magnetic field and interactions for the i-th spin in the original Ising model. The initial temperature was set sufficiently high to allow transitions between arbitrary states at the beginning of SA. The temperature schedule was a power-law decay applied at every outer loop, given by \(T(t)=T_{\text{initial}}\times r^{t}\), where r is the cooling rate and t is the index of the outer loop. The number of outer loops was set for each simulation. The cooling rate was chosen such that the final temperature equaled 0.1, which is sufficiently low on the energy scale of the original Ising model. The number of inner loops was set to 160, the size of the original Ising model.
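The temperature schedule described here can be reproduced as follows; the indexing of the final outer loop (so that T reaches 0.1 exactly at the last loop) is our assumption:

```python
import math

def sa_schedule(h, J, outer_loops, t_final=0.1):
    """Temperature schedule from the text: T_initial = ceil(2 * v_max) with
    v_i = |h_i + sum_j J_ij|, power-law decay T(t) = T_initial * r**t, and r
    chosen so the last outer loop reaches t_final. Assumes outer_loops >= 2.
    """
    n = len(h)
    row_sum = [h[i] for i in range(n)]
    for (i, j), Jij in J.items():   # each edge contributes to both endpoints
        row_sum[i] += Jij
        row_sum[j] += Jij
    v_max = max(abs(v) for v in row_sum)
    t_initial = math.ceil(2 * v_max)
    r = (t_final / t_initial) ** (1.0 / (outer_loops - 1))  # cooling rate
    return [t_initial * r**t for t in range(outer_loops)]
```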

When using D-Wave Advantage, an original Ising model was annealed 100 times, and the best solution was selected. The other parameters of D-Wave Advantage were set to their default values:52) the annealing time was 20 µs, and the problem was rescaled such that the magnetic fields were in the range \([-2, 2]\) and the interactions were in the range \([-1, 1]\). To compare the performance with D-Wave Advantage, the number of spins in the original Ising model was set to the maximum number that can be embedded in D-Wave Advantage. The D-Wave Advantage machine used in this study had 5,627 qubits and a Pegasus graph topology,53) allowing a complete graph of 177 spins to be embedded using minor embedding.54) However, several qubits could not be used due to defects. After conducting multiple tests, a stably embeddable size of 160 spins was adopted as the original Ising model size.

3.2 Numerical experiment

First, we performed the hybrid optimization method on several original Ising models to evaluate its performance. Table I summarizes the parameters for the outer loop, sub-Ising model size (m), \(N_{\text{I}}\), \(N_{\text{S}}\), \(N_{\text{E}}\), and \(N_{\text{L}}\). D-Wave Advantage alone and the preprocessing SA alone served as references for solution accuracy. The D-Wave Advantage parameters described in Sect. 3.1 were used consistently, both within the hybrid method and when D-Wave Advantage was run independently. Figure 2 shows the energy densities of the solutions obtained by D-Wave Advantage (DW), the preprocessing SA (SA), and the hybrid optimization method (HM). The SA data use the \(X_{\text{best}}\) obtained from each solution pool in preprocessing, and the energy density is the internal energy per spin (i.e., \(H/n\)). Figure 2 also shows the number of loops spent until the completion of the hybrid method. The number of loops is the count of returns from the evaluation step to solution pool creation; for example, when \(N_{\text{L}} = 3\) and \(X_{\text{best}}\) was never updated, the number of loops was two. These data are the averages and standard deviations over ten simulations. The hybrid method achieved a higher solution accuracy than either DW or SA.


Figure 2. (Color online) (Top) Energy densities by different solvers in multiple original Ising models. White circles are the average of ten runs. Top and bottom bars denote the highest and lowest energy densities, respectively. Black diamonds denote the outliers. (Bottom) Number of loops by the hybrid optimization method. Error bars are the standard deviations in ten runs.

Table I. Parameters of SA and the hybrid optimization method. \(N_{\text{I}}\) is the number of solution samples obtained at the sampling step, \(N_{\text{S}}\) is the number of solutions selected from the solution pool to generate sub-Ising models at the conversion step, \(N_{\text{E}}\) is the number of new solutions obtained through the repetition of the conversion and execution steps, and \(N_{\text{L}}\) is the number of consecutive repetitions of the same \(X_{\text{best}}\) required to end the hybrid method. These parameters are described in Sect. 2 and Fig. 1.

Second, we evaluated the dependence of the solution accuracy on the preprocessing. In practical scenarios, it can be challenging to set suitable SA parameters for a specific problem, and thus the accuracy of SA is not guaranteed.37,38,55) The hybrid method was performed using four different preprocessings: random spin states (Random) and SA with the outer loop set to 10, 50, or 100. The parameters of the hybrid method are the same as those listed in Table I. Figure 3 shows the energy densities and numbers of loops with the different preprocessings. Regardless of the solution accuracy of the preprocessing, the hybrid method improved upon the preprocessing solutions. Even for preprocessing with extremely low solution accuracy (e.g., Random), the hybrid method achieved a certain level of solution accuracy. However, lower preprocessing accuracy required more loops to improve the solution. Conversely, when the outer loop was 100, SA and HM showed almost the same energy density, possibly because of the high accuracy of SA: several of the ten simulation trials yielded values close to the optimal solution of the problem.


Figure 3. (Color online) Performance of the hybrid optimization method for preprocessings with different solution accuracies. (Top) Energy densities of the preprocessing and the hybrid optimization method. White circles are the average of ten runs. Top and bottom bars denote the highest and lowest energy densities, respectively. Black diamonds denote outliers. The energy-density scales differ above and below the axis break. (Bottom) Number of loops by the hybrid optimization method. Error bars are standard deviations in ten runs.

Next, we performed the hybrid method with different parameter patterns to evaluate their effect on solution accuracy. Figure 4 shows the energy density and number of loops for \(N_{\text{L}}\) set from 1 to 5, with the outer loop set to 50. All other parameters are the same as those in Table I. Note that the solution pools obtained by the preprocessing SA were the same for all conditions of \(N_{\text{L}}\). In addition, Fig. 4 shows the percentage of the ten simulations in which the hybrid method improved the \(X_{\text{best}}\) of the preprocessing SA. The solutions were improved for \(N_{\text{L}}\geq 2\), although the solution with the highest energy, corresponding to the top bar or an outlier in the box plot, was not improved. As \(N_{\text{L}}\) increased, both the percentage of improved solutions and the number of loops increased. There are two possible reasons for the increased number of loops: first, a larger \(N_{\text{L}}\) requires more loops before termination; second, the solutions continue to improve as \(N_{\text{L}}\) increases, which resets the termination counter.


Figure 4. (Color online) Performance of hybrid optimization methods with different \(N_{\text{L}}\). (Top) Energy densities of preprocessing SA and the hybrid method. White circles are the average of ten runs. Top and bottom bars denote the highest and lowest energy densities, respectively. Black diamonds denote outliers. Gray squares denote the percentage of improved solutions from the preprocessing SA in the ten simulations of the hybrid optimization method. (Bottom) Number of loops by the hybrid optimization method. Error bars are standard deviations in ten runs.

Figure 5 shows the energy density and number of loops for several sets of \(N_{\text{E}}\) and \(N_{\text{S}}\). Note that the solution pools obtained by the preprocessing SA were the same for all conditions of \(N_{\text{S}}\) and \(N_{\text{E}}\), and the outer loop was set to 50. All other parameters are the same as those listed in Table I. The hybrid method improved the solution more when \(N_{\text{E}}\) was 10 or 20 than when it was 5. This is because a larger \(N_{\text{E}}\) generates more solutions per loop, giving a higher possibility of solution improvement before the loop terminates. Additionally, the effect of \(N_{\text{S}}\) was not significant when \(N_{\text{E}}\) was the same. The number of loops was higher for \(N_{\text{E}}\) of 10 and 20, which is where the improvement is observed.


Figure 5. (Color online) Performance of hybrid optimization methods with different sets of \(N_{\text{E}}\) and \(N_{\text{S}}\). (Top) Energy densities of preprocessing SA and the hybrid optimization method. White circles are the average of ten runs. Top and bottom bars denote the highest and lowest energy densities, respectively. Black diamonds denote outliers. (Bottom) Number of loops by the hybrid optimization method. Error bars are standard deviations.

Finally, we performed the hybrid method with different sub-Ising model sizes to evaluate the size dependency. Figure 6 shows the energy density and number of loops for sub-Ising model sizes (m) of 16, 40, 80, 120, and 144. Notably, the solution pools obtained by the preprocessing SA were the same for all sub-Ising model sizes. The outer loop was set to 50. All other parameters are the same as those in Table I. Constructing sub-Ising models with extremely small or large numbers of spins (e.g., a sub-Ising model size of 16 or 144) relative to the size of the original Ising model (\(n=160\)) did not improve the solution (Fig. 6). When the sub-Ising model size was small, the fixed spins dominated the sub-Ising model, leaving little freedom in the obtained solutions. By contrast, when the sub-Ising model size was large, the solution accuracy of D-Wave Advantage became the limiting factor: because the accuracy of D-Wave Advantage for the original Ising model of size 160 is low (Fig. 2), the solution accuracy decreases as the sub-Ising model size approaches this size, and the solution does not improve. These results suggest that the solution accuracy of the hybrid method depends on the sub-Ising model size.


Figure 6. (Color online) Performance of the hybrid optimization method with different sub-Ising model sizes. (Top) Energy densities of the preprocessing SA and the hybrid method. White circles are the average of ten runs. Top and bottom bars denote the highest and lowest energy densities, respectively. (Bottom) Number of loops by the hybrid optimization method. Error bars are standard deviations.

4. Conclusion and Future Work

We evaluated the performance of the hybrid optimization method, which leverages the advantages of both non-quantum-type Ising machines and quantum annealers, through simulations using SA and D-Wave Advantage. Furthermore, we investigated the relationship between various parameters and the performance of the hybrid optimization method.

The hybrid method achieved a higher solution accuracy than D-Wave Advantage alone and the preprocessing SA alone. Regardless of the preprocessing accuracy, the hybrid method improved the solution. Thus, even in practical scenarios where the solution accuracy of SA is not guaranteed, the hybrid method can still improve the solution. However, there is a trade-off between the preprocessing accuracy and the number of loops required for solution improvement. We also evaluated the effect of the hybrid method parameters on the solution accuracy. Increasing \(N_{\text{L}}\), which determines the termination condition of the hybrid method, improves the solution accuracy while simultaneously increasing the number of loops. Next, we assessed the solution accuracy of the hybrid method with multiple combinations of \(N_{\text{E}}\) and \(N_{\text{S}}\). The solution improves when \(N_{\text{E}}\) is 10 or 20 compared with when it is 5. However, appropriate sub-Ising model spins should be selected because the hybrid method depends on the sub-Ising model size. The dependence on the sub-Ising model size and on the accuracy of the preprocessing optimization method, both of which significantly affect the performance of the hybrid method, were not addressed in the previous study investigating an analogous algorithm.46) Analyzing these characteristics of the hybrid method is important for its practical application.

This method first constructs a solution pool with a non-quantum-type Ising machine and then solves \(N_{\text{E}}\) sub-Ising models using a quantum annealer. With the hybrid method, solving multiple reduced-size sub-Ising models is more beneficial than embedding the full original Ising model into D-Wave Advantage. If the sub-Ising models can be solved in parallel using parallel quantum annealing,56) the hybrid method should be accelerated; this approach should be considered in the future. In this study, we evaluated the performance of the hybrid method on original Ising models with a problem size that can be embedded in D-Wave Advantage. However, because the problem size that can be embedded in a quantum annealer is limited, several methods42,45–47) have been proposed to solve larger problems. Therefore, the performance of the hybrid method should be evaluated in the future using an original Ising model that cannot be embedded in a quantum annealer. Although the performance evaluation of the hybrid method was conducted via simulations with SA and D-Wave Advantage, the hybrid method should also be evaluated using an actual Ising machine in the future. Furthermore, a theoretical investigation of the origin of the improved solution accuracy of the hybrid method is still lacking. As such an investigation should contribute to parameter setting and performance improvement, it remains future work.

Acknowledgments

This article is based on the results obtained from a project, JPNP16007, commissioned by the New Energy and Industrial Technology Development Organization (NEDO). The computations in this work were partially performed using the facilities of the Supercomputer Center, the Institute for Solid State Physics, The University of Tokyo. S.T. was supported in part by JSPS KAKENHI (Grant Numbers JP21K03391, JP23H05447) and JST Grant Number JPMJPF2221. The Human Biology-Microbiome-Quantum Research Center (Bio2Q) is supported by the World Premier International Research Center Initiative (WPI), MEXT, Japan.


References

1. M. W. Johnson, M. H. Amin, S. Gildert, T. Lanting, F. Hamze, N. Dickson, R. Harris, A. J. Berkley, J. Johansson, P. Bunyk, E. M. Chapple, C. Enderud, J. P. Hilton, K. Karimi, E. Ladizinsky, N. Ladizinsky, T. Oh, I. Perminov, C. Rich, M. C. Thom, E. Tolkacheva, C. J. S. Truncik, S. Uchaikin, J. Wang, B. Wilson, and G. Rose, Nature 473, 194 (2011). 10.1038/nature10012
2. M. Yamaoka, C. Yoshimura, M. Hayashi, T. Okuyama, H. Aoki, and H. Mizuno, IEEE J. Solid-State Circuits 51, 303 (2016). 10.1109/JSSC.2015.2498601
3. M. Aramon, G. Rosenberg, E. Valiante, T. Miyazawa, H. Tamura, and H. G. Katzgraber, Front. Phys. 7, 48 (2019). 10.3389/fphy.2019.00048
4. T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, O. Tadanaga, H. Takenouchi, K. Aihara, K.-i. Kawarabayashi, K. Inoue, S. Utsunomiya, and H. Takesue, Science 354, 603 (2016). 10.1126/science.aah4243
5. H. Goto, K. Tatsumura, and A. R. Dixon, Sci. Adv. 5, eaav2372 (2019). 10.1126/sciadv.aav2372
6. M. Maezawa, G. Fujii, M. Hidaka, K. Imafuku, K. Kikuchi, H. Koike, K. Makise, S. Nagasawa, H. Nakagawa, M. Ukibe, and S. Kawabata, J. Phys. Soc. Jpn. 88, 061012 (2019). 10.7566/JPSJ.88.061012
7. K. Yamamoto, K. Kawamura, K. Ando, N. Mertig, T. Takemoto, M. Yamaoka, H. Teramoto, A. Sakai, S. Takamaeda-Yamazaki, and M. Motomura, IEEE J. Solid-State Circuits (2020). 10.1109/JSSC.2020.3027702
8. H. Neven, V. S. Denchev, M. Drew-Brook, J. Zhang, W. G. Macready, and G. Rose, Quantum 4 (2009).
9. M. H. Amin, Phys. Rev. A 92, 052323 (2015). 10.1103/PhysRevA.92.052323
10. M. H. Amin, E. Andriyash, J. Rolfe, B. Kulchytskyy, and R. Melko, Phys. Rev. X 8, 021050 (2018). 10.1103/PhysRevX.8.021050
11. D. O’Malley, V. V. Vesselinov, B. S. Alexandrov, and L. B. Alexandrov, PLOS ONE 13, e0206653 (2018). 10.1371/journal.pone.0206653
12. K. Kitai, J. Guo, S. Ju, S. Tanaka, K. Tsuda, J. Shiomi, and R. Tamura, Phys. Rev. Res. 2, 013319 (2020). 10.1103/PhysRevResearch.2.013319
13. K. Utimula, T. Ichibha, G. I. Prayogo, K. Hongo, K. Nakano, and R. Maezono, Sci. Rep. 11, 7261 (2021). 10.1038/s41598-021-86274-3
14. T. Inoue, Y. Seki, S. Tanaka, N. Togawa, K. Ishizaki, and S. Noda, Opt. Express 30, 43503 (2022). 10.1364/OE.476839
15. K. Endo, Y. Matsuda, S. Tanaka, and M. Muramatsu, Sci. Rep. 12, 10794 (2022). 10.1038/s41598-022-14735-4
16. H. Sampei, K. Saegusa, K. Chishima, T. Higo, S. Tanaka, Y. Yayama, M. Nakamura, K. Kimura, and Y. Sekine, JACS Au 3, 991 (2023). 10.1021/jacsau.3c00018
17. A. Tučs, F. Berenger, A. Yumoto, R. Tamura, T. Uzawa, and K. Tsuda, ACS Med. Chem. Lett. 14, 577 (2023). 10.1021/acsmedchemlett.2c00487
18. K. Hatakeyama-Sato, Y. Uchima, T. Kashikawa, K. Kimura, and K. Oyaizu, RSC Adv. 13, 14651 (2023). 10.1039/D3RA01982A
19. G. Rosenberg, P. Haghnegahdar, P. Goddard, P. Carr, K. Wu, and M. L. De Prado, IEEE J. Sel. Top. Signal Process. 10, 1053 (2016). 10.1109/JSTSP.2016.2574703
20. K. Tanahashi, S. Takayanagi, T. Motohashi, and S. Tanaka, J. Phys. Soc. Jpn. 88, 061010 (2019). 10.7566/JPSJ.88.061010
21. A. Perdomo-Ortiz, N. Dickson, M. Drew-Brook, G. Rose, and A. Aspuru-Guzik, Sci. Rep. 2, 571 (2012). 10.1038/srep00571
22. F. Neukart, G. Compostella, C. Seidel, D. von Dollen, S. Yarkoni, and B. Parney, Front. ICT 4, 29 (2017). 10.3389/fict.2017.00029
23. H. Irie, G. Wongpaisarnsin, M. Terabe, A. Miki, and S. Taguchi, International Workshop on Quantum Technology and Optimization Problems, 2019, p. 145. 10.1007/978-3-030-14082-3_13
24. S. Bao, M. Tawada, S. Tanaka, and N. Togawa, International Symposium on VLSI Design, Automation and Test (VLSI-DAT), 2021. 10.1109/VLSI-DAT52063.2021.9427355
25. S. Bao, M. Tawada, S. Tanaka, and N. Togawa, IEEE International Intelligent Transportation Systems Conference (ITSC), 2021, p. 3704. 10.1109/ITSC48978.2021.9564593
26. Y. Mukasa, T. Wakaizumi, S. Tanaka, and N. Togawa, IEICE Trans. Inf. Syst. E104-D, 1592 (2021). 10.1587/transinf.2020EDP7264
27. S. Naito, Y. Hasegawa, Y. Matsuda, and S. Tanaka, arXiv:2303.02830.
28. S. Izawa, K. Kitai, S. Tanaka, R. Tamura, and K. Tsuda, Phys. Rev. Res. 4, 023062 (2022). 10.1103/PhysRevResearch.4.023062
29. Y. Seki, R. Tamura, and S. Tanaka, arXiv:2209.01016.
30. S. Tanaka, Y. Matsuda, and N. Togawa, 25th Asia and South Pacific Design Automation Conference (ASP-DAC), 2020, p. 659. 10.1109/ASP-DAC47756.2020.9045126
31. G. Kochenberger, J.-K. Hao, F. Glover, M. Lewis, Z. Lü, H. Wang, and Y. Wang, J. Comb. Optim. 28, 58 (2014). 10.1007/s10878-014-9734-0
32. A. Lucas, Front. Phys. 2, 5 (2014). 10.3389/fphy.2014.00005
33. S. Tanaka, R. Tamura, and B. K. Chakrabarti, Quantum Spin Glasses, Annealing and Computation (Cambridge University Press, Cambridge, U.K., 2017).
  • 34 N. Mohseni, P. L. McMahon, and T. Byrnes, Nat. Rev. Phys. 4, 363 (2022). 10.1038/s42254-022-00440-8 CrossrefGoogle Scholar
  • 35 T. Kadowaki and H. Nishimori, Phys. Rev. E 58, 5355 (1998). 10.1103/PhysRevE.58.5355 CrossrefGoogle Scholar
  • 36 S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, Science 220, 671 (1983). 10.1126/science.220.4598.671 CrossrefGoogle Scholar
  • 37 D. S. Johnson, C. R. Aragon, L. A. McGeoch, and C. Schevon, Oper. Res. 37, 865 (1989). 10.1287/opre.37.6.865 CrossrefGoogle Scholar
  • 38 D. S. Johnson, C. R. Aragon, L. A. McGeoch, and C. Schevon, Oper. Res. 39, 378 (1991). 10.1287/opre.39.3.378 CrossrefGoogle Scholar
  • 39 D. Oku, M. Tawada, S. Tanaka, and N. Togawa, IEEE Trans. Comput. 71, 223 (2022). 10.1109/TC.2020.3045112 CrossrefGoogle Scholar
  • 40 M. Kowalsky, T. Albash, I. Hen, and D. A. Lidar, Quantum Sci. Technol. 7, 025008 (2022). 10.1088/2058-9565/ac4d1b CrossrefGoogle Scholar
  • 41 G. Rosenberg, M. Vazifeh, B. Woods, and E. Haber, Comput. Optim. Appl. 65, 845 (2016). 10.1007/s10589-016-9844-y CrossrefGoogle Scholar
  • 42 M. Booth, S. P. Reinhardt, and A. Roy, D-Wave Technical Report Series 14-1006A-A, 2017. [https://docs.ocean.dwavesys.com/projects/qbsolv/en/latest/index.html]. Google Scholar
  • 43 H. Karimi, G. Rosenberg, and H. G. Katzgraber, Phys. Rev. E 96, 043312 (2017). 10.1103/PhysRevE.96.043312 CrossrefGoogle Scholar
  • 44 H. Karimi and G. Rosenberg, Quantum Inf. Process. 16, 166 (2017). 10.1007/s11128-017-1615-x CrossrefGoogle Scholar
  • 45 H. Irie, H. Liang, T. Doi, S. Gongyo, and T. Hatsuda, Sci. Rep. 11, 8426 (2021). 10.1038/s41598-021-87676-z CrossrefGoogle Scholar
  • 46 Y. Atobe, M. Tawada, and N. Togawa, IEEE Trans. Comput. 71, 2606 (2022). 10.1109/TC.2021.3138629 CrossrefGoogle Scholar
  • 47 E. Pelofske, G. Hahn, and H. Djidjev, J. Signal Process. Syst. 93, 405 (2021). 10.1007/s11265-020-01550-1 CrossrefGoogle Scholar
  • 48 E. Pelofske, G. Hahn, and H. N. Djidjev, Quantum Inf. Process. 22, 219 (2023). 10.1007/s11128-023-03962-x CrossrefGoogle Scholar
  • 49 C. Yoshimura, M. Hayashi, T. Okuyama, and M. Yamaoka, Int. J. Netw. Comput. 7, 154 (2017). 10.15803/ijnc.7.2_154 CrossrefGoogle Scholar
  • 50 T. Okuyama, M. Hayashi, and M. Yamaoka, IEEE International Conference on Rebooting Computing, ICRC 2017 - Proceedings, 2017. 10.1109/ICRC.2017.8123652 CrossrefGoogle Scholar
  • 51 Fixstars Amplify Annealing Engine: Fixstars Amplify [https://amplify.fixstars.com/en/]. Google Scholar
  • 52 Solver Parameters - D-Wave System Documentation [https://docs.dwavesys.com/docs/latest/c_solver_parameters.html]. Google Scholar
  • 53 N. Dattani, S. Szalay, and N. Chancellor, arXiv:1901.07636. Google Scholar
  • 54 The Advantage System: Performance Update [https://www.dwavesys.com/media/kjtlcemb/14-1054a-a_advantage_system_performance_update.pdf]. Google Scholar
  • 55 L. Ingber, Math. Comput. Modelling 18, 29 (1993). 10.1016/0895-7177(93)90204-C CrossrefGoogle Scholar
  • 56 E. Pelofske, G. Hahn, and H. N. Djidjev, Sci. Rep. 12, 4499 (2022).10.1038/s41598-022-08394-8 CrossrefGoogle Scholar