The Berezinskii–Kosterlitz–Thouless (BKT) transition is a typical topological phase transition between the bound and unbound states of vortices and antivortices, which is not accompanied by spontaneous symmetry breaking. It is known that the BKT transition is difficult to detect from thermodynamic quantities such as specific heat and magnetic susceptibility because of the absence of an anomaly in the free energy and significant finite-size effects. Therefore, methods based on statistical mechanics that are commonly used to discuss phase transitions cannot be easily applied to the BKT transition. In recent years, several attempts to detect the BKT transition using machine-learning methods based on image-recognition techniques have been reported. However, it has turned out that the detection is difficult even for machine-learning methods because of the absence of trivial order parameters and symmetry breaking. Most of the methods proposed so far require prior knowledge about the models and/or preprocessing of input data for feature engineering, which is problematic in terms of general applicability. In this article, we introduce recent developments of machine-learning methods to detect the BKT transitions in several spin models. Specifically, we demonstrate the success of two new methods, the temperature-identification method and the phase-classification method, in detecting the BKT transitions in the q-state clock model and the XXZ model. This progress is expected to elevate machine-learning-based studies of spin models beyond simple benchmark tests toward the exploration of new physics.

Classical spin models such as Ising models, clock models, XY models, and XXZ models are of interest not only as theoretical models of magnetic materials but also as fundamental mathematical models for various natural and social phenomena and information-processing architectures.1,2) Their statistical-mechanical properties have been studied intensively for many years. In particular, the phase-transition phenomena exhibited by these spin models are important topics in statistical mechanics and condensed-matter physics, which have been investigated intensively not only from the viewpoint of pure theoretical science but also for the exploration of functional properties of magnets.
Some of these spin models exhibit topologically characterized phase transitions with no symmetry breaking in addition to ordinary phase transitions accompanied by symmetry breaking. One example is the Berezinskii–Kosterlitz–Thouless (BKT) transition,3–6) which appears in some two-dimensional classical spin models, e.g., XY models, XXZ models, and q-state clock models. In this article, we discuss recently proposed powerful and efficient machine-learning methods to detect this phase transition.7,8) The BKT transition can be regarded as a vortex–antivortex binding–unbinding phase transition, which occurs between the paramagnetic phase at high temperatures and the BKT phase at low temperatures [see Figs. 1(a)–1(c)]. In the BKT phase, vortices and antivortices formed by spin vectors always appear in pairs. In contrast, they appear individually in the paramagnetic phase. According to the Mermin–Wagner theorem, long-range order with symmetry breaking is forbidden at finite temperatures for models with continuous spin variables, such as the XY models and the classical Heisenberg models, in one and two dimensions.9) Hence, the BKT transition is an important phase transition in low-dimensional spin systems. Its research has a history of over 50 years since the first prediction by Berezinskii3,4) and the subsequent theoretical confirmation by Kosterlitz and Thouless.5,6)
Figure 1. (Color online) (a, b) Schematics of a vortex and an antivortex formed by spin vectors in classical lattice spin models. (c) Schematic phase diagram of the BKT transition. With decreasing temperature, a phase transition occurs from the high-temperature paramagnetic phase to the low-temperature BKT phase. Individual vortices and antivortices appear in the paramagnetic phase, whereas they always appear as vortex–antivortex pairs in the BKT phase. The BKT transition is a topological phase transition, which is not accompanied by spontaneous symmetry breaking.
However, the BKT transition is known to be more difficult to treat by numerical analyses based on statistical mechanics, such as Monte Carlo methods, than ordinary phase transitions. The reason is that thermodynamic quantities such as specific heat and magnetic susceptibility, which are usually used for the theoretical detection of ordinary phase transitions, cannot easily be used to determine the BKT-transition point. For example, in the case of ordinary phase transitions, the specific heat shows an anomaly such as a peak or a jump at the transition point, but in the case of the BKT transition, the anomaly of the specific heat does not coincide with the transition point. On the other hand, it is, in principle, possible to determine the BKT-transition point from the anomaly of the magnetic susceptibility. However, because of significant finite-size effects with logarithmic correction terms, extrapolation to the thermodynamic limit based on conventional finite-size scaling is difficult.
Under these circumstances, several attempts have recently been made to detect BKT transitions using machine-learning methods rather than conventional methods based on statistical mechanics. However, it is not necessarily easy to detect BKT transitions even for machine-learning methods. This is because the methods are basically based on image-recognition techniques,10) whereas the BKT transition does not have any characteristic order parameter or symmetry breaking. In fact, there are only limited examples of successful detections of BKT transitions by machine learning.7,8,11–21) Moreover, it should be mentioned that most of the successful methods are supervised, while only a few unsupervised methods have succeeded in the detection. One successful unsupervised method is based on dimensionality reduction using a diffusion-map technique.18) This situation is in sharp contrast to the case of ordinary phase transitions with symmetry breaking in, e.g., Ising models and Potts models, whose phase transitions can easily be detected by machine-learning methods because there are clear changes in the order parameters through the transitions. For ordinary phase transitions, not only supervised methods but also many unsupervised methods have successfully been used for detection.22–33)
Furthermore, most of the methods that succeeded in detecting BKT transitions in fact require preprocessed data rather than raw spin-configuration data as input. For example, spatial configurations of vortices, histograms of spin orientations, or spin correlation functions must be prepared from the raw spin-configuration data in advance. In other words, these methods depend on an arbitrary selection of feature quantities of the transitions and phases and thus are not applicable to general cases. A more critical problem is that these methods require prior knowledge about the model, such as the number of phases and approximate transition temperatures. As can easily be recognized, this is a contradictory requirement in the sense that we have to know the properties and behaviors of the model before the investigation, which is hardly compatible with the exploration of new physics. For machine-learning methods based on pattern-classification or image-recognition techniques, it is difficult to detect BKT transitions, which involve neither symmetry breaking nor well-defined order parameters. The establishment of powerful and versatile machine-learning methods for the BKT transitions has therefore been highly demanded.
Meanwhile, there are no reports of experimental observation of the BKT transition in real magnetic materials, despite the half-century-long research history.34–38) One of the reasons is that we do not have a general theoretical framework to discuss the possibility of the BKT transition in complex spin models describing real materials. In other words, we do not have means to theoretically search for materials that exhibit the BKT transition using microscopic models that contain several complex interactions (e.g., Dzyaloshinskii–Moriya interactions and biquadratic interactions) and several magnetic anisotropies in addition to ordinary exchange interactions. To further deepen and develop the research on topological phase transitions, experimental studies using real magnetic materials that exhibit BKT transitions are essentially important. For this purpose, the construction of machine-learning methods to detect topological phases and topological phase transitions in microscopic spin models for real magnetic materials is an urgent issue.
In this article, we present recent attempts to construct machine-learning methods aimed at the detection of BKT transitions in spin models. In particular, we focus on two recently proposed methods, named the phase-classification method and the temperature-identification method. We demonstrate their efficiency by applying them to two important spin models that exhibit BKT transitions, i.e., the q-state clock models and the XXZ models. It is shown that the methods can detect the BKT transitions and determine their transition temperatures with high accuracy in most cases, requiring only minimal prior knowledge about the models and minimal data preprocessing for feature engineering. The rest of this article is structured as follows. We introduce properties of the BKT transition in Sect. 2 and several classical spin models that exhibit BKT transitions in Sect. 3. In Sect. 4, we explain recently proposed powerful machine-learning methods to detect the BKT transitions. In Sects. 5–7, we demonstrate that the methods can detect both the BKT transitions and the second-order phase transitions in the q-state clock models and the XXZ models on square lattices. In Sect. 8, we compare these new machine-learning methods with conventional methods based on statistical mechanics, such as the Monte Carlo methods, as well as with previously proposed machine-learning methods, focusing on their advantages and disadvantages. Section 9 is devoted to the summary and conclusion.
In 1971, Berezinskii argued that the spatial dependence of the spin correlation function differs between low and high temperatures in the XY model.3,4) On this basis, he predicted the presence of a novel type of phase transition in this model. Subsequently, Kosterlitz and Thouless confirmed the presence of the predicted phase transition and elucidated its physical properties using renormalization-group analyses.5,6) This phase transition is called the BKT transition after the names of these three physicists. The BKT transition is a kind of topological phase transition beyond the framework of Landau theory based on order parameters. The BKT transition is defined as a phase transition between the bound and unbound states of vortices and antivortices and does not exhibit symmetry breaking. Here, vortices and antivortices in classical spin models are formed by spin vectors. In the case of a square lattice, as we trace the four corner sites of each plaquette in a counterclockwise manner, a vortex (antivortex) is defined as a spin configuration at the four sites rotating in a counterclockwise (clockwise) sense. At low temperatures, vortices and antivortices always appear as vortex–antivortex pairs, and this state is called the BKT phase. On the contrary, a paramagnetic phase appears at high temperatures, in which there exist many individual or unbound vortices and antivortices. The BKT phase exhibits behaviors distinct from ordinary phases with broken symmetry. For example, the spin correlation function always decays as a power law with respect to distance, i.e., the correlation length is divergent.
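As a minimal illustration of this plaquette-based definition, the following NumPy sketch counts vortices and antivortices from an array of in-plane spin angles θ on an L × L lattice with periodic boundary conditions. The function names and the lattice size are our own illustrative choices, not code from the original works.

```python
import numpy as np

def wrap_angle(d):
    """Map an angle difference onto the interval [-pi, pi)."""
    return (d + np.pi) % (2.0 * np.pi) - np.pi

def vorticity(theta):
    """Winding number of every plaquette (+1: vortex, -1: antivortex, 0: none)
    for an L x L array of spin angles with periodic boundary conditions.
    The four corner sites of each plaquette are traced counterclockwise."""
    t00 = theta
    t10 = np.roll(theta, -1, axis=0)                       # neighbor in +x
    t11 = np.roll(np.roll(theta, -1, axis=0), -1, axis=1)  # neighbor in +x, +y
    t01 = np.roll(theta, -1, axis=1)                       # neighbor in +y
    winding = (wrap_angle(t10 - t00) + wrap_angle(t11 - t10)
               + wrap_angle(t01 - t11) + wrap_angle(t00 - t01))
    return np.rint(winding / (2.0 * np.pi)).astype(int)

# Example: count vortices and antivortices in a random configuration
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(32, 32))
v = vorticity(theta)
print("vortices:", int(np.sum(v == 1)), "antivortices:", int(np.sum(v == -1)))
```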
It is known that the BKT transition is difficult to detect by computational methods based on statistical mechanics because of the absence of symmetry breaking, the absence of an anomaly of the free energy at the transition point, and significant finite-size effects with logarithmic correction terms. In particular, it is difficult to apply methods usually employed to study ordinary phase transitions. For example, the uniform magnetization is zero at all temperatures in the thermodynamic limit and cannot be exploited as an order parameter or a signal of the BKT transition. Since the free energy does not have an anomaly with respect to temperature, the specific heat shows no anomaly at the BKT-transition point. On the other hand, the magnetic susceptibility shows a divergence or jump at the BKT-transition point and thus might, in principle, be used to determine the transition temperature. However, the finite-size effects, which contain logarithmic correction terms, are significant, and thus the determination of the BKT-transition temperature requires an enormous computational cost.
A physical quantity called the helicity modulus γ is often used to detect the BKT transition and to identify its transition temperature by methods based on statistical mechanics.39) This quantity describes the stiffness of the spin order and is defined as the second-order response coefficient of the free energy with respect to a global twist of the spin alignment. In the thermodynamic limit, the helicity modulus exhibits a discontinuous jump from 0 to \(2T_{\rm BKT}/\pi\) at the BKT-transition temperature \(T_{\rm BKT}\), which is known as the universal jump.40)
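In symbols, and up to normalization conventions that may differ from the original articles, the definition and the universal jump can be written as follows (a reference sketch in our own notation):

```latex
\[
  \gamma \;=\; \frac{1}{N}\,
  \left.\frac{\partial^{2} F(\phi)}{\partial \phi^{2}}\right|_{\phi = 0},
  \qquad
  \lim_{T \to T_{\mathrm{BKT}}^{-}} \gamma(T) \;=\; \frac{2\,T_{\mathrm{BKT}}}{\pi},
  \qquad
  \gamma(T) = 0 \ \ \text{for } T > T_{\mathrm{BKT}},
\]
```

where F(φ) is the free energy of an N-spin system subjected to a uniform twist φ between neighboring spins along one lattice direction.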
The XY model, the q-state clock model, and the classical XXZ model are typical examples of classical spin models which exhibit the BKT transition. We explain each of these models in the following.
The Hamiltonian of the XY model is given by
\[ \mathcal{H} = -J \sum_{\langle i,j \rangle} \cos(\theta_i - \theta_j), \]
where J > 0 is the ferromagnetic exchange coupling, \(\theta_i \in [0, 2\pi)\) is the angle of the planar (two-component) spin vector at site i, and the sum runs over nearest-neighbor site pairs.
The Hamiltonian of the q-state clock model is given by
\[ \mathcal{H} = -J \sum_{\langle i,j \rangle} \cos(\theta_i - \theta_j), \qquad \theta_i = \frac{2\pi n_i}{q} \quad (n_i = 0, 1, \ldots, q-1), \]
that is, the same form as the XY model but with the spin angles restricted to q discrete values.
In the discrete limit of q = 2, the clock model reduces to the Ising model, whereas in the continuous limit of q → ∞ it reduces to the XY model. On the square lattice, the model exhibits a single second-order phase transition for q ≤ 4, whereas for q ≥ 5 it exhibits two successive BKT transitions with an intermediate critical (BKT) phase [see Figs. 2(a) and 2(b)].
Figure 2. (Color online) Schematic temperature phase diagrams of the q-state clock model on a square lattice. (a) When q ≤ 4, a single second-order phase transition occurs from the high-temperature paramagnetic phase to the low-temperature ordered phase. (b) When q ≥ 5, two successive BKT transitions occur, and an intermediate BKT (critical) phase appears between the high-temperature paramagnetic phase and the low-temperature ordered phase.
The Hamiltonian of the classical XXZ model is given by
\[ \mathcal{H} = -J \sum_{\langle i,j \rangle} \left( S_i^x S_j^x + S_i^y S_j^y + \Delta\, S_i^z S_j^z \right), \]
where \(\mathbf{S}_i = (S_i^x, S_i^y, S_i^z)\) is a classical spin vector of unit length at site i and Δ is the exchange-anisotropy parameter. The model is Ising-like (easy-axis) for Δ > 1, XY-like (easy-plane) for Δ < 1, and reduces to the isotropic Heisenberg model for Δ = 1.
Figure 3. (Color online) Schematic temperature phase diagrams of the XXZ model on a square lattice. (a) For the Ising-like case with Δ > 1, a single second-order phase transition occurs from the high-temperature paramagnetic phase to the low-temperature ordered phase. (b) For the XY-like case with Δ < 1, a BKT transition occurs from the high-temperature paramagnetic phase to the low-temperature BKT phase.
In this section, we introduce two recently proposed versatile machine-learning architectures to detect phase transitions, namely, the phase-classification (PC) method and the temperature-identification (TI) method. These neural networks can be easily implemented using the machine-learning library KERAS.54)
In the PC method, the detection of phase transitions in a model to be studied is performed by solving a classification problem for other models.55–57) Figure 4 shows an example of the basic structure of a neural network used in the PC method, which is composed of one input layer, two or three hidden layers, and one output layer. The rectified linear unit (ReLU) function is used as the activation function in the hidden layers, while the softmax function is used in the output layer.
Figure 4. (Color online) Basic structure of a neural network used in the PC method. The output is a vector whose dimension is the number of possible phases. Each component of the vector represents the probability that the input spin or vortex configuration belongs to the corresponding phase. A lattice system of L × L sites is assumed for the input spin or vortex configurations.
The ReLU function is defined by ReLU(x) = max(0, x); it returns zero for a negative input and the input value itself otherwise.
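As a concrete illustration, a minimal Keras sketch of such a network is shown below. The layer widths, the optimizer, and the lattice size are our own illustrative choices and are not taken from Refs. 7 and 8; the input dimension 3L² assumes flattened three-component spin configurations, as explained later in this section.

```python
from tensorflow import keras
from tensorflow.keras import layers

L = 32                   # linear lattice size (illustrative choice)
n_input = 3 * L * L      # flattened three-component spin configuration
n_phases = 2             # e.g., paramagnetic phase and ordered (or BKT) phase

# Fully connected classifier: ReLU hidden layers, softmax output over phases
pc_model = keras.Sequential([
    layers.Input(shape=(n_input,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_phases, activation="softmax"),
])
pc_model.compile(optimizer="adam",
                 loss="categorical_crossentropy",
                 metrics=["accuracy"])
# pc_model.fit(x_train, y_train, epochs=10, batch_size=64)  # one-hot labels y_train
```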
For the training data, spin configurations of, e.g., the Ising model can be exploited when we attempt to detect a second-order phase transition from the paramagnetic to the ferromagnetic phase. On the other hand, when we attempt to detect a BKT transition, spin or vortex configurations of, e.g., the XY model can be exploited. These spin and vortex configurations are generated by the Monte Carlo thermalization technique. When two phases may appear, as in these examples, the answer for the output is set as a two-component one-hot vector, e.g., (1, 0) for one phase and (0, 1) for the other.
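For completeness, a minimal single-spin-update Metropolis sketch for thermalizing the q-state clock model is shown below. It is a plain illustration under our own parameter choices, not the production code used to generate the data in Refs. 7 and 8.

```python
import numpy as np

def metropolis_clock(L=16, q=8, T=1.0, J=1.0, n_sweeps=200, seed=0):
    """Single-spin-update Metropolis thermalization of the q-state clock model
    on an L x L square lattice with periodic boundary conditions.
    Returns the angles theta_i = 2*pi*n_i/q after n_sweeps sweeps."""
    rng = np.random.default_rng(seed)
    n = rng.integers(0, q, size=(L, L))              # clock variables n_i
    for _ in range(n_sweeps):
        for _ in range(L * L):                       # one sweep = L*L update attempts
            i, j = rng.integers(0, L, size=2)
            n_new = rng.integers(0, q)
            nbrs = [((i + 1) % L, j), ((i - 1) % L, j),
                    (i, (j + 1) % L), (i, (j - 1) % L)]
            e_old = -J * sum(np.cos(2 * np.pi * (n[i, j] - n[a, b]) / q) for a, b in nbrs)
            e_new = -J * sum(np.cos(2 * np.pi * (n_new - n[a, b]) / q) for a, b in nbrs)
            if rng.random() < np.exp(-(e_new - e_old) / T):  # Metropolis acceptance
                n[i, j] = n_new
    return 2.0 * np.pi * n / q

theta = metropolis_clock(L=16, q=8, T=0.9)           # one training configuration
```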
After training, the spin or vortex configurations of the model to be studied are fed to the neural network. For input data generated at various temperatures T, the network outputs a vector of phase probabilities for each configuration. These outputs are averaged over many configurations at each temperature, and the temperature at which the averaged components for the two phases cross each other is identified as the transition temperature.
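A minimal sketch of this analysis step, assuming a trained two-phase classifier like the one above and using linear interpolation to locate the crossing, might look as follows (the helper name and the data layout are our own assumptions):

```python
import numpy as np

def transition_from_crossing(temps, probs):
    """Estimate the transition temperature as the crossing point of the two
    averaged output components (linear interpolation between the two
    bracketing temperature points).
    temps : 1D array of temperatures in ascending order
    probs : array of shape (n_T, 2); averaged softmax outputs at each temperature"""
    diff = probs[:, 0] - probs[:, 1]
    k = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0][0]   # first sign change
    t1, t2, d1, d2 = temps[k], temps[k + 1], diff[k], diff[k + 1]
    return t1 - d1 * (t2 - t1) / (d2 - d1)

# Usage with the trained classifier `pc_model` (see the sketch above) and a list
# `configs_by_temperature` of input batches, one batch per temperature:
# probs = np.array([pc_model.predict(x, verbose=0).mean(axis=0)
#                   for x in configs_by_temperature])
# t_c = transition_from_crossing(np.array(temperature_list), probs)
```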
Finally, we briefly explain how to prepare the input data. The input spin configurations are represented by vectors whose components are the spin components at all sites. For example, the spin configurations for three-dimensional classical spins on an L × L lattice are represented by vectors with 3L² components.
A unique feature of the PC method is that it does not use spin or vortex configurations of the target model, i.e., the model whose phase transition is to be detected, for training the neural network. Instead, only spin and vortex configurations of models whose phase-transition properties are well known, such as the Ising model and the XY model, are used for training. This method enables us to investigate phase transitions of unknown models using a well-known model as a stepping stone.
The TI method detects phase transitions of a spin model through executing a temperature-estimation task by a neural network.22,24) Figure 5 shows an example of the basic structure of a neural network used in the TI method. Here, we consider a neural network consisting of one input layer, three hidden layers, and one output layer. The ReLU function is used as the activation function in the first and third hidden layers, and the softmax function in the second hidden layer and the output layer. The inputs of the neural network are spin or vortex configurations. The output is an N-component vector, where N is the number of temperature points at which the input configurations are generated.
Figure 5. (Color online) Basic structure of a neural network used in the TI method. The output is a vector whose dimension is the number of temperature points. Each component of the vector represents the probability that the input spin or vortex configuration was generated at the corresponding temperature. A lattice system of L × L sites is assumed for the input spin or vortex configurations.
The training data are spin or vortex configurations generated by the Monte Carlo thermalization technique. The answers for the outputs are vectors representing the temperature at which the input spin or vortex configuration was generated. Specifically, if the spin or vortex configuration is generated at the n-th temperature point, the answer is an N-component one-hot vector whose n-th component is unity and whose other components are zero.
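A minimal Keras sketch of a TI network with this layer structure and of the one-hot temperature labels is given below; the layer widths, the number of temperature points, and the lattice size are illustrative assumptions of ours.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

L = 32                   # linear lattice size (illustrative choice)
n_input = 3 * L * L      # flattened spin or vortex configuration
N_temps = 64             # number of temperature points (illustrative choice)

# Layer structure following the description in the text: ReLU in the first and
# third hidden layers, softmax in the second hidden layer and the output layer.
ti_model = keras.Sequential([
    layers.Input(shape=(n_input,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="softmax"),
    layers.Dense(128, activation="relu"),
    layers.Dense(N_temps, activation="softmax"),
])
ti_model.compile(optimizer="adam", loss="categorical_crossentropy")

def temperature_label(n, N=N_temps):
    """One-hot answer vector for a configuration generated at the n-th temperature."""
    y = np.zeros(N)
    y[n] = 1.0
    return y
```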
After the training, we focus on the weight matrix W connecting the last hidden layer and the output layer of the optimized neural network. When the elements of W are visualized as a heat map, the pattern of the heat map changes at temperatures corresponding to phase transitions, which enables us to detect the transitions and to evaluate the transition temperatures.
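Continuing the sketch above, the weight matrix of the last Dense layer can be extracted from the trained model and visualized as a heat map, for example as follows (the plotting settings are arbitrary choices of ours):

```python
import matplotlib.pyplot as plt

# Weight matrix W connecting the last hidden layer and the output layer of the
# trained TI network defined above (the bias vector b is not used here).
W, b = ti_model.layers[-1].get_weights()   # W has shape (n_hidden, N_temps)

plt.imshow(W.T, aspect="auto", cmap="viridis")
plt.xlabel("node index of the last hidden layer")
plt.ylabel("output node (temperature index)")
plt.colorbar(label="weight")
plt.show()
```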
To analyze the pattern change of the heat map quantitatively, a correlation function of the weight-matrix elements associated with different output (temperature) nodes is introduced. An abrupt change of this correlation function as a function of temperature signals a phase transition and is used to evaluate the transition temperature.
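The precise definition of the correlation function used in Refs. 7 and 8 is not reproduced here; as one illustrative stand-in, the normalized overlap (cosine similarity) between the weight vectors attached to two output temperature nodes can be computed from the matrix W extracted above:

```python
import numpy as np

def weight_correlation(W, a, b):
    """Illustrative correlation between the weight vectors of output
    (temperature) nodes a and b: their normalized overlap (cosine similarity).
    This is an assumed stand-in, not necessarily the definition used in the
    original works."""
    wa, wb = W[:, a], W[:, b]
    return float(wa @ wb / (np.linalg.norm(wa) * np.linalg.norm(wb)))

# Correlation of every temperature node with the lowest-temperature node;
# an abrupt change as a function of the node index reflects a change in the
# learned weight pattern.
# corr = [weight_correlation(W, 0, k) for k in range(W.shape[1])]
```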
A unique feature of the TI method is that it does not require prior knowledge of the phase transitions of the target model, e.g., the transition temperatures and the number of possible phases, for training. The answers of the outputs for the training data are vectors representing the temperature at which each input spin or vortex configuration is generated. Namely, the answers are computational conditions of the Monte Carlo thermalization calculations and are thus known. Another unique feature is that the neural network of the TI method is only trained; no inference on new input data is performed after the training.
To demonstrate the TI method introduced in the previous section, we discuss the detection of a single second-order phase transition and successive BKT transitions in the q-state clock models.
We first discuss the detection of the single second-order phase transition in the four-state (q = 4) clock model.
Figure 6. (Color online) (a) Heat map of the weight matrix connecting the last hidden layer and the output layer of the neural network in the TI method for the four-state (q = 4) clock model.
We next discuss the detection of the two BKT transitions in the eight-state (q = 8) clock model.
Figure 7. (Color online) (a) Heat map of the weight matrix connecting the last hidden layer and the output layer of the neural network in the TI method for the eight-state (q = 8) clock model.
According to the theory developed by Kosterlitz and Thouless, the correlation length ξ diverges exponentially toward the BKT-transition point as \(\xi \propto \exp\left(b/\sqrt{T/T_{\rm BKT} - 1}\right)\), in contrast to the power-law divergence at ordinary second-order transitions. As a consequence, the transition temperatures evaluated for finite-size systems approach the value in the thermodynamic limit only logarithmically with the system size, and a scaling form such as \(T(L) = T_{\rm BKT} + c/(\ln L)^2\) is used to extrapolate the finite-size results to L → ∞.
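A minimal sketch of such an extrapolation with SciPy, assuming the logarithmic scaling form above and using placeholder data points (not the actual results of Refs. 7 and 8), is:

```python
import numpy as np
from scipy.optimize import curve_fit

def bkt_scaling(L, T_bkt, c):
    """Assumed BKT finite-size scaling form T(L) = T_bkt + c / (ln L)^2."""
    return T_bkt + c / np.log(L) ** 2

# Placeholder data: transition temperatures estimated for several system sizes
# (these numbers are illustrative and are not results of Refs. 7 and 8).
L_vals = np.array([16, 24, 32, 48, 64], dtype=float)
T_vals = np.array([1.05, 1.01, 0.99, 0.97, 0.96])

popt, pcov = curve_fit(bkt_scaling, L_vals, T_vals, p0=[0.9, 1.0])
print("extrapolated T_BKT =", popt[0])
```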
Figure 8. (Color online) System-size scaling of the two transition temperatures of the eight-state (q = 8) clock model.
Here we discuss the detection of the second-order phase transition in the Ising-like XXZ model by the PC method.
Figure 9(a) shows the temperature profiles of the averaged outputs of the neural network in the PC method.
Figure 9. (Color online) (a) Temperature profiles of the averaged outputs of the neural network in the PC method applied to the Ising-like XXZ model.
We then discuss the detection of the second-order phase transition in the Ising-like XXZ model by the TI method.
Figure 10. (Color online) (a) Heat map of the weight matrix connecting the last hidden layer and the output layer of the neural network in the TI method. The neural network is trained using the spin-configuration data of the Ising-like XXZ model.
We discuss the detection of the BKT transition in the XY-like XXZ model by the PC method.
Figure 11(a) shows the temperature profiles of the averaged outputs of the neural network in the PC method.
Figure 11. (Color online) (a) Temperature profiles of the averaged outputs of the neural network in the PC method applied to the XY-like XXZ model.
Finally, we discuss the detection of the BKT transition in the XY-like XXZ model by the TI method.
Figure 12. (Color online) (a) Heat map of the weight matrix connecting the last hidden layer and the output layer of the convolutional neural network in the TI method. The neural network is trained using the vortex-configuration data of the XY-like XXZ model.
To quantify the pattern changes, the correlation function of the weight-matrix elements introduced in Sect. 4 is calculated as a function of temperature.
According to the temperature profile of the vortex density calculated using the Monte Carlo method (not shown),8) it seemingly begins to rise from zero at a temperature around the points A and A′.
For the machine-learning detection of BKT transitions, numerous methods and their demonstrations have been reported so far. In Table V, previous methods and their features are summarized. According to this table, most of the previous methods are based on supervised learning. They require prior knowledge of the model, feature engineering of the data, or both. Some methods need to know approximate values of the transition temperatures of the model in advance to prepare the training data. Namely, they use spin or vortex configurations generated at several temperatures for training, each of which is labeled by the name of the corresponding phase according to the knowledge of the transition temperatures. This means that although our aim is to detect unknown phase transitions in a target model, we must have prior knowledge of the phase transitions to generate the training data. This aspect is problematic when we aim at exploring new physics and unknown phenomena in the model. Feature engineering can also be problematic for this purpose because it requires prior knowledge of features that characterize the model or its phase transitions.
In this sense, the two methods discussed in this article, i.e., the PC method and the TI method, have advantages over previous methods. First, both methods require no or only minimal prior knowledge about the properties of a target model, such as transition temperatures, order parameters, and the number of phases. Specifically, only the number of phases must be known in advance for the PC method, while none of these properties has to be known for the TI method to detect the BKT transitions. Second, both methods require no or only minimal preprocessing of data for feature engineering to prepare the input data. It has been demonstrated that, to detect the BKT transitions, only raw spin configurations without any preprocessing are required for the q-state clock model, while only the vortex configurations constructed from the spin configurations are required for the XXZ model. Note that some of the previous methods use vortex configurations, histograms of spin orientations, or spin correlation functions as input data, which are made from spin configurations via a feature-engineering process. Among these, the vortex configuration may be the most natural feature for BKT transitions associated with the binding and unbinding of vortices and antivortices. This quantity can be calculated from the spin-configuration data very easily.
The machine-learning methods discussed in this article have the following two advantages over the Monte Carlo methods. First, they require much less computational cost. In the Monte Carlo methods, a large number of state updates must be performed to bring the system to thermal equilibrium. Furthermore, a large number of samples is also required to accurately calculate the thermal averages of physical quantities. On the contrary, the machine-learning methods do not require such heavy computational procedures. Certainly, it is necessary to generate spin configurations for the input data using the Monte Carlo thermalization technique. However, in contrast to the Monte Carlo methods, these configurations are not required to reach true thermal equilibrium; spin configurations on the way to thermal equilibrium can also serve as input data because they already contain information on and features of the phases and phase transitions. This enables us to reduce the computational time significantly. In addition, the training of the neural network, which dominates the computational time of the machine-learning methods, typically takes one hour at most. This time scale is much shorter than that of the Monte Carlo methods, which is typically a few tens of hours or even a few days.
Second, the machine-learning methods have advantages with respect to scalability and generalizability. In the Monte Carlo methods, several physical quantities are calculated to determine the transition temperatures. Some of these quantities do not have a general formula applicable to arbitrary models, and a specific expression must be derived for each target model. The helicity modulus used to detect the BKT transition in continuous spin systems is a typical example of such quantities. In such cases, it is time-consuming to derive the expression for each model. The derivation of the model-dependent formula is very difficult especially for complex spin models of real materials, which contain magnetic anisotropies, further-neighbor exchange interactions, and/or higher-order interactions. This problem hinders systematic research and exploration of phase transitions in various materials described by different spin models. The methods discussed in this article, on the other hand, do not require such derivation procedures. Once the neural network is trained with spin or vortex configurations of a well-known model, it can be applied to other spin models with excellent scalability and generalizability.
On the other hand, there are some disadvantages. First, it is difficult to systematically improve the accuracy of the evaluations. We have demonstrated that both the PC and TI methods can obtain transition temperatures in good agreement with those obtained by the Monte Carlo method. However, it is nontrivial how to make the evaluated values as close as possible to the true values. In the Monte Carlo methods, the statistical errors can be suppressed by increasing the number of samples, but this is not the case for the machine-learning methods. A systematic methodology to improve the accuracy has not been established at present. This is because neural networks involve many hyperparameters that are intricately entangled with each other, such as the number of training data, the number of training steps, the number of hidden layers, the number of nodes in each layer, and the choices of activation functions.
Second, it is difficult to discern phase transitions from a heat map, even when it is analyzed with the correlation function introduced in Sect. 4.
As discussed above, the PC method and the TI method have both advantages and disadvantages compared with the Monte Carlo method as a conventional method based on statistical mechanics. Currently, the machine-learning methods and the Monte Carlo method are complementary to each other. To realize fully machine-learning-based research of physics in the future, it is necessary to clarify what the neural network sees and what machine learning can do through repeated trials and accumulated experience.
In this article, we have discussed the recently proposed machine-learning methods named the PC method and the TI method for the detection of BKT transitions in classical spin models. The BKT transition is a typical topological phase transition, which is not accompanied by spontaneous symmetry breaking, in contrast to conventional phase transitions within Landau's scheme. The BKT transition is difficult to detect not only by conventional methods based on statistical mechanics, such as the Monte Carlo method, but also by simple machine-learning methods. The PC method detects phase transitions in a target model by using a neural network trained with spin or vortex configurations of other well-known models so as to classify the phases correctly. On the other hand, the TI method detects phase transitions by analyzing the weight matrix connecting the last hidden layer and the output layer of an optimized neural network trained so as to correctly identify the temperature at which a given spin or vortex configuration was generated by the Monte Carlo thermalization technique. It has been demonstrated that these methods can successfully detect both the second-order phase transitions and the BKT transitions.
Several machine-learning methods for the detection of BKT transitions have been proposed and demonstrated so far, but most of them are based on supervised learning techniques. The two methods discussed in this article have several advantages over them. One advantage is that the methods require no or only minimal prior information about the model. Another advantage is the minimal requirement for feature engineering. These characteristics are suitable for the systematic exploration of novel physics and phenomena in unknown spin models. The era in which machine-learning techniques are used in earnest for physics research is approaching. We hope this article can contribute, even partially, to the development of this research field.
Acknowledgment
We are grateful to Yusuke Murata and Yasuhiro Tanaka for useful discussions. This work was supported by Japan Society for the Promotion of Science KAKENHI (Grants Nos. JP20H00337, JP23H04522, and JP24H02231), CREST, the Japan Science and Technology Agency (Grant No. JPMJCR20T1), Waseda University Grant for Special Research Projects (2023E-026, 2023C-140, 2024C-153, and 2024C-155), and Waseda Research Institute for Science and Engineering, Grant-in-Aid for Young Scientists (Early Bird).
References
- 1 G. De las Cuevas and T. S. Cubitt, Science 351, 1180 (2016). 10.1126/science.aab3326 Crossref, Google Scholar
- 2 S. Stengele, D. Drexel, and G. De las Cuevas, Proc. R. Soc. A 479, 20220553 (2023). 10.1098/rspa.2022.0553 Crossref, Google Scholar
- 3 V. L. Berezinskii, Sov. Phys. JETP 32, 493 (1971). Google Scholar
- 4 V. L. Berezinskii, Sov. Phys. JETP 34, 610 (1972). Google Scholar
- 5 J. M. Kosterlitz and D. J. Thouless, J. Phys. C 6, 1181 (1973). 10.1088/0022-3719/6/7/010 Crossref, Google Scholar
- 6 J. M. Kosterlitz, J. Phys. C 7, 1046 (1974). 10.1088/0022-3719/7/6/005 Crossref, Google Scholar
- 7 Y. Miyajima, Y. Murata, Y. Tanaka, and M. Mochizuki, Phys. Rev. B 104, 075114 (2021). 10.1103/PhysRevB.104.075114 Crossref, Google Scholar
- 8 Y. Miyajima and M. Mochizuki, Phys. Rev. B 107, 134420 (2023). 10.1103/PhysRevB.107.134420 Crossref, Google Scholar
- 9 N. D. Mermin and H. Wagner, Phys. Rev. Lett. 17, 1133 (1966). 10.1103/PhysRevLett.17.1133 Crossref, Google Scholar
- 10 T. Ohtsuki and T. Mano, J. Phys. Soc. Jpn. 89, 022001 (2020). 10.7566/JPSJ.89.022001 Link, Google Scholar
- 11 M. Richter-Laskowska, H. Khan, N. Trivedi, and M. M. Maśka, Condens. Matter Phys. 21, 33602 (2018). 10.5488/CMP.21.33602 Crossref, Google Scholar
- 12 M. J. S. Beach, A. Golubeva, and R. G. Melko, Phys. Rev. B 97, 045207 (2018). 10.1103/PhysRevB.97.045207 Crossref, Google Scholar
- 13 W. Zhang, J. Liu, and T. C. Wei, Phys. Rev. E 99, 032142 (2019). 10.1103/PhysRevE.99.032142 Crossref, Google Scholar
- 14 J. F. Rodriguez-Nieva and M. S. Scheurer, Nat. Phys. 15, 790 (2019). 10.1038/s41567-019-0512-x Crossref, Google Scholar
- 15 K. Shiina, H. Mori, Y. Okabe, and H. K. Lee, Sci. Rep. 10, 2177 (2020). 10.1038/s41598-020-58263-5 Crossref, Google Scholar
- 16 Y. Tomita, K. Shiina, Y. Okabe, and H. K. Lee, Phys. Rev. E 102, 021302(R) (2020). 10.1103/PhysRevE.102.021302 Crossref, Google Scholar
- 17 Q. H. Tran, M. Chen, and Y. Hasegawa, Phys. Rev. E 103, 052127 (2021). 10.1103/PhysRevE.103.052127 Crossref, Google Scholar
- 18 J. Wang, W. Zhang, T. Hua, and T. C. Wei, Phys. Rev. Res. 3, 013074 (2021). 10.1103/PhysRevResearch.3.013074 Crossref, Google Scholar
- 19 T. Mendes-Santos, X. Turkeshi, M. Dalmonte, and A. Rodriguez, Phys. Rev. X 11, 011040 (2021). 10.1103/PhysRevX.11.011040 Crossref, Google Scholar
- 20 J. Singh, M. S. Scheurer, and V. Arora, SciPost Phys. 11, 043 (2021). 10.21468/SciPostPhys.11.2.043 Crossref, Google Scholar
- 21 S. Haldar, S. S. Rahaman, and M. Kumar, arXiv:2205.15151. Google Scholar
- 22 A. Tanaka and A. Tomiya, J. Phys. Soc. Jpn. 86, 063001 (2017). 10.7566/JPSJ.86.063001 Link, Google Scholar
- 23 J. Carrasquilla and R. G. Melko, Nat. Phys. 13, 431 (2017). 10.1038/nphys4035 Crossref, Google Scholar
- 24 S. Arai, M. Ohzeki, and K. Tanaka, J. Phys. Soc. Jpn. 87, 033001 (2018). 10.7566/JPSJ.87.033001 Link, Google Scholar
- 25 P. Suchsland and S. Wessel, Phys. Rev. B 97, 174435 (2018). 10.1103/PhysRevB.97.174435 Crossref, Google Scholar
- 26 W. Hu, R. R. P. Singh, and R. T. Scalettar, Phys. Rev. E 95, 062122 (2017). 10.1103/PhysRevE.95.062122 Crossref, Google Scholar
- 27 S. J. Wetzel, Phys. Rev. E 96, 022140 (2017). 10.1103/PhysRevE.96.022140 Crossref, Google Scholar
- 28 L. Wang, Phys. Rev. B 94, 195105 (2016). 10.1103/PhysRevB.94.195105 Crossref, Google Scholar
- 29 P. Ponte and R. G. Melko, Phys. Rev. B 96, 205146 (2017). 10.1103/PhysRevB.96.205146 Crossref, Google Scholar
- 30 E. P. L. van Nieuwenburg, Y. H. Liu, and S. D. Huber, Nat. Phys. 13, 435 (2017). 10.1038/nphys4037 Crossref, Google Scholar
- 31 Y. H. Liu and E. P. L. van Nieuwenburg, Phys. Rev. Lett. 120, 176401 (2018). 10.1103/PhysRevLett.120.176401 Crossref, Google Scholar
- 32 C. Giannetti, B. Lucini, and D. Vadacchino, Nucl. Phys. B 944, 114639 (2019). 10.1016/j.nuclphysb.2019.114639 Crossref, Google Scholar
- 33 D. Bachtis, G. Aarts, and B. Lucini, Phys. Rev. E 102, 033303 (2020). 10.1103/PhysRevE.102.033303 Crossref, Google Scholar
- 34 A. Cuccoli, T. Roscilde, R. Vaia, and P. Verrucchi, Phys. Rev. Lett. 90, 167205 (2003). 10.1103/PhysRevLett.90.167205 Crossref, Google Scholar
- 35 A. Cuccoli, T. Roscilde, R. Vaia, and P. Verrucchi, Phys. Rev. B 68, 060402 (2003). 10.1103/PhysRevB.68.060402 Crossref, Google Scholar
- 36 Z. Hu, Z. Ma, Y.-D. Liao, H. Li, C. Ma, Y. Cui, Y. Shangguan, Z. Huang, Y. Qi, W. Li, Z. Y. Meng, J. Wen, and W. Yu, Nat. Commun. 11, 5631 (2020). 10.1038/s41467-020-19380-x Crossref, Google Scholar
- 37 U. Tutsch, B. Wolf, S. Wessel, L. Postulka, Y. Tsui, H. O. Jeschke, I. Opahle, T. Saha-Dasgupta, R. Valentí, A. Brühl, K. Remović-Langer, T. Kretz, H.-W. Lerner, M. Wagner, and M. Lang, Nat. Commun. 5, 5169 (2014). 10.1038/ncomms6169 Crossref, Google Scholar
- 38 Y. Kohama, M. Jaime, O. E. Ayala-Valenzuela, R. D. McDonald, E. D. Mun, J. F. Corbey, and J. L. Manson, Phys. Rev. B 84, 184402 (2011). 10.1103/PhysRevB.84.184402 Crossref, Google Scholar
- 39 M. E. Fisher, M. N. Barber, and D. Jasnow, Phys. Rev. A 8, 1111 (1973). 10.1103/PhysRevA.8.1111 Crossref, Google Scholar
- 40 D. R. Nelson and J. M. Kosterlitz, Phys. Rev. Lett. 39, 1201 (1977). 10.1103/PhysRevLett.39.1201 Crossref, Google Scholar
- 41 H. Weber and P. Minnhagen, Phys. Rev. B 37, 5986 (1988). 10.1103/PhysRevB.37.5986 Crossref, Google Scholar
- 42 J. V. José, L. P. Kadanoff, S. Kirkpatrick, and D. R. Nelson, Phys. Rev. B 16, 1217 (1977). 10.1103/PhysRevB.16.1217 Crossref, Google Scholar
- 43 S. Elitzur, R. B. Pearson, and J. Shigemitsu, Phys. Rev. D 19, 3698 (1979). 10.1103/PhysRevD.19.3698 Crossref, Google Scholar
- 44 J. Tobochnik, Phys. Rev. B 26, 6201 (1982). 10.1103/PhysRevB.26.6201 Crossref, Google Scholar
- 45 Y. Kumano, K. Hukushima, Y. Tomita, and M. Oshikawa, Phys. Rev. B 88, 104427 (2013). 10.1103/PhysRevB.88.104427 Crossref, Google Scholar
- 46 S. Hikami and T. Tsuneto, Prog. Theor. Phys. 63, 387 (1980). 10.1143/PTP.63.387 Crossref, Google Scholar
- 47 C. Kawabata and A. R. Bishop, Solid State Commun. 42, 595 (1982). 10.1016/0038-1098(82)90616-0 Crossref, Google Scholar
- 48 P. A. Serena, N. García, and A. Levanyuk, Phys. Rev. B 47, 5027 (1993). 10.1103/PhysRevB.47.5027 Crossref, Google Scholar
- 49 A. Cuccoli, V. Tognetti, and R. Vaia, Phys. Rev. B 52, 10221 (1995). 10.1103/PhysRevB.52.10221 Crossref, Google Scholar
- 50 A. S. T. Pires, Phys. Rev. B 54, 6081 (1996). 10.1103/PhysRevB.54.6081 Crossref, Google Scholar
- 51 K. W. Lee and C. E. Lee, Phys. Rev. B 72, 054439 (2005). 10.1103/PhysRevB.72.054439 Crossref, Google Scholar
- 52 K. Aoyama and H. Kawamura, Phys. Rev. B 100, 144416 (2019). 10.1103/PhysRevB.100.144416 Crossref, Google Scholar
- 53 A. A. Shirinyan, V. K. Kozin, J. Hellsvik, M. Pereiro, O. Eriksson, and D. Yudin, Phys. Rev. B 99, 041108(R) (2019). 10.1103/PhysRevB.99.041108 Crossref, Google Scholar
- 54 F. Chollet et al., KERAS (2015), https://github.com/keras-team/keras. Google Scholar
- 55 D. Kim and D.-H. Kim, Phys. Rev. E 98, 022138 (2018). 10.1103/PhysRevE.98.022138 Crossref, Google Scholar
- 56 D. Bachtis, G. Aarts, and B. Lucini, Phys. Rev. E 102, 053306 (2020). 10.1103/PhysRevE.102.053306 Crossref, Google Scholar
- 57 K. Fukushima and K. Sakai, Prog. Theor. Exp. Phys. 2021, 061A01 (2021). 10.1093/ptep/ptab057 Crossref, Google Scholar
- 58 L. Onsager, Phys. Rev. 65, 117 (1944). 10.1103/PhysRev.65.117 Crossref, Google Scholar
- 59 S. G. Brush, Rev. Mod. Phys. 39, 883 (1967). 10.1103/RevModPhys.39.883 Crossref, Google Scholar
- 60 Y. Tomita and Y. Okabe, Phys. Rev. B 65, 184405 (2002). 10.1103/PhysRevB.65.184405 Crossref, Google Scholar
- 61 A. F. Brito, J. A. Redinz, and J. A. Plascak, Phys. Rev. E 81, 031130 (2010). 10.1103/PhysRevE.81.031130 Crossref, Google Scholar
Author Biographies

Masahito Mochizuki is a Professor at Waseda University in Japan. He received his Ph.D. from the University of Tokyo in 2003. After postdoctoral positions at the University of Tokyo, RIKEN, and the JST-ERATO Multiferroics Project, he became a Lecturer at the University of Tokyo in 2009 and an Associate Professor at Aoyama Gakuin University in 2013 before moving to Waseda University in 2017. His research interests are theories of strongly correlated electron systems, multiferroics, spintronics, topological magnetism, and photoinduced nonequilibrium phenomena.

Yusuke Miyajima was born in Tokyo, Japan, in 1998. He is a Ph.D. student at the Department of Applied Physics, Waseda University. He is working on applications of machine learning to physics and on nature-inspired computational technologies.