From Disordered Systems Wiki
<strong>Goal: </strong> The spin glass transition: from the experiments showing an anomaly in the magnetic susceptibility to the order parameter of the transition. We will discuss the arguments linked to extreme value statistics.
= Spin glass Transition =


== Experiments ==


<!-- TODO: discuss the copper samples doped with manganese; find two figures, one for the susceptibility and one for the specific heat, as evidence of the thermodynamic transition. -->
 
Spin glass behavior was first observed in experiments with non-magnetic metals (such as Cu, Fe, Au, etc.) doped with a small percentage of magnetic impurities, typically Mn. At low doping levels, the magnetic moments of Mn atoms interact via the Ruderman–Kittel–Kasuya–Yosida (RKKY) interaction. This interaction has a random sign due to the random spatial distribution of Mn atoms within the non-magnetic metal. A freezing temperature, <math>T_f</math>, separates the high-temperature paramagnetic phase from the low-temperature spin glass phase:
*Above <math>T_f</math>: The magnetic susceptibility follows the standard Curie law, <math>\chi(T) \sim 1/T</math>.
* Below <math>T_f</math>: Strong metastability emerges, leading to differences between the field-cooled (FC) and zero-field-cooled (ZFC) protocols:
 
(i) In the ZFC protocol, the susceptibility decreases with decreasing temperature, <math>T</math>.
 
(ii) In the FC protocol, the susceptibility freezes at <math>T_f</math>, remaining constant at <math>\chi_{FC}(T<T_f) = \chi(T_f)</math>.
 
Understanding whether these data reveal a true thermodynamic transition and determining the nature of this new "glassy" phase remains an open challenge to this day. However, in the early 1980s, spin glass models were successfully solved within the mean-field approximation. In this limit, it is possible to determine the phase diagram and demonstrate the existence of a glassy phase where the entropy vanishes at a finite temperature. Furthermore, a condensation of the Gibbs measure onto a few configurations is observed.


==Edwards Anderson model==
The first significant theoretical attempt to describe spin glasses is the Edwards-Anderson model. For simplicity, we will consider the Ising version of this model.
Ising spins take two values, <math>\sigma_i = \pm 1</math>, and are located on a lattice with <math>N</math> sites, indexed by <math>i = 1, 2, \ldots, N</math>.
The energy of the system is expressed as a sum over nearest neighbors <math>\langle i, j \rangle</math>:
<center><math> E = - \sum_{\langle i, j \rangle} J_{ij} \sigma_i \sigma_j. </math></center>


Edwards and Anderson proposed studying this model with couplings <math>J_{ij}</math> that are independent and identically distributed (i.i.d.) random variables with '''zero mean'''.
The coupling distribution is denoted by <math>\pi(J)</math>, and the average over the couplings, referred to as the disorder average, is indicated by an overline:
<center><math> \overline{J} \equiv \int dJ \, J \, \pi(J) = 0. </math></center>
It is crucial to assume <math>\overline{J}=0</math>; otherwise the model displays ferromagnetic or antiferromagnetic order. We will consider two specific coupling distributions:
* Gaussian couplings: <math>\pi(J) = \exp\left(-J^2 / 2\right) / \sqrt{2 \pi}</math>.
* Coin-toss couplings: <math>J = \pm 1</math>, each chosen with probability <math>1/2</math>.
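The Hamiltonian above is easy to sketch numerically. Below is a minimal Python example with Gaussian couplings on a periodic square lattice; the lattice size, the seed, and the helper name <code>ea_energy</code> are illustrative choices, not part of the model definition.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 16                                    # linear size; N = L * L spins
sigma = rng.choice([-1, 1], size=(L, L))  # Ising spins sigma_i = +/- 1

# One i.i.d. Gaussian coupling per nearest-neighbour bond: each site owns
# the bonds to its right and down neighbours (periodic boundaries), so
# every bond <i,j> is counted exactly once.
J_right = rng.normal(size=(L, L))
J_down = rng.normal(size=(L, L))

def ea_energy(sigma, J_right, J_down):
    """Edwards-Anderson energy E = -sum_<i,j> J_ij sigma_i sigma_j."""
    e = -np.sum(J_right * sigma * np.roll(sigma, -1, axis=1))
    e -= np.sum(J_down * sigma * np.roll(sigma, -1, axis=0))
    return e

E = ea_energy(sigma, J_right, J_down)
print(E)
```

Note that the energy is invariant under a global spin flip <math>\sigma_i \to -\sigma_i</math>, a <math>Z_2</math> symmetry that the sketch can be used to verify.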


== Edwards Anderson order parameter==
Since <math>\overline{J} = 0</math>, the model does not exhibit spatial magnetic order, such as ferromagnetic or antiferromagnetic order. Instead, the idea is to distinguish between two phases:
* Paramagnetic phase: Configurations are explored with all possible spin orientations.
* Spin glass phase: Spin orientations are random but frozen (i.e., immobile).
The glass phase is characterized by long-range correlations in time, despite the absence of long-range correlations in space. The order parameter for this phase is:
<center><math> q_{EA} = \lim_{t \to \infty} \lim_{N \to \infty} \frac{1}{N} \sum_{i} \sigma_i(0) \sigma_i(t), </math></center> where <math>q_{EA}</math> measures the overlap of the spin configuration with itself after a long time.
In the paramagnetic phase, <math>q_{EA} = 0</math>, while in the spin glass phase, <math>q_{EA} > 0</math>.
This raises the question of whether the transition at <math>T_f</math> is truly thermodynamic in nature. Indeed, in the definition of the Edwards-Anderson (EA) parameter, time seems to play a role, and the magnetic susceptibility does not diverge at the freezing temperature <math>T_f</math>.
The divergence of the magnetic susceptibility in ferromagnets is due to the fact that the magnetization <math>M = \sum_i \sigma_i</math> serves as the order parameter, distinguishing the ordered and disordered phases. However, in the spin glass model, magnetization is zero in both phases and the order parameter  is <math>q_{EA}</math>.
It can be shown that the associated susceptibility corresponds to the nonlinear susceptibility:
<center><math> \frac{M}{H} = \chi + a_3 H^2 + a_5 H^4 + \ldots </math></center> where <math>\chi</math> is the linear susceptibility, and <math>a_3, a_5, \ldots</math> are higher-order coefficients. Experiments have demonstrated that <math>a_3</math> and <math>a_5</math> exhibit singular behavior, providing experimental evidence for the existence of a thermodynamic transition at <math>T_f</math>.


== The SK model ==
Sherrington and Kirkpatrick considered the fully connected version of the model with Gaussian couplings:
<center> <math>
  E= - \sum_{i,j} \frac{J_{ij}}{ \sqrt{N}} \sigma_i \sigma_j
</math></center>
At the inverse temperature <math>\beta</math>, the partition function of the model is
<center> <math>
  Z = \sum_{\alpha=1}^{2^N} e^{-\beta E_\alpha}
</math></center>
Here <math> E_\alpha </math> is the energy associated to the configuration <math> \alpha </math>.
This model presents a thermodynamic transition.
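For small <math>N</math>, the partition function can be evaluated by brute-force enumeration of the <math>2^N</math> configurations. A minimal sketch (the system size, seed, and temperature are arbitrary illustrative values):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
N = 10
J = rng.normal(size=(N, N))  # i.i.d. Gaussian couplings J_ij

# All 2^N spin configurations, one per row.
configs = np.array(list(product([-1, 1], repeat=N)))

# SK energies E_alpha = -(1/sqrt(N)) sum_{i,j} J_ij s_i s_j
E = -np.einsum('ai,ij,aj->a', configs, J, configs) / np.sqrt(N)

def partition_function(beta):
    """Z = sum_alpha exp(-beta * E_alpha)."""
    return np.sum(np.exp(-beta * E))

print(partition_function(1.0))
# Sanity check: at beta = 0 every configuration has weight 1, so Z = 2^N.
assert partition_function(0.0) == 2**N
```

The enumeration grows as <math>2^N</math>, which is precisely why the mean-field analysis of the next sections is needed.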


= Random energy model =


The solution of the SK model is difficult. To make progress, we first study the random energy model (REM) introduced by B. Derrida.
This model neglects the correlations between the <math> M=2^N </math> configurations and assumes the <math> E_{\alpha} </math> are iid variables.
* Show that the energy distribution is
<center><math> p(E_\alpha) =\frac{1}{\sqrt{2 \pi \sigma_M^2}}e^{-\frac{E_{\alpha}^2}{2 \sigma_M^2}}</math></center>
and determine <math>\sigma_M^2 = N = \log M/\log 2</math>.


We provide different solutions of the Random Energy Model (REM). The first one focuses on the statistics of the smallest energies among those associated to the <math>M=2^N</math> configurations. For this, we need to become familiar with the main results of extreme value statistics of iid variables.
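The claim <math>\sigma_M^2 = N</math> can be checked numerically: for a fixed spin configuration, the SK energy is a sum of <math>N^2</math> independent coupling terms of weight <math>1/\sqrt{N}</math>, so its variance over the disorder is <math>\simeq N</math>. A sketch (the sample sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 40
sigma = rng.choice([-1, 1], size=N)  # one fixed spin configuration

def sk_energy(J, sigma):
    """SK energy E = -(1/sqrt(N)) sum_{i,j} J_ij sigma_i sigma_j."""
    return -sigma @ J @ sigma / np.sqrt(len(sigma))

# Sample the energy of this configuration over many disorder realizations.
samples = np.array([sk_energy(rng.normal(size=(N, N)), sigma)
                    for _ in range(20000)])
print(samples.mean(), samples.var())  # approximately 0 and N
```

The empirical mean is close to zero and the empirical variance close to <math>N</math>, consistent with the Gaussian <math>p(E_\alpha)</math> above.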
 
 
 
== Extreme Value Statistics ==

Consider the <math>M = 2^N</math> energies <math>E_1, \dots, E_M</math> as independent and identically distributed (i.i.d.) random variables drawn from a distribution <math>p(E)</math> (Gaussian in the case of the REM). It is useful to introduce the cumulative probability of finding an energy smaller than ''E'':

<center> <math>P^<(E) = \int_{-\infty}^E dx \, p(x)</math> </center>

The complementary probability of finding an energy larger than ''E'' is:
<center> <math>P^>(E) = \int_E^{+\infty} dx \, p(x) = 1 - P^<(E)</math> </center>

We define:
<center> <math>E_{\min} = \min(E_1, \dots, E_M)</math> </center>

Our goal is to compute the cumulative distribution:
<center> <math>Q_M(\epsilon) \equiv \text{Prob}(E_{\min} > \epsilon)</math> </center>
for large ''M''. To achieve this, we rely on three key relations:

* '''First relation (exact):'''
<center> <math>Q_M(\epsilon) = \left(P^>(\epsilon)\right)^M</math> </center>
* '''Second relation (estimate):''' The typical value of the minimum energy, <math>a_M</math>, satisfies:
<center> <math>P^<(a_M) = \frac{1}{M}</math> </center>
* '''Third relation (approximation):''' For <math>M \to \infty</math>, we have:
<center> <math>Q_M(\epsilon) = e^{M \log(1 - P^<(\epsilon))} \sim \exp\left(-M P^<(\epsilon)\right)</math> </center>

The third relation is useful only for <math> \epsilon \approx a_M </math>, where <math>M P^<(\epsilon)</math> is of order one, so we expand around this value. A direct Taylor expansion of <math>P^<</math> does not ensure that probabilities remain positive. Hence, we define <math> P^<(\epsilon)=\exp(-A(\epsilon)) </math>, remark that <math> A(\epsilon) </math> is a decreasing function with <math>A(a_M)=\log M</math>, and propose the expansion
<center><math>  A(\epsilon) = \log M + A'(a_M)(\epsilon -a_M) = \log M - y_N(\epsilon -a_M). </math></center>
Using this expansion, we derive the famous Gumbel distribution:
<center><math>Q_M(\epsilon) \sim \exp\left(-e^{ y_N (\epsilon-a_M)}\right) </math> </center>
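The three relations can be tested numerically for Gaussian variables. The sketch below compares the empirical <math>Q_M(\epsilon)</math>, estimated from many realizations of the minimum, with <math>\exp(-M P^<(\epsilon))</math>; the values of <math>M</math>, <math>R</math>, and <math>\epsilon</math> are arbitrary illustrative choices.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
M = 2000    # number of i.i.d. standard Gaussian energies
R = 5000    # independent realizations of the minimum
minima = rng.normal(size=(R, M)).min(axis=1)

def P_less(e):
    """P^<(e): cumulative distribution of a standard Gaussian."""
    return 0.5 * (1.0 + math.erf(e / math.sqrt(2.0)))

# Third relation: Q_M(eps) = Prob(E_min > eps) ~ exp(-M * P^<(eps))
for eps in (-3.6, -3.3, -3.0):
    empirical = np.mean(minima > eps)
    theory = math.exp(-M * P_less(eps))
    print(f"eps={eps}: empirical={empirical:.3f}, theory={theory:.3f}")
```

The agreement is already good at <math>M = 2000</math>, even though the convergence of Gaussian extremes to the Gumbel limit is known to be slow.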


=== Gaussian Case and Beyond ===
For a Gaussian distribution, the asymptotic tail of <math>P^<(E)</math> as <math>E \to -\infty</math> is:
<center> <math>P^<(E) = \int_{-\infty}^E dx \, p(x) \sim \frac{\sigma}{\sqrt{2 \pi} |E|} e^{-\frac{E^2}{2 \sigma^2}}</math> </center>
Thus, the typical value of the minimum energy is:
<center> <math>E_{\min}^{\text{typ}} = a_M = -\sigma \sqrt{2 \log M} + \sigma \frac{\log\left(4 \pi \log M\right)}{2\sqrt{2 \log M}} + o(1)</math> </center>
Let us be more general and consider tails of the form
<center><math>P^<(E) \sim e^{-\frac{|E|^\gamma}{2 \sigma_M^2}} \; . </math></center>
In the spirit of the central limit theorem, we look for a scaling form <math>E_{\min}=a_M + b_M z </math>. The constants <math>a_M, b_M</math> are M-dependent, while <math>z</math> is a random variable of order one drawn from the M-independent distribution <math>P(z)</math>.
Show that:
* at the leading order, <math>a_M= -\left[2 \sigma^2_M \log M\right]^{1/\gamma}</math>
* <math>b_M= \frac{2 \sigma^2_M}{\gamma |a_M|^{\gamma-1}} </math>
* <math>P(z) = e^{z} e^{-e^{z}}</math>, which is the Gumbel distribution
=== Ground State Fluctuations ===
Depending on the distribution <math>p(E)</math>, we observe different dependencies on ''M'' of <math>a_M</math> and <math>b_M</math>. To emphasize the ''N'' dependence, we define:
<center> <math>b_M \equiv 1/y_N \propto N^{\omega}</math> </center>
Note that the typical fluctuations of the minimum are <math>\sim 1/y_N</math>. Specifically, we can write:
<center> <math>\overline{\left(E_{\min} - \overline{E_{\min}}\right)^2} \propto N^{2\omega}</math> </center>
We will see that three distinct scenarios emerge depending on the sign of <math>\omega</math>.


===Density of states above the minimum===


For a given disorder realization, we compute <math> n(x) </math>, the number of configurations above the minimum with an energy smaller than <math> E_{\min}+x</math>. The key relation for this quantity is:
<center><math>  \text{Prob}[n(x) = k] = M \binom{M-1}{k}\int_{-\infty}^\infty dE \; p(E) \left[P^>(E) -  P^>(E+x)\right]^{k} P^>(E+x)^{M - k - 1}
     </math></center>
Taking the average, we get <math> \overline{n(x)} = \sum_k k \, \text{Prob}[n(x) = k] </math>. We use the following identity, with <math>A=P^>(E)</math> and <math>B=P^>(E+x)</math>:
<center><math>
\sum_{k=0}^{M-1} k \binom{M-1}{k}  (A-B)^k B^{M-1-k} = (A-B)\frac{d}{d A} \sum_{k=0}^{M-1} \binom{M-1}{k}  (A-B)^k    B^{M-1-k}= (M-1)(A-B)A^{M-2}
</math></center>
and arrive at the form
<center><math>
\overline{n(x)} = M (M-1) \int  dE \; p(E) \left[P^>(E) -  P^>(E+x)  \right] P^>(E)^{M-2}
</math></center>
In the above integral, <math> E </math> is the energy of the minimum, which is close to <math> a_M </math>, where <math> P^<(E) \sim e^{y_N (E -a_M)}/M</math>. In particular, we can write:
* <math>  p(E) = \frac{d}{d E} P^<(E) \sim y_N e^{y_N (E -a_M)} /M</math>
* <math> P^>(E) -  P^>(E+x) = P^<(E+x) - P^<(E) \sim  e^{y_N (E -a_M)} \left(e^{y_N x}-1\right)/M </math>
* <math> P^>(E)^{M-2} \sim \exp\left(-e^{ y_N (E-a_M)}\right) </math>
Setting <math>z=y_N (E -a_M) </math>, we obtain
<center><math>  \overline{n(x)} =  \left(e^{y_N x}-1\right) \int_{-\infty}^{\infty}  dz \,  e^{2 z -e^{z} } = e^{y_N x}-1\quad \text{with} \; y_N \sim N^{-\omega}
     </math></center>
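A quick consistency check of <math>\overline{n(x)} = e^{y_N x}-1</math>: for variables with a pure exponential lower tail, <math>P^<(E)=e^{E}</math> for <math>E\le 0</math> (so that <math>y_N=1</math> exactly), the average number of configurations within <math>x</math> of the minimum should approach <math>e^{x}-1</math>. A numerical sketch (sample sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
M = 500      # variables per disorder realization
R = 20000    # disorder realizations
# If U is uniform in (0,1), E = log(U) has P^<(E) = e^E for E <= 0.
E = np.log(rng.random(size=(R, M)))
E_min = E.min(axis=1, keepdims=True)

x = 1.0
# n(x): configurations above the minimum but with energy below E_min + x
n = np.sum(E < E_min + x, axis=1) - 1
print(n.mean(), np.exp(x) - 1.0)   # the two numbers should be close
```

For this tail the result is essentially exact at finite <math>M</math>, up to exponentially small boundary corrections.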


==The Glass phase==

In the glass phase, the measure is concentrated on a few configurations, which have a finite occupation probability, while in the paramagnetic phase the occupation probability is <math>\sim 1/M </math>. As a consequence, the entropy is extensive in the paramagnetic phase and sub-extensive in the glass phase. It is useful to evaluate the occupation probability of the ground state in the infinite system, writing <math> z_\alpha = e^{-\beta E_\alpha}</math>:

<center><math>   \frac{z_{\alpha_{\min}}}{\sum_{\alpha=1}^M z_\alpha}= \frac{1}{1+\sum_{\alpha\ne \alpha_{\min}} z_\alpha/z_{\alpha_{\min}}}\sim \frac{1}{1+\int_0^\infty dx\, e^{-\beta x}\left(e^{y_N x}-1\right) }
    </math></center>

* In the high temperature phase, for <math> \beta<y_N </math>, the integral diverges and the occupation probability is close to zero, meaning that the ground state is not deep enough to make the system glassy.
* In the low temperature phase, for <math> \beta>y_N </math>, the above integral is finite. Hence, setting <math> \beta=1/T, \; T_f=1/y_N</math>, one finds
<center><math>   \frac{z_{\alpha_{\min}}}{\sum_{\alpha=1}^M z_\alpha}= \frac{1}{1+ \frac{T^2}{T_f-T} } </math></center>
This means that below the freezing temperature, the ground state is occupied with a finite probability, as in Bose-Einstein condensation.
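The low-temperature formula can be verified by direct numerical integration of <math>\int_0^\infty dx\, e^{-\beta x}\left(e^{y_N x}-1\right)</math> against <math>T^2/(T_f-T)</math>. A sketch with <math>T_f=1</math> (the temperatures, cutoff, and grid size are arbitrary illustrative choices):

```python
import numpy as np

def excited_weight(T, T_f=1.0, x_max=80.0, n=800000):
    """Midpoint rule for int_0^inf e^{-x/T} (e^{x/T_f} - 1) dx, valid for T < T_f."""
    dx = x_max / n
    x = (np.arange(n) + 0.5) * dx
    return np.sum(np.exp(-x / T) * (np.exp(x / T_f) - 1.0)) * dx

for T in (0.3, 0.5, 0.7):
    occupation = 1.0 / (1.0 + excited_weight(T))
    predicted = 1.0 / (1.0 + T**2 / (1.0 - T))
    print(f"T={T}: ground-state weight {occupation:.4f} vs {predicted:.4f}")
```

As <math>T \to T_f^-</math> the integral diverges and the ground-state weight vanishes, recovering the paramagnetic behavior.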


=Take home message=
Let us recall <math> y_N \sim N^{-\omega}</math>, so that three situations can occur:
*  For <math> \omega<0</math>, there is no freezing transition, as there are too many states just above the minimum. This is the situation of many low-dimensional systems, such as the Edwards Anderson model in two dimensions.
* For  <math> \omega>0</math>, there are two important features: (i) there is only the glass phase; (ii) the system condenses only into the ground state, because the excited states have very high energy. We will see that in real systems (i) is not always the case and that the exponent  <math> \omega</math> can change with temperature. This situation can be realistic (there is a very deep ground state), but it is not as striking as the following one.
* For  <math> \omega=0</math>, there is certainly a freezing transition. For the Random Energy Model defined above, <math>T_f=1/\sqrt{2 \log 2}</math>. One important feature of this transition, which we will see in the next tutorial, is that the condensation does not occur only in the ground state but in a large (yet not extensive) number of low energy excitations.


== Exercise L1-A: the Gaussian case ==
Specify these results to the Gaussian case and find:
* the tail behavior <math>P^<(E)=\int_{-\infty}^E dx \, p(x)  \sim \frac{\sigma}{\sqrt{2 \pi}|E|}e^{-\frac{E^2}{2 \sigma^2}} \; </math> for <math>E \to -\infty</math>
* the typical value of the minimum, <math>a_M</math>
* the expression <math>   A(\epsilon) =\frac{\epsilon^2}{2\sigma^2} +\log\left(\frac{\sqrt{2 \pi}\,|\epsilon|}{\sigma}\right)+\ldots </math>
* the expression of the Gumbel distribution for the Gaussian case:
<center><math>Q_M(\epsilon) \sim \exp\left(-e^{ \frac{\sqrt{2 \log M}}{\sigma} (\epsilon-a_M)}\right)  </math> </center>

<!-- == Exercise L1-A: More on extreme values ==
For a large set of iid random variables there are only three scaling forms for the distribution of the minimum. In this lecture we studied the Gumbel case, which holds for fast decaying distributions. The other two possibilities are:
* <strong>Fréchet case: </strong> for a power law tail <math> p(E) \sim c/|E|^{\gamma+1} </math> with <math> \gamma>0 </math>
* <strong>Weibull case: </strong> for a bounded distribution with  <math> p(E) \sim c(E-E_0)^\gamma \; \text{when } E\to E_0^+ </math>
Compute <math>a_M, b_M</math> in both cases, as well as the limiting distribution. -->


=References=
* ''Spin glass i-vii'', P. W. Anderson, Physics Today, 1988.
* ''Spin glasses: Experimental signatures and salient outcome'', E. Vincent and V. Dupuis, Frustrated Materials and Ferroic Glasses 31 (2018).
* ''Theory of spin glasses'', S. F. Edwards and P. W. Anderson, J. Phys. F: Met. Phys. 5, 965 (1975).
* ''Non-linear susceptibility in spin glasses and disordered systems'', H. Bouchiat, Journal of Physics: Condensed Matter, 9, 1811 (1997).
* ''Solvable Model of a Spin-Glass'', D. Sherrington and S. Kirkpatrick, Physical Review Letters, 35, 1792 (1975).
* ''Random-Energy Model: An Exactly Solvable Model of Disordered Systems'', B. Derrida, Physical Review B, 24, 2613 (1981).
* ''Extreme value statistics of correlated random variables: a pedagogical review'', S. N. Majumdar, A. Pal, and G. Schehr, Physics Reports 840, 1-32 (2020).
