From Disordered Systems Wiki
<strong>Goal: </strong> the spin glass transition: from the experiments revealing an anomaly in the magnetic susceptibility to the order parameter of the transition. We will also discuss arguments linked to extreme value statistics.
= Spin glass Transition =


== Experiments ==


 
<!-- TODO: discuss the doped copper samples; find two figures, one of the susceptibility and one of the specific heat, as evidence of the thermodynamic transition. -->
Spin glass behavior was first observed in experiments on non-magnetic metals (such as Cu, Ag, or Au) doped with a small percentage of magnetic impurities, typically Mn or Fe. At low doping levels, the magnetic moments of the impurity atoms interact via the Ruderman–Kittel–Kasuya–Yosida (RKKY) interaction. This interaction has a random sign due to the random spatial distribution of the impurities within the non-magnetic host. A freezing temperature, <math>T_f</math>, separates the high-temperature paramagnetic phase from the low-temperature spin glass phase:
*Above <math>T_f</math>: The magnetic susceptibility follows the standard Curie law, <math>\chi(T) \sim 1/T</math>.
* Below <math>T_f</math>: Strong metastability emerges, leading to differences between the field-cooled (FC) and zero-field-cooled (ZFC) protocols:
 
(i) In the ZFC protocol, the susceptibility decreases with decreasing temperature, <math>T</math>.
 
(ii) In the FC protocol, the susceptibility freezes at <math>T_f</math>, remaining constant at <math>\chi_{FC}(T<T_f) = \chi(T_f)</math>.
 
Understanding whether these data reveal a true thermodynamic transition and determining the nature of this new "glassy" phase remains an open challenge to this day. However, in the early 1980s, spin glass models were successfully solved within the mean-field approximation. In this limit, it is possible to determine the phase diagram and demonstrate the existence of a glassy phase where the entropy vanishes at a finite temperature. Furthermore, a condensation of the Gibbs measure onto a few configurations is observed.


==Edwards Anderson model==
The first significant theoretical attempt to describe spin glasses is the Edwards-Anderson model. For simplicity, we will consider the Ising version of this model.


Ising spins take two values, <math>\sigma_i = \pm 1</math>, and are located on a lattice with <math>N</math> sites, indexed by <math>i = 1, 2, \ldots, N</math>.
The energy of the system is expressed as a sum over nearest neighbors <math>\langle i, j \rangle</math>:
<center><math> E = - \sum_{\langle i, j \rangle} J_{ij} \sigma_i \sigma_j. </math></center>


Edwards and Anderson proposed studying this model with couplings <math>J_{ij}</math> that are independent and identically distributed (i.i.d.) random variables with '''zero mean'''.
The coupling distribution is denoted by <math>\pi(J)</math>, and the average over the couplings, referred to as the disorder average, is indicated by an overline:
<center><math> \overline{J} \equiv \int dJ \, J \, \pi(J) = 0. </math></center>
It is crucial to assume <math>\overline{J}=0</math>; otherwise, the model displays ferromagnetic or antiferromagnetic order. We will consider two specific coupling distributions:
* Gaussian couplings: <math>\pi(J) = \exp\left(-J^2 / 2\right) / \sqrt{2 \pi}</math>.
* Coin-toss couplings: <math>J = \pm 1</math>, chosen with equal probability <math>1/2</math>.
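As a concrete illustration, here is a minimal Python sketch (all names and sizes are ours, not from the lecture) that builds coin-toss couplings on a small square lattice with periodic boundaries and evaluates the Edwards-Anderson energy of a random spin configuration:

```python
import random

def ea_energy(spins, J_h, J_v):
    """Edwards-Anderson energy E = -sum_<i,j> J_ij s_i s_j on an L x L
    lattice with periodic boundaries; J_h / J_v hold the horizontal and
    vertical bond couplings."""
    L = len(spins)
    E = 0.0
    for i in range(L):
        for j in range(L):
            E -= J_h[i][j] * spins[i][j] * spins[i][(j + 1) % L]
            E -= J_v[i][j] * spins[i][j] * spins[(i + 1) % L][j]
    return E

random.seed(0)
L = 8
# coin-toss couplings J = +/-1 with probability 1/2 (zero mean)
J_h = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
J_v = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
spins = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
print(ea_energy(spins, J_h, J_v))
```

With <math>\pm 1</math> couplings and spins, every bond contributes <math>\pm 1</math>, so the energy is an even integer bounded by the number of bonds <math>2L^2</math>.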


== Edwards Anderson order parameter==
Since <math>\overline{J} = 0</math>, the model does not exhibit spatial magnetic order, such as ferromagnetic or antiferromagnetic order. Instead, the idea is to distinguish between two phases:
* Paramagnetic phase: the spins fluctuate and explore all possible orientations.
* Spin glass phase: the spin orientations are random but frozen (i.e., immobile).
The glass phase is characterized by long-range correlations in time, despite the absence of long-range correlations in space. The order parameter for this phase is:
<center><math> q_{EA} = \lim_{t \to \infty} \lim_{N \to \infty} \frac{1}{N} \sum_{i} \sigma_i(0) \sigma_i(t), </math></center> where <math>q_{EA}</math> measures the overlap of the spin configuration with itself after a long time.
In the paramagnetic phase, <math>q_{EA} = 0</math>, while in the spin glass phase, <math>q_{EA} > 0</math>.
This raises the question of whether the transition at <math>T_f</math> is truly thermodynamic in nature. Indeed, in the definition of the Edwards-Anderson (EA) parameter, time seems to play a role, and the magnetic susceptibility does not diverge at the freezing temperature <math>T_f</math>.
The divergence of the magnetic susceptibility in ferromagnets is due to the fact that the magnetization <math>M = \sum_i \sigma_i</math> serves as the order parameter, distinguishing the ordered and disordered phases. However, in the spin glass model, magnetization is zero in both phases and the order parameter  is <math>q_{EA}</math>.
It can be shown that the associated susceptibility corresponds to the nonlinear susceptibility:
<center><math> \frac{M}{H} = \chi + a_3 H^2 + a_5 H^4 + \ldots </math></center> where <math>\chi</math> is the linear susceptibility, and <math>a_3, a_5, \ldots</math> are higher-order coefficients. Experiments have demonstrated that <math>a_3</math> and <math>a_5</math> exhibit singular behavior, providing experimental evidence for the existence of a thermodynamic transition at <math>T_f</math>.
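To make the definition of <math>q_{EA}</math> concrete, here is a toy Python sketch (our own construction, not from the lecture) that runs Metropolis dynamics on a one-dimensional chain with random bonds and measures the overlap of a configuration with itself after a time lag. The 1D chain has no finite-temperature spin glass phase; this only illustrates how the measurement would be done in a simulation.

```python
import math, random

def metropolis_sweep(s, J, beta):
    """One Metropolis sweep of an Ising chain with random bonds J[i]
    coupling spins i and i+1 (open boundaries)."""
    N = len(s)
    for i in range(N):
        dE = 0.0
        if i > 0:
            dE += 2 * J[i - 1] * s[i] * s[i - 1]
        if i < N - 1:
            dE += 2 * J[i] * s[i] * s[i + 1]
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            s[i] = -s[i]

def q_overlap(s0, st):
    """Overlap (1/N) sum_i s_i(0) s_i(t) between two configurations."""
    return sum(a * b for a, b in zip(s0, st)) / len(s0)

random.seed(1)
N, beta, t_max = 200, 5.0, 100
J = [random.gauss(0.0, 1.0) for _ in range(N - 1)]
s = [random.choice([-1, 1]) for _ in range(N)]
for _ in range(50):              # (very rough) equilibration
    metropolis_sweep(s, J, beta)
s0 = list(s)                     # reference configuration at "time 0"
for _ in range(t_max):
    metropolis_sweep(s, J, beta)
print(q_overlap(s0, s))          # finite-time estimate of the self-overlap
```

In a true spin glass phase this overlap would stay finite as <math>t \to \infty</math>; in the paramagnet it decays to zero.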


== The SK model ==
Sherrington and Kirkpatrick considered the fully connected version of the model with Gaussian couplings:
<center> <math>
  E= - \sum_{i,j} \frac{J_{ij}}{2 \sqrt{N}} \sigma_i \sigma_j
</math></center>
At the inverse temperature <math>\beta</math>, the partition function of the model is
<center><math>
Z = \sum_{\alpha=1}^{2^N} e^{-\beta E_\alpha}
</math></center>
Here <math> E_\alpha </math> is the energy associated to the configuration  <math> \alpha </math>.
This model presents a thermodynamic transition.
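For a small system, the partition function can be computed by brute-force enumeration of the <math>2^N</math> configurations. A minimal Python sketch (names and sizes are ours); the pair sum <math>\sum_{i<j} J_{ij}\sigma_i\sigma_j/\sqrt{N}</math> used below is equivalent to the symmetric double sum with the <math>1/(2\sqrt{N})</math> factor:

```python
import itertools, math, random

def sk_energy(spins, J):
    """SK energy E = - sum_{i<j} J_ij s_i s_j / sqrt(N)."""
    N = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(N) for j in range(i + 1, N)) / math.sqrt(N)

random.seed(2)
N, beta = 8, 1.0
J = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        J[i][j] = random.gauss(0.0, 1.0)   # Gaussian couplings, zero mean

# Brute-force partition function Z = sum_alpha exp(-beta E_alpha)
configs = list(itertools.product([-1, 1], repeat=N))
Z = sum(math.exp(-beta * sk_energy(s, J)) for s in configs)
print(len(configs), Z)
```

Since the energy averages to zero over all configurations, Jensen's inequality guarantees <math>Z > 2^N</math> for any disorder realization.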


= Random energy model =


The solution of the SK model is difficult. To make progress, we first study the random energy model (REM) introduced by B. Derrida.
This model neglects the correlations between the <math> M=2^N </math> configurations and treats the energies <math> E_{\alpha} </math> as iid variables.
* Show that the energy distribution is
<center><math> p(E_\alpha) =\frac{1}{\sqrt{2 \pi \sigma_M^2}}e^{-\frac{E_{\alpha}^2}{2 \sigma_M^2}}</math></center>
and determine <math>\sigma_M^2</math>. One finds <math>\sigma_M^2= N= \log M/\log 2</math>.


We provide different solutions of the Random Energy Model (REM). The first one focuses on the statistics of the smallest energies among those associated to the <math>M=2^N</math> configurations. For this, we need to become familiar with the main results of extreme value statistics of iid variables.
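Before developing the theory, a quick numerical illustration (sizes chosen by us): draw the <math>M=2^N</math> iid REM energies with <math>\sigma_M^2 = N</math> and compare the sampled minimum with the leading-order prediction <math>-\sqrt{2\sigma_M^2 \log M} = -N\sqrt{2\log 2}</math> derived below.

```python
import math, random

random.seed(3)
N = 16
M = 2 ** N
sigma2 = N                        # REM variance sigma_M^2 = N, as above
# Draw the M = 2^N iid Gaussian energies and locate the minimum
E_min = min(random.gauss(0.0, math.sqrt(sigma2)) for _ in range(M))
prediction = -math.sqrt(2 * sigma2 * math.log(M))   # = -N sqrt(2 log 2)
print(E_min, prediction)
```

The fluctuations of the minimum around this value are of order one (they do not grow with <math>N</math>), as the extreme value analysis below makes precise.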


==  Extreme value statistics  ==


Consider the <math>M=2^N</math> energies <math>E_1,\ldots,E_M</math> as iid variables, drawn from the distribution <math>p(E)</math> (Gaussian for the REM, but we remain general in this section). It is useful to introduce the following notation:
* <math>P^<(E)=\int_{-\infty}^E dx \, p(x)</math> is the probability of finding an energy smaller than ''E''.
* <math> P^>(E)=\int_E^{+\infty} dx \, p(x) = 1- P^<(E) </math> is the probability of finding an energy larger than ''E''.


We denote
<center><math>E_{\min}=\min(E_1,\ldots,E_M)</math></center>
Our goal is to compute the cumulative distribution <math>Q_M(\epsilon)\equiv\text{Prob}(E_{\min}> \epsilon)</math> for large ''M''. To achieve this, we rely on three key relations:
* The first relation is exact:
<center><math>Q_M(\epsilon) = \left(P^>(\epsilon)\right)^M </math></center>
* The second relation is an estimate: it identifies the typical value of the minimum, namely <math> a_M </math>:
<center><math>P^<(a_M) = \frac{1}{M} </math></center>
* The third relation is an approximation, valid for <math>M \to \infty</math>:
<center><math>Q_M(\epsilon) = e^{M \log\left(1 - P^<(\epsilon)\right)} \sim \exp\left(-M P^<(\epsilon)\right)</math></center>
The last relation holds only when <math> \epsilon \approx a_M </math>, and one has to expand around this value. However, a direct Taylor expansion does not ensure that probabilities remain positive. Hence, we define <math>  P^<(\epsilon)=\exp(-A(\epsilon)) </math> and remark that <math>  A(\epsilon) </math>  is a decreasing function. We propose the following Taylor expansion:
<center><math>  A(\epsilon) = A(a_M) + A'(a_M)(\epsilon -a_M) = \log M - y_N(\epsilon -a_M) </math></center>
where we used <math>A(a_M)=\log M</math>, which follows from the second relation. Depending on the distribution <math>p(E)</math>, both <math>a_M</math> and <math>y_N</math> have a different dependence on ''N'' (or ''M''). It is convenient to define
<center><math>  y_N \propto N^{-\omega}  </math></center>
We will see that three different scenarios occur depending on the sign of <math>  \omega  </math>. Using this expansion we derive:
* the famous Gumbel distribution:
<center><math>Q_M(\epsilon) \sim \exp\left(-e^{ y_N (\epsilon-a_M)}\right)  </math> </center>
* the typical fluctuations of the minimum, of order <math> 1/y_N</math>. In particular we can write:
<center><math> \overline{ \left(E_{\min} -  \overline {E_{\min}}\right)^2 }\propto N^{2\omega}</math></center>
 
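The relation <math>P^<(a_M) = 1/M</math> can be checked numerically for Gaussian energies and compared with the leading-order asymptotics <math>a_M \approx -\sigma\sqrt{2\log M}</math>. The bisection solver below is our own sketch (names and tolerances are ours):

```python
import math

def P_less(E, sigma=1.0):
    """Gaussian cumulative P^<(E), computed with the error function."""
    return 0.5 * (1.0 + math.erf(E / (sigma * math.sqrt(2.0))))

def a_M(M, sigma=1.0):
    """Solve P^<(a_M) = 1/M by bisection (typical value of the minimum)."""
    lo, hi = -50.0, 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if P_less(mid, sigma) > 1.0 / M:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

M = 10 ** 6
exact = a_M(M)
leading = -math.sqrt(2 * math.log(M))   # leading-order asymptotics
print(exact, leading)
```

The two values agree at the ten-percent level for <math>M = 10^6</math>; the gap closes only logarithmically, which is why subleading corrections to <math>a_M</math> matter in practice.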
=== Gaussian case and beyond ===


For a Gaussian distribution, the asymptotic tail of <math>P^<(E)</math> as <math>E \to -\infty</math> is:
<center> <math>P^<(E) = \int_{-\infty}^E dx \, p(x) \sim \frac{\sigma}{\sqrt{2 \pi} |E|} e^{-\frac{E^2}{2 \sigma^2}}</math> </center>
Thus, at leading order, the typical value of the minimum energy is:
<center> <math>E_{\min}^{\text{typ}} = a_M = -\sigma \sqrt{2 \log M}\left[1 + O\!\left(\frac{\log \log M}{\log M}\right)\right]</math> </center>
To be more general, let us consider tails of the form
<center><math>P^<(E) \sim e^{-\frac{|E|^\gamma}{2 \sigma_M^2}} \; . </math></center>
In the spirit of the central limit theorem, we look for a scaling form <math>E_{\min}=a_M + b_M z </math>. The constants <math>a_M, b_M</math> are ''M''-dependent, while <math>z</math> is a random variable of order one drawn from an ''M''-independent distribution <math>P(z)</math>.
Show that:
* at the leading order, <math>a_M= -\left[2 \sigma^2_M \log M\right]^{1/\gamma}</math>;
* <math>b_M= \frac{2 \sigma^2_M}{\gamma |a_M|^{\gamma-1}} </math>;
* <math>P(z) = e^{z} e^{-e^{z}}</math>, which is the Gumbel distribution.
Note that <math>b_M \equiv 1/y_N</math> sets the scale of the typical fluctuations of the minimum discussed above.
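The Gumbel law <math>P(z) = e^{z - e^{z}}</math> appearing above is a normalized probability density; a quick numerical sanity check (plain trapezoid rule on a wide window, our own choice of discretization):

```python
import math

def gumbel_pdf(z):
    """Gumbel density for the minimum, P(z) = exp(z - e^z)."""
    return math.exp(z - math.exp(z))

# Trapezoid rule on [-30, 10]; the tails outside contribute ~e^{-30}
a, b, n = -30.0, 10.0, 400000
h = (b - a) / n
total = 0.5 * (gumbel_pdf(a) + gumbel_pdf(b))
total += sum(gumbel_pdf(a + k * h) for k in range(1, n))
total *= h
print(total)   # should be close to 1
```

The same substitution <math>u = e^z</math> that proves the normalization (<math>\int e^{z-e^z}dz = \int_0^\infty e^{-u}du = 1</math>) is used again below in the density-of-states computation.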
 
===Density of states above the minimum===

For a given disorder realization, we compute <math> d(x) </math>, the number of configurations above the minimum with an energy smaller than <math> E_{\min}+x</math>. The key relation for this quantity is:
<center><math>  \text{Prob}[d(x) = k] = M \binom{M-1}{k}\int_{-\infty}^\infty dE \; p(E) \left[P^>(E) -  P^>(E+x)  \right]^{k} P^>(E+x)^{M - k - 1}
     </math></center>
Taking the average, <math> \overline{d(x)} = \sum_k k   \,  \text{Prob}[d(x) = k] </math>, and using the identity
<center><math>
\sum_{k=0}^{M-1} k \binom{M-1}{k} (A-B)^k B^{M-1-k} = (A-B)\frac{d}{d A} \sum_{k=0}^{M-1} \binom{M-1}{k}  (A-B)^k    B^{M-1-k}= (M-1)(A-B)A^{M-2}
</math></center>
with <math>A = P^>(E)</math> and <math>B = P^>(E+x)</math>, we arrive at the final form
<center><math>
\overline{d(x)} = M (M-1) \int  dE \; p(E) \left[P^>(E) -  P^>(E+x)  \right] P^>(E)^{M-2}
</math></center>
In the above integral, <math> E </math> is the energy of the minimum. Hence, we can use the Taylor expansion <math> A(E) = \log M -y_N (E -a_M)</math>. In particular we can write:
* <math>  p(E) = \frac{d}{d E} P^<(E)= -A'(E) e^{-A(E)} \sim y_N e^{y_N (E -a_M)} /M</math>
* <math> P^>(E) -  P^>(E+x)  = e^{-A(E+x)}-e^{-A(E)}\sim  e^{y_N (E -a_M)} \left(e^{y_N x}-1\right)/M </math>
* <math> P^>(E)^{M-2}= Q_{M-2} (E) \sim \exp\left(-e^{ y_N (E-a_M)}\right) </math>
Calling <math>u=y_N (E -a_M) </math>, we obtain
<center><math>  \overline{d(x)} =  \left(e^{y_N x}-1\right) \int_{-\infty}^{\infty}  du \,  e^{2 u -e^{u} } = e^{y_N x}-1\quad \text{with} \; y_N \sim N^{-\omega}
     </math></center>
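The prediction <math>\overline{d(x)} = e^{y_N x}-1</math> can be tested numerically on iid Gaussian energies. In the sketch below (our own construction) we estimate <math>y_N</math> from the mean of the sampled minima, using the leading-order relation <math>y_N \approx |a_M|/\sigma^2</math> valid for Gaussian tails:

```python
import math, random

random.seed(4)
M, trials, x, sigma = 4096, 200, 0.5, 1.0

mins, counts = [], []
for _ in range(trials):
    E = sorted(random.gauss(0.0, sigma) for _ in range(M))
    mins.append(E[0])
    # d(x): configurations above the minimum within a window x
    counts.append(sum(1 for e in E[1:] if e <= E[0] + x))

y = abs(sum(mins) / trials) / sigma ** 2   # y_N ~ |a_M| / sigma^2 (leading order)
d_mean = sum(counts) / trials
pred = math.exp(y * x) - 1
print(d_mean, pred)
```

The agreement is only up to order-one factors at these modest sizes, since subleading corrections to <math>a_M</math> and <math>y_N</math> decay logarithmically in <math>M</math>.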


==The Glass phase==


In the glass phase, the Gibbs measure is concentrated on a few configurations, which have a finite occupation probability, while in the paramagnetic phase the occupation probability of each configuration is <math>\sim 1/M </math>. As a consequence, the entropy is extensive in the paramagnetic phase and sub-extensive in the glass phase. It is useful to evaluate the occupation probability of the ground state in the infinite system. Setting <math>z_\alpha = e^{-\beta(E_\alpha - E_{\min})}</math>, we have:
<center><math>   \frac{z_{\alpha_{\min}}}{\sum_{\alpha=1}^M z_\alpha}= \frac{1}{1+\sum_{\alpha\ne \alpha_{\min}} z_\alpha}\sim \frac{1}{1+\int_0^\infty dx\, e^{-\beta x} \left(e^{y_N x}-1\right) }
    </math></center>
* In the high-temperature phase, for <math> \beta<y_N </math>, the occupation probability vanishes, meaning that the ground state is not deep enough to make the system glassy.
* In the low-temperature phase, for <math> \beta>y_N </math>, the above integral is finite. Hence, setting <math> \beta=1/T, \; T_f=1/y_N</math>, one finds
<center><math>   \frac{z_{\alpha_{\min}}}{\sum_{\alpha=1}^M z_\alpha}= \frac{1}{1+ \frac{T^2}{T_f-T} } </math></center>
This means that below the freezing temperature, the ground state is occupied with a finite probability, as in Bose–Einstein condensation.
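A sketch of this condensation for the REM (helper name, sizes, and temperatures are ours): sample the energies with <math>\sigma_M^2 = N</math> and compare the ground-state occupation above and below <math>T_f = 1/\sqrt{2\log 2}</math>.

```python
import math, random

def ground_state_weight(energies, beta):
    """Occupation probability of the minimum, w = e^{-beta E_min} / Z,
    computed stably by subtracting E_min before exponentiating."""
    E0 = min(energies)
    Z = sum(math.exp(-beta * (E - E0)) for E in energies)
    return 1.0 / Z

random.seed(5)
N = 14
M = 2 ** N
sigma = math.sqrt(N)                     # REM scaling sigma_M^2 = N
E = [random.gauss(0.0, sigma) for _ in range(M)]

T_f = 1.0 / math.sqrt(2 * math.log(2))   # freezing temperature of the REM
w_high = ground_state_weight(E, beta=1.0 / (2.0 * T_f))  # T = 2 T_f
w_low  = ground_state_weight(E, beta=1.0 / (0.3 * T_f))  # T = 0.3 T_f
print(w_high, w_low)
```

At fixed energies the weight is monotonically increasing in <math>\beta</math>, so the low-temperature occupation always exceeds the high-temperature one; at these small sizes the high-temperature value is small but not yet <math>\sim 1/M</math>, since finite-<math>N</math> corrections are logarithmic.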


=Take home message=
Let us recall that <math> y_N \sim N^{-\omega}</math>, so that three situations can occur:
*  For <math> \omega<0</math>, there is no freezing transition, as there are too many states just above the minimum. This is the situation of many low-dimensional systems, such as the Edwards-Anderson model in two dimensions.
* For  <math> \omega>0</math>, there are two important features: (i) there is only the glass phase; (ii) the system condenses only into the ground state, because the excited states have very high energy. We will see that in real systems (i) is not always the case, and that the exponent  <math> \omega</math> can change with temperature. This situation can be realistic (there is a very deep ground state), but it is not as revolutionary as the following one.
* For  <math> \omega=0</math>, there is certainly a freezing transition. For the Random Energy Model defined above, <math>T_f=1/\sqrt{2 \log 2}</math>. One important feature of this transition, which we will see in the next tutorial, is that the condensation does not occur only in the ground state but in a large (yet not extensive) number of low-energy excitations.


<!-- == Exercise L1-A: More on extreme values ==
For a large set of iid random variables there are only three scaling forms for the distribution of the minimum. In this lecture we studied the Gumbel case, which applies to fast-decaying distributions. The other two possibilities are:
* <strong>Fréchet case: </strong> for a power-law tail <math> p(E) \sim c/|E|^{\gamma+1} </math> with <math> \gamma>0 </math>
* <strong>Weibull case: </strong> for a bounded distribution with  <math> p(E) \sim c(E-E_0)^\gamma \; \text{when } E\to E_0^+ </math>
Compute <math>a_M, b_M</math> in both cases, as well as the limiting distribution. -->
== Exercise L1-A: the Gaussian case ==
Specialize these results to the Gaussian case and find:
* the tail behaviour <math>P^<(E)=\int_{-\infty}^E dx \, p(x) \sim \frac{\sigma}{\sqrt{2 \pi}|E|}e^{-\frac{E^2}{2 \sigma^2}} \;</math> for <math>E \to -\infty</math>;
* the typical value of the minimum
<center><math>a_M = -\sigma \sqrt{2 \log M} + O\!\left(\frac{\log \log M}{\sqrt{\log M}}\right); </math> </center>
* the expression <math>   A(\epsilon) =\frac{\epsilon^2}{2\sigma^2} + \log \frac{\sqrt{2 \pi}|\epsilon|}{\sigma} +\ldots; </math>
* the expression of the Gumbel distribution for the Gaussian case:
<center><math>Q_M(\epsilon) \sim \exp\left(-e^{ \frac{\sqrt{2 \log M}}{\sigma} (\epsilon-a_M)}\right).  </math> </center>


=References=
* ''Spin glass i-vii'', P. W. Anderson, Physics Today (1988).
* ''Theory of spin glasses'', S. F. Edwards and P. W. Anderson, J. Phys. F: Met. Phys. 5, 965 (1975).
* ''Spin glasses: Experimental signatures and salient outcome'', E. Vincent and V. Dupuis, Frustrated Materials and Ferroic Glasses 31 (2018).
* ''Non-linear susceptibility in spin glasses and disordered systems'', H. Bouchiat, Journal of Physics: Condensed Matter 9, 1811 (1997).
* ''Solvable Model of a Spin-Glass'', D. Sherrington and S. Kirkpatrick, Physical Review Letters 35, 1792 (1975).
* ''Random-Energy Model: An Exactly Solvable Model of Disordered Systems'', B. Derrida, Physical Review B 24, 2613 (1981).
* ''Extreme value statistics of correlated random variables: a pedagogical review'', S. N. Majumdar, A. Pal, and G. Schehr, Physics Reports 840, 1-32 (2020).

Revision as of 19:28, 16 January 2025