ICTS: Difference between revisions

From Disordered Systems Wiki
Ising spins take two values, <math>\sigma_i = \pm 1</math>, and are located on a lattice with <math>N</math> sites, indexed by <math>i = 1, 2, \ldots, N</math>.
The energy of the system is expressed as a sum over nearest neighbors <math>\langle i, j \rangle</math>:
<math display="block"> E = - \sum_{\langle i, j \rangle} J_{ij} \sigma_i \sigma_j. </math>


Edwards and Anderson proposed studying this model with couplings <math>J_{ij}</math> that are independent and identically distributed (i.i.d.) random variables with zero mean.
The coupling distribution is denoted by <math>\pi(J)</math>, and the average over the couplings, referred to as the disorder average, is indicated by an overline:
<math display="block"> \overline{J} \equiv \int dJ \, J \, \pi(J) = 0. </math>


In the following we will consider Gaussian couplings: <math>\pi(J) = \exp\left(-J^2 / 2\right) / \sqrt{2 \pi}</math>.
* '''Glassy behavior.'''
Does the system undergo a spin-glass transition even in the absence of geometrical order?


==Self-averaging==


In the presence of disorder, the energy associated with a given configuration becomes a random quantity. For instance, in the Edwards-Anderson model:
<math display="block"> E = - \sum_{\langle i, j \rangle} J_{ij} \sigma_i \sigma_j, </math>


where the sum runs over nearest neighbors <math>\langle i, j \rangle</math>, and the couplings <math>J_{ij}</math> are independent and identically distributed (i.i.d.) Gaussian random variables with zero mean and unit variance.
The energy of this configuration is given by the sum of all the couplings between neighboring spins:


<math display="block">
E[\sigma_1=1,\sigma_2=1,\ldots] = - \sum_{\langle i, j \rangle} J_{ij}.
</math>


Since the couplings are random, the energy associated with this particular configuration is itself a Gaussian random variable, with zero mean and a variance proportional to the number of terms in the sum, that is, of order <math>N</math>.


From a more mathematical point of view, this means that the free energy <math> F_N(\beta)=N f_N(\beta)</math> and its derivatives (magnetization, specific heat, susceptibility, etc.) concentrate, in the limit <math> N \to \infty </math>, around a well-defined value. Such observables are called self-averaging. This means that
<math display="block">
\lim_{N \to \infty} f_N (\beta)= \lim_{N \to \infty}  f_N^{\text{typ}}(\beta) =\lim_{N \to \infty}  \overline{f_N(\beta)} =f_\infty(\beta).
</math>
Hence <math> f_N(\beta) </math> becomes effectively deterministic and its sample-to-sample fluctuations vanish in relative terms:
<math display="block">
\lim_{N \to \infty} \frac{\overline{f_N^2(\beta)}}{(\overline{f_N(\beta)})^2}=1.
</math>


== Glass Transition: the Edwards-Anderson order parameter ==
The order parameter for this phase is:

<math display="block">
q_{EA} = \lim_{t \to \infty} \lim_{N \to \infty} \frac{1}{N} \sum_{i} \sigma_i(0)\,\sigma_i(t).
</math>


The quantity <math>q_{EA}</math> measures the overlap of the spin configuration with itself after a long time.


It can be shown that the susceptibility associated with <math>q_{EA}</math> corresponds to the nonlinear susceptibility:
<math display="block">
\frac{M}{H} = \chi + a_3 H^2 + a_5 H^4 + \ldots
</math>


Here <math>\chi</math> is the linear susceptibility, while <math>a_3, a_5, \ldots</math> are higher-order coefficients.
=== Sherrington and Kirkpatrick (SK) Model===
Sherrington and Kirkpatrick considered the fully connected version of the model with Gaussian couplings:
<math display="block">
  E= - \sum_{i,j} \frac{J_{ij}}{ \sqrt{N}} \sigma_i \sigma_j
</math>
At the inverse temperature <math> \beta </math>, the partition function of the model is
<math display="block">
  Z=  \sum_{\alpha=1}^{2^N} z_{\alpha}, \quad \text{with}\; z_{\alpha}= e^{-\beta E_\alpha}
</math>
Here <math> E_\alpha </math> is the energy associated with the configuration <math> \alpha </math>.
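For small sizes the SK partition function can be evaluated by brute-force enumeration of all <math>2^N</math> configurations. The sketch below (Python standard library; the seed and size are arbitrary choices) implements the double sum over <math>i,j</math> exactly as written above:

```python
import itertools
import math
import random

def sk_partition_function(N, beta, J):
    """Exact SK partition function by enumerating all 2^N configurations.

    Implements E = -(1/sqrt(N)) * sum_{i,j} J[i][j] s_i s_j, i.e. the
    double sum over all pairs (i, j) exactly as written in the text.
    Feasible only for small N.
    """
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=N):
        E = -sum(J[i][j] * spins[i] * spins[j]
                 for i in range(N) for j in range(N)) / math.sqrt(N)
        Z += math.exp(-beta * E)
    return Z

rng = random.Random(1)
N = 8
J = [[rng.gauss(0.0, 1.0) for _ in range(N)] for _ in range(N)]

# Sanity check: at beta = 0 every configuration has weight 1, so Z = 2^N.
assert abs(sk_partition_function(N, 0.0, J) - 2 ** N) < 1e-9
Z = sk_partition_function(N, 1.0, J)
print("Z(beta=1) =", Z)
```

The exponential cost of this enumeration is precisely why analytical approaches such as the REM simplification below are needed.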


The solution of the Sherrington-Kirkpatrick (SK) model is challenging. To make progress, we first study the Random Energy Model (REM), introduced by B. Derrida. This model simplifies the problem by neglecting correlations between the <math>M=2^N</math> configurations and assuming that the energies <math>E_{\alpha}</math> are independent and identically distributed (i.i.d.) random variables. Here, "independent" means that the energy of one configuration does not influence the energy of another, e.g., a configuration identical to the previous one except for a spin flip. "Identically distributed" indicates that all configurations follow the same probability distribution.


'''Energy Distribution:''' Show that the energy distribution is given by: <math display="block"> p(E_\alpha) = \frac{1}{\sqrt{2 \pi \sigma_M^2}} \exp\left(-\frac{E_{\alpha}^2}{2 \sigma_M^2}\right) </math> and determine that: <math display="block">\sigma_M^2 = N = \frac{\log M}{\log 2}</math>
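A quick numerical sanity check of the claimed scaling (not a substitute for the exercise; sizes and seed are our own choices) is to sample the <math>M=2^N</math> i.i.d. REM energies with variance <math>N</math> and verify the identity <math>\sigma_M^2 = N = \log M/\log 2</math>:

```python
import math
import random

random.seed(2)
N = 12
M = 2 ** N                     # number of REM configurations

# The M i.i.d. REM energies, Gaussian with variance sigma_M^2 = N.
energies = [random.gauss(0.0, math.sqrt(N)) for _ in range(M)]

mean = sum(energies) / M
var = sum((E - mean) ** 2 for E in energies) / M

print(var / N)                                      # close to 1
assert abs(var / N - 1.0) < 0.1                     # empirical variance ~ N
assert abs(N - math.log(M) / math.log(2)) < 1e-12   # N = log M / log 2
```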


In the following, we present the original solution of the model. Here, we characterize the glassy phase by analyzing the statistical properties of the smallest energy values among the <math>M=2^N</math> configurations. To address this, it is necessary to make a brief detour into the theory of extreme value statistics for i.i.d. random variables.


Consider the REM spectrum of <math>M</math> energies <math>E_1, \dots, E_M</math> drawn from a distribution <math>p(E)</math>. It is useful to introduce the cumulative probability of finding an energy smaller than ''E'':
<math display="block">P(E) = \int_{-\infty}^E dx \, p(x)</math>
We also define:
<math display="block">E_{\min} = \min(E_1, \dots, E_M), \quad Q_M(E) \equiv \text{Prob}(E_{\min} > E) </math>


The statistical properties of <math>E_{\min}</math> are derived using two key relations:
* '''First relation''':
<math display="block">P(E_{\min}^{\text{typ}}) = 1/M</math>
This is an estimation of the typical value of the minimum. It is a crucial relation that will be used frequently.
* '''Second relation''':
<math display="block">Q_M(E) = (1-P(E))^M= e^{M \log(1 - P(E))} \sim \exp\left(-M P(E)\right) </math>
The first two steps are exact, but the resulting distribution depends on <math>M</math> and the precise form of <math>p(E)</math>. In contrast, the last step is an approximation, valid when <math>M \times P(E)=O(1)</math> and thus, for large <math>M</math>, when <math> P(E)\ll 1 </math>. This second relation allows us to express the random variable <math>E_{\min}</math> in a scaling form: <math>E_{\min} = a_M + b_M z</math>. The two parameters <math>a_M</math> and <math>b_M</math> are deterministic and <math>M</math>-dependent, while <math>z</math> is a random variable that is independent of <math>M</math>.
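The quality of the last approximation can be checked numerically. The sketch below (Python standard library; the Gaussian parent distribution anticipates the next section) compares the exact <math>(1-P(E))^M</math> with <math>\exp(-MP(E))</math> deep in the left tail, where <math>P(E)\ll 1</math>:

```python
import math

def gaussian_cdf(E, sigma=1.0):
    """P(E): probability that one Gaussian energy falls below E."""
    return 0.5 * (1.0 + math.erf(E / (sigma * math.sqrt(2.0))))

M = 2 ** 20

# Deep in the left tail P(E) << 1, and the exact Q_M(E) = (1-P(E))^M
# is indistinguishable from the approximation exp(-M P(E)).
for E in (-4.5, -5.0, -5.5):
    P = gaussian_cdf(E)
    exact = (1.0 - P) ** M
    approx = math.exp(-M * P)
    assert P < 1e-4
    assert abs(exact - approx) < 1e-3
```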
== Extreme value statistics for Gaussian variables ==

We consider <math>M</math> independent random variables drawn from a Gaussian distribution with zero mean and variance <math>\sigma^2</math>. Since the distribution is unbounded from below, the statistics of the minimum is controlled by the asymptotic behavior of the left tail. The technical analysis of the Gaussian tail is carried out in '''Exercise 1'''. Here we summarize the key results and emphasize their conceptual implications.




As shown in Exercise 1, the cumulative distribution can be written in the left tail (<math>E \to -\infty</math>) as
<math display="block">
P(E)=\exp(A(E)),
\qquad
A(E) = -\frac{E^2}{2\sigma^2}
- \log\!\left(\frac{\sqrt{2\pi}\,|E|}{\sigma}\right)+\ldots
</math>


with
<math display="block">
A'(E)= -\frac{E}{\sigma^2}+\ldots
</math>




The typical minimum <math>E_{\min}^{\mathrm{typ}}</math> is defined by the relation <math> P(E_{\min}^{\mathrm{typ}}) = 1/ M</math>, namely
<math display="block">
A(E_{\min}^{\mathrm{typ}}) = -\log M,
</math>




Keeping only the leading contribution
<math>A(E)\simeq -E^2/(2\sigma^2)</math>, and neglecting the logarithmic term, one immediately obtains the leading scaling
<math display="block">
E_{\min}^{\mathrm{typ}} \simeq -\sigma\sqrt{2\log M}.
</math>


A more careful analysis of the Gaussian tail, carried out in '''Exercise 1''', allows one to extract the first subleading (logarithmic) correction, yielding
<math display="block">
E_{\min}^{\mathrm{typ}}
= -\sigma \sqrt{2\log M}
+ \frac{\sigma}{2\sqrt{2\log M}}
\log\!\left(
4\pi \log M
\right).
</math>


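These formulas can be tested against a direct numerical solution of <math>P(E_{\min}^{\mathrm{typ}})=1/M</math>. In the sketch below (Python standard library; the bisection solver is our own scaffolding, not part of the argument), the subleading correction visibly improves on the leading scaling:

```python
import math

def gaussian_cdf(E, sigma=1.0):
    return 0.5 * (1.0 + math.erf(E / (sigma * math.sqrt(2.0))))

def typical_minimum(M):
    """Solve P(E) = 1/M by bisection (the first relation)."""
    lo, hi = -100.0, 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gaussian_cdf(mid) < 1.0 / M:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M = 2 ** 30
logM = math.log(M)
exact = typical_minimum(M)
leading = -math.sqrt(2.0 * logM)
corrected = leading + math.log(4.0 * math.pi * logM) / (2.0 * math.sqrt(2.0 * logM))

# The logarithmic correction brings the estimate much closer to the true root.
assert abs(corrected - exact) < abs(leading - exact)
assert abs(corrected - exact) < 0.05
```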
=== Gumbel scaling of the minimum ===


The cumulative distribution of the minimum can be written using the second relation:
<math display="block">
Q_M(E)
\sim \exp\!\bigl(-M P(E)\bigr)
= \exp\!\bigl(-M e^{A(E)}\bigr).
</math>
 


For Gaussian variables, a natural choice for the centering constant is
<math display="block">
a_M \equiv E_{\min}^{\mathrm{typ}},
\qquad
A(a_M)=-\log M.
</math>
 


Expanding <math>A(E)</math> to first order around <math>a_M</math>,
<math display="block">
A(E)\simeq A(a_M)+A'(a_M)(E-a_M),
</math>
one finds
<math display="block">
Q_M(E)\sim
\exp\!\left[-\exp\!\bigl(A'(a_M)(E-a_M)\bigr)\right].
</math>


This suggests introducing the scale
<math display="block">
b_M=\frac{1}{A'(a_M)}
= \frac{\sigma}{\sqrt{2\log M}},
</math>
and the rescaled variable
<math display="block">
z=\frac{E_{\min}-a_M}{b_M}.
</math>
In the limit of large <math>M</math>, the distribution of <math>z</math> becomes <math>M</math>-independent and converges to the Gumbel law
<math display="block">
\pi(z)=\exp(z)\,\exp(-e^{z}).
</math>
 


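The whole construction can be verified numerically without any sampling. The sketch below (Python standard library; the helper names are ours) solves <math>P(a_M)=1/M</math> for <math>a_M</math>, computes <math>b_M=1/A'(a_M)=P(a_M)/p(a_M)</math> (since <math>A=\log P</math> implies <math>A'=p/P</math>), and compares the exact <math>Q_M(E)=(1-P(E))^M</math> with the Gumbel form around <math>a_M</math>:

```python
import math

def gaussian_cdf(E, sigma=1.0):
    return 0.5 * (1.0 + math.erf(E / (sigma * math.sqrt(2.0))))

def gaussian_pdf(E, sigma=1.0):
    return math.exp(-E * E / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

M = 2 ** 24

# Centering constant a_M: solve P(a_M) = 1/M by bisection.
lo, hi = -50.0, 0.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if gaussian_cdf(mid) < 1.0 / M:
        lo = mid
    else:
        hi = mid
a_M = 0.5 * (lo + hi)

# Scale b_M = 1/A'(a_M); since A = log P, A' = p/P, so b_M = P(a_M)/p(a_M).
b_M = gaussian_cdf(a_M) / gaussian_pdf(a_M)

# Near a_M the exact Q_M(E) = (1 - P(E))^M follows the Gumbel form
# exp(-exp((E - a_M)/b_M)).
for u in (-1.0, 0.0, 1.0):
    E = a_M + u * b_M
    exact = (1.0 - gaussian_cdf(E)) ** M
    gumbel = math.exp(-math.exp(u))
    assert abs(exact - gumbel) < 0.05
```

At <math>E=a_M</math> the exact value is <math>(1-1/M)^M\to e^{-1}</math>, which is the Gumbel prediction.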
=== Universality ===


The limiting distribution of properly centered and rescaled minima depends on the tail behavior of the parent distribution. There are three classical universality classes:


* '''Gumbel class''': distributions unbounded from below whose left tail decays faster than any power (e.g. Gaussian or exponential). This is the case treated here, for which the choice <math>a_M=E_{\min}^{\mathrm{typ}}</math> and <math>b_M=1/A'(a_M)</math> leads to Gumbel scaling.


* '''Weibull class''': distributions with a finite lower bound. The minimum is controlled by the behavior near the edge of the support, and different choices of <math>a_M</math> and <math>b_M</math> are required to obtain a universal scaling form. This case is studied in '''Exercise 2'''.


* '''Fréchet class''': distributions with heavy power-law left tails. In this case the minimum is controlled by rare events in the tail, leading to a different scaling behavior. In this course we will mainly focus on the Gumbel and Weibull classes, which are the most relevant for the models discussed.


= Back to REM =


In the REM, the variance of the energies scales with the system size as
<math display="block">\sigma_M^2 = \frac{\log M}{\log 2} = N.</math>
As a consequence, the minimum energy takes the form
<math display="block">
E_{\min} = a_M + b_M z
= - \sqrt{2 \log 2}\, N
+ \frac{1}{2}\, \frac{\log (4 \pi N \log 2)}{\sqrt{2 \log 2}}
+ \frac{z}{\sqrt{2 \log 2}},
</math>
where <math>z</math> is a Gumbel-distributed random variable.
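This scaling form can be checked by direct simulation (Python standard library; the sizes are kept small for speed, so the agreement is only approximate). We compare the sample mean of <math>E_{\min}</math> with <math>a_M-\gamma b_M</math>, using the fact that the Gumbel variable <math>z</math> has mean <math>-\gamma</math>, where <math>\gamma</math> is the Euler–Mascheroni constant:

```python
import math
import random

random.seed(3)
N = 12                      # kept small so the simulation runs quickly
M = 2 ** N
c = math.sqrt(2.0 * math.log(2.0))

# Deterministic parameters of the scaling form E_min = a_M + b_M z.
a_M = -c * N + 0.5 * math.log(4.0 * math.pi * N * math.log(2.0)) / c
b_M = 1.0 / c

sigma = math.sqrt(N)
minima = [min(random.gauss(0.0, sigma) for _ in range(M)) for _ in range(200)]
mean_min = sum(minima) / len(minima)

# The Gumbel variable z has mean -gamma (Euler-Mascheroni constant),
# so the sample mean of E_min should sit near a_M - gamma * b_M.
gamma = 0.5772156649
predicted = a_M - gamma * b_M
print(mean_min, predicted)
assert abs(mean_min - predicted) < 0.35
```

Note that the deviation of <math>E_{\min}</math> from the extensive leading term <math>-\sqrt{2\log 2}\,N</math> is of order one, not of order <math>N</math>: this is the origin of the statements below about fluctuations.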


This result fixes the zero-temperature free energy <math>F_N(\beta=\infty)=E_{\min}</math>. Its leading contribution is deterministic and extensive, <math>F_N(\beta=\infty)\sim -\sqrt{2 \log 2}\, N</math>.


* Sample-to-sample fluctuations are ''N''-independent, with a standard deviation of order <math>b_M = 1/\sqrt{2 \log 2}</math>.


== Phase Transition in the Random Energy Model ==
As shown in '''Exercise 3''', the average number of states within an energy window ''x'' above the ground state is
<math display="block">
\overline{n(x)} = e^{x/b_M}-1,
</math>
where <math>b_M</math> is the scaling parameter of the minimum.


Let us denote by <math>z_\alpha = e^{-\beta E_\alpha}</math> the Boltzmann weights. We compare the total weight of all excited states to that of the ground state:
<math display="block">
\frac{\sum_\alpha z_\alpha}{z_{\alpha_{\min}}}
= 1 + \sum_{\alpha\neq\alpha_{\min}} e^{-\beta(E_\alpha-E_{\min})}.
</math>
Replacing the discrete sum by an integral over energy differences <math>x=E-E_{\min}</math>, and using the result of Exercise 3, we obtain
<math display="block">
\sum_{\alpha\neq\alpha_{\min}} e^{-\beta(E_\alpha-E_{\min})}
\;\sim\;
\int_0^\infty dx\,\frac{1}{b_M}e^{x/b_M}e^{-\beta x}.
</math>
The integral converges if and only if
<math display="block">
\beta > \beta_f \equiv \frac{1}{b_M}.
</math>
 


== Freezing transition ==
We thus identify the freezing temperature as
<math display="block">
T_f = \frac{1}{\beta_f} = b_M.
</math>
For <math>T>T_f</math>, the contribution of excited states dominates and the Gibbs measure is spread over an exponential number of configurations.
For <math>T<T_f</math>, only the lowest-energy states contribute to the partition function: the system is frozen into a glassy phase.



Latest revision as of 16:19, 1 March 2026

Goal: the spin glass transition, from the experimental anomaly in the magnetic susceptibility to the order parameter of the transition. We will also discuss arguments based on extreme value statistics.


= Spin glass: Experiments and models =

Spin glass behavior was first observed in experiments with non-magnetic metals (such as Cu, Au, or Ag) doped with a small percentage of magnetic impurities, typically Mn or Fe. At low doping levels, the magnetic moments of the impurity atoms interact via the Ruderman–Kittel–Kasuya–Yosida (RKKY) interaction. This interaction has a random sign due to the random spatial distribution of the impurities within the non-magnetic metal. A freezing temperature, <math>T_f</math>, separates the high-temperature paramagnetic phase from the low-temperature spin glass phase:

* Above <math>T_f</math>: The magnetic susceptibility follows the standard Curie law, <math>\chi(T) \propto 1/T</math>.
* Below <math>T_f</math>: Strong metastability emerges, leading to differences between the field-cooled (FC) and zero-field-cooled (ZFC) protocols:
(i) In the ZFC protocol, the susceptibility decreases with decreasing temperature <math>T</math>.
(ii) In the FC protocol, the susceptibility freezes at <math>T_f</math>, remaining constant at <math>\chi_{FC}(T<T_f)=\chi(T_f)</math>.

Understanding whether these data reveal a true thermodynamic transition and determining the nature of this new "glassy" phase remain open challenges to this day. However, in the early 1980s, spin glass models were successfully solved within the mean-field approximation. In this limit, it is possible to determine the phase diagram and demonstrate the existence of a glassy phase where the entropy vanishes at a finite temperature. Furthermore, a condensation of the Gibbs measure onto a few configurations is observed.

= Edwards-Anderson model =

The first significant theoretical attempt to describe spin glasses is the Edwards-Anderson model. For simplicity, we will consider the Ising version of this model.


Despite its simple definition, the Edwards–Anderson model is a very hard problem. No analytical solution is known. Numerical simulations are also difficult and limited to small system sizes. This is due to frustration and to the resulting complex energy landscape.

Nevertheless, this model already allows us to discuss two key features of disordered systems:

* '''Self-averaging.'''
Do macroscopic observables become independent of the disorder realization in the thermodynamic limit?
* '''Glassy behavior.'''
Does the system undergo a spin-glass transition even in the absence of geometrical order?


=== Random energy landscape ===

In a system with <math>N</math> degrees of freedom, the number of configurations grows exponentially with <math>N</math>. For simplicity, consider Ising spins that take two values, <math>\sigma_i=\pm 1</math>, located on a lattice of size <math>L</math> in <math>d</math> dimensions. In this case, <math>N=L^d</math> and the number of configurations is <math>M=2^N=e^{N\log 2}</math>.


The energy of a given configuration is a random quantity because each system corresponds to a different realization of the disorder. In an experiment, this means that each of us has a different physical sample; in a numerical simulation, it means that each of us has generated a different set of couplings <math>J_{ij}</math>.


The same reasoning applies to each of the <math>M=2^N</math> configurations. As a result, in a disordered system the entire energy landscape is random and sample-dependent.

=== Deterministic observables ===

A crucial question is whether the macroscopic properties measured on a given sample are themselves random or not. Our everyday experience suggests that they are not: materials like glass, ceramics, or bronze have well-defined, reproducible physical properties that can be reliably controlled for industrial applications.


Since <math>\overline{J}=0</math>, the model does not exhibit spatial magnetic order, such as ferromagnetic or antiferromagnetic order. Instead, the idea is to distinguish between two phases:
* Paramagnetic phase: Configurations are explored with all possible spin orientations.
* Spin glass phase: Spin orientations are random but frozen (i.e., immobile).
The glass phase is characterized by long-range correlations in time, despite the absence of long-range correlations in space.


In the paramagnetic phase, <math>q_{EA}=0</math>, while in the spin glass phase, <math>q_{EA}>0</math>.

This raises the question of whether the transition at <math>T_f</math> is truly thermodynamic in nature. Indeed, in the definition of the Edwards–Anderson parameter, time explicitly appears, and the magnetic susceptibility does not diverge at the freezing temperature <math>T_f</math>.

In ferromagnets, the divergence of the magnetic susceptibility is due to the fact that the magnetization <math>M=\sum_i \sigma_i</math> acts as the order parameter, distinguishing the ordered and disordered phases. In contrast, in spin glasses the magnetization vanishes in both phases, and the order parameter is <math>q_{EA}</math>.

Experiments show that <math>a_3</math> and <math>a_5</math> exhibit singular behavior, providing evidence for a thermodynamic transition at <math>T_f</math>.

== Simpler models ==

=== Sherrington–Kirkpatrick (SK) Model ===

Sherrington and Kirkpatrick considered the fully connected version of the model with Gaussian couplings:
<math display="block"> E = - \sum_{i, j} \frac{J_{ij}}{\sqrt{N}} \sigma_i \sigma_j. </math>
At inverse temperature <math>\beta</math>, the partition function of the model is
<math display="block"> Z = \sum_{\alpha = 1}^{2^N} z_\alpha, \qquad \text{with} \quad z_\alpha = e^{-\beta E_\alpha}. </math>
Here <math>E_\alpha</math> is the energy associated with the configuration <math>\alpha</math>.
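For small <math>N</math>, the partition function above can be evaluated exactly by brute-force enumeration of the <math>2^N</math> configurations. A minimal sketch, assuming independent (non-symmetrized) couplings on ordered pairs <math>i \neq j</math>:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

def sk_partition_function(N, beta):
    """Exact Z of one SK sample by enumerating all 2^N configurations."""
    J = rng.normal(size=(N, N))
    np.fill_diagonal(J, 0.0)            # couplings on ordered pairs i != j
    Z = 0.0
    for sigma in product([-1, 1], repeat=N):
        s = np.array(sigma)
        E = -(s @ J @ s) / np.sqrt(N)   # E = -sum_{i,j} J_ij s_i s_j / sqrt(N)
        Z += np.exp(-beta * E)
    return Z

# at beta = 0 every configuration has weight 1, so Z = 2^N
print(sk_partition_function(8, beta=0.0))   # 256.0
print(sk_partition_function(8, beta=1.0))
```

Enumeration is only feasible up to <math>N \sim 20</math>; the exponential cost is precisely why simplified models such as the REM below are useful.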

=== Random Energy Model (REM) ===

The solution of the Sherrington–Kirkpatrick (SK) model is challenging. To make progress, we first study the Random Energy Model (REM), introduced by B. Derrida. This model simplifies the problem by neglecting correlations between the <math>M = 2^N</math> configurations and assuming that the energies <math>E_\alpha</math> are independent and identically distributed (i.i.d.) random variables. Here, "independent" means that the energy of one configuration does not influence that of another, even one that differs from it by a single spin flip. "Identically distributed" means that all configurations follow the same probability distribution.

'''Energy Distribution:''' Show that the energy distribution is given by
<math display="block"> p(E_\alpha) = \frac{1}{\sqrt{2 \pi \sigma_M^2}} \exp\left( - \frac{E_\alpha^2}{2 \sigma_M^2} \right), </math>
and determine that
<math display="block"> \sigma_M^2 = N = \frac{\log M}{\log 2}. </math>
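The claim <math>\sigma_M^2 \approx N</math> can be checked numerically: fixing one configuration and drawing many realizations of the couplings of the fully connected model, the energy is a sum of <math>N(N-1)</math> i.i.d. terms of variance <math>1/N</math>, so its variance is <math>N - 1 \approx N</math>. A sketch, assuming independent (non-symmetrized) couplings <math>J_{ij}</math>:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_samples = 30, 20000

sigma = np.where(rng.random(N) < 0.5, -1, 1)     # one fixed spin configuration
E = np.empty(n_samples)
for s in range(n_samples):
    J = rng.normal(size=(N, N))
    np.fill_diagonal(J, 0.0)                     # no self-coupling
    E[s] = -(sigma @ J @ sigma) / np.sqrt(N)     # sum over ordered pairs i != j
print(E.mean(), E.var())                         # mean ~ 0, variance ~ N
```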

In the following, we present the original solution of the model. Here, we characterize the glassy phase by analyzing the statistical properties of the smallest energy values among the <math>M = 2^N</math> configurations. To address this, it is necessary to make a brief detour into the theory of extreme value statistics for i.i.d. random variables.

== Detour: Extreme Value Statistics ==

Consider the REM spectrum of <math>M</math> energies <math>E_1, \ldots, E_M</math> drawn from a distribution <math>p(E)</math>. It is useful to introduce the cumulative probability of finding an energy smaller than <math>E</math>:
<math display="block"> P(E) = \int_{-\infty}^{E} dx \, p(x). </math>
We also define
<math display="block"> E_{\min} = \min(E_1, \ldots, E_M), \qquad Q_M(E) \equiv \text{Prob}(E_{\min} > E). </math>

The statistical properties of <math>E_{\min}</math> are derived using two key relations:

* '''First relation:'''
<math display="block"> P(E_{\min}^{\mathrm{typ}}) = 1/M. </math>
This is an estimate of the typical value of the minimum. It is a crucial relation that will be used frequently.

* '''Second relation:'''
<math display="block"> Q_M(E) = (1 - P(E))^M = e^{M \log(1 - P(E))} \approx \exp(-M P(E)). </math>
The first two steps are exact, but the resulting distribution depends on <math>M</math> and on the precise form of <math>p(E)</math>. In contrast, the last step is an approximation, valid when <math>M \, P(E) = O(1)</math> and thus, for large <math>M</math>, when <math>P(E) \ll 1</math>. This second relation allows one to express the random variable <math>E_{\min}</math> in a scaling form:
<math display="block"> E_{\min} = a_M + b_M z. </math>
The two parameters <math>a_M</math> and <math>b_M</math> are deterministic and <math>M</math>-dependent, while <math>z</math> is a random variable that is independent of <math>M</math>.
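The quality of the last approximation is easy to check for standard Gaussian variables: near the typical minimum, where <math>M \, P(E) = O(1)</math>, the exact <math>(1 - P(E))^M</math> and the approximation <math>\exp(-M P(E))</math> agree to high accuracy. A small sketch (the value of <math>E</math> is chosen by hand near the typical minimum for this <math>M</math>):

```python
import math

M = 10**6

def P(E):
    """Gaussian cumulative P(E) = Prob(X < E), via the complementary error function."""
    return 0.5 * math.erfc(-E / math.sqrt(2))

E = -4.75                     # close to the typical minimum for M = 10^6
exact  = (1 - P(E))**M        # Q_M(E), exact
approx = math.exp(-M * P(E))  # second-relation approximation
print(exact, approx)
```

The relative error of the last step is of order <math>M P(E)^2</math>, which is tiny in the regime <math>M P(E) = O(1)</math>.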

=== Extreme value statistics for Gaussian variables ===

We consider <math>M</math> independent random variables drawn from a Gaussian distribution with zero mean and variance <math>\sigma^2</math>. Since the distribution is unbounded from below, the statistics of the minimum is controlled by the asymptotic behavior of the left tail. The technical analysis of the Gaussian tail is carried out in Exercise 1. Here we summarize the key results and emphasize their conceptual implications.


As shown in Exercise 1, the cumulative distribution in the left tail (<math>E \to -\infty</math>) can be written as
<math display="block"> P(E) = \exp(-A(E)), \qquad A(E) = \frac{E^2}{2 \sigma^2} + \log\left( \sqrt{2 \pi} \, \frac{|E|}{\sigma} \right) + \cdots </math>
with
<math display="block"> A'(E) = \frac{E}{\sigma^2} + \cdots </math>


The typical minimum <math>E_{\min}^{\mathrm{typ}}</math> is defined by the relation <math>P(E_{\min}^{\mathrm{typ}}) = 1/M</math>, namely
<math display="block"> A(E_{\min}^{\mathrm{typ}}) = \log M. </math>


Keeping only the leading contribution <math>A(E) \approx E^2 / (2 \sigma^2)</math>, and neglecting the logarithmic term, one immediately obtains the leading scaling
<math display="block"> E_{\min}^{\mathrm{typ}} \approx - \sigma \sqrt{2 \log M}. </math>

A more careful analysis of the Gaussian tail, carried out in Exercise 1, allows one to extract the first subleading (logarithmic) correction, yielding
<math display="block"> E_{\min}^{\mathrm{typ}} = - \sigma \sqrt{2 \log M} \left( 1 - \frac{1}{4} \frac{\log(4 \pi \log M)}{\log M} + \cdots \right). </math>
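The logarithmic correction is sizeable at any accessible <math>M</math>, as a quick simulation shows. A sketch (the sample mean of the minimum is compared with the typical value; the two differ by fluctuation effects of order <math>b_M</math>, which are smaller than the logarithmic correction):

```python
import numpy as np

rng = np.random.default_rng(3)
M, trials = 10**5, 400

# minimum of M standard Gaussians, repeated over many disorder samples
mins = np.array([rng.normal(size=M).min() for _ in range(trials)])

logM = np.log(M)
leading   = -np.sqrt(2 * logM)   # -sigma*sqrt(2 log M), with sigma = 1
corrected = leading * (1 - 0.25 * np.log(4 * np.pi * logM) / logM)
print(mins.mean(), corrected, leading)
```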

=== Gumbel scaling of the minimum ===

The cumulative distribution of the minimum is obtained from the second relation:
<math display="block"> Q_M(E) \approx \exp(-M P(E)) = \exp\left( - M e^{-A(E)} \right). </math>


For Gaussian variables, a natural choice for the centering constant is
<math display="block"> a_M \equiv E_{\min}^{\mathrm{typ}}, \qquad A(a_M) = \log M. </math>


Expanding <math>A(E)</math> to first order around <math>a_M</math>, <math>A(E) \approx A(a_M) + A'(a_M) (E - a_M)</math>, one finds
<math display="block"> Q_M(E) \approx \exp\left[ - \exp\left( |A'(a_M)| \, (E - a_M) \right) \right]. </math>

This suggests introducing the scale
<math display="block"> b_M = \frac{1}{|A'(a_M)|} = \frac{\sigma}{\sqrt{2 \log M}}, </math>
and the rescaled variable <math>z = (E_{\min} - a_M)/b_M</math>. In the limit of large <math>M</math>, the distribution of <math>z</math> becomes <math>M</math>-independent and converges to the Gumbel law
<math display="block"> \pi(z) = \exp(z) \exp\left( - e^{z} \right). </math>
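A direct numerical test (a sketch): minima of <math>M</math> standard Gaussians, centered with <math>a_M</math> and rescaled with <math>b_M</math>, should approach a Gumbel variable, whose mean is <math>-\gamma_E \approx -0.577</math> (Euler–Mascheroni constant) and whose variance is <math>\pi^2/6</math>. Convergence is slow, with <math>1/\log M</math> corrections, so agreement is only approximate at moderate <math>M</math>:

```python
import numpy as np

rng = np.random.default_rng(4)
M, trials = 10**5, 1000

mins = np.array([rng.normal(size=M).min() for _ in range(trials)])

logM = np.log(M)
aM = -np.sqrt(2 * logM) * (1 - 0.25 * np.log(4 * np.pi * logM) / logM)
bM = 1 / np.sqrt(2 * logM)
z = (mins - aM) / bM

# Gumbel law pi(z) = exp(z - e^z): mean -gamma_E, variance pi^2/6
print(z.mean(), -np.euler_gamma)
print(z.var(), np.pi**2 / 6)
```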


=== Universality ===

The limiting distribution of properly centered and rescaled minima depends on the tail behavior of the parent distribution. There are three classical universality classes:

* '''Gumbel class:''' distributions unbounded from below whose left tail decays faster than any power law (e.g. Gaussian or exponential). This is the case treated here, for which the choice <math>a_M = E_{\min}^{\mathrm{typ}}</math> and <math>b_M = 1/|A'(a_M)|</math> leads to Gumbel scaling.
* '''Weibull class:''' distributions with a finite lower bound. The minimum is controlled by the behavior near the edge of the support, and different choices of <math>a_M</math> and <math>b_M</math> are required to obtain a universal scaling form. This case is studied in Exercise 2.
* '''Fréchet class:''' distributions with heavy power-law left tails. In this case the minimum is controlled by rare events in the tail, leading to a different scaling behavior. In this course we will mainly focus on the Gumbel and Weibull classes.

== Back to REM ==

In the REM, the variance of the energies scales with the system size as <math>\sigma_M^2 = \log M / \log 2 = N</math>. As a consequence, the minimum energy takes the form
<math display="block"> E_{\min} = a_M + b_M z = - \sqrt{2 \log 2} \, N + \frac{\log(4 \pi N \log 2)}{2 \sqrt{2 \log 2}} + \frac{z}{\sqrt{2 \log 2}}, </math>
where <math>z</math> is a Gumbel-distributed random variable.
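These predictions can be tested by direct sampling of REM spectra at moderate <math>N</math>. A sketch (finite-size corrections are still visible at <math>N = 20</math>):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 20
M = 2**N
samples = 50

# ground-state energy of independent REM realizations (variance N Gaussians)
Emin = np.array([rng.normal(0.0, np.sqrt(N), M).min() for _ in range(samples)])

sq = np.sqrt(2 * np.log(2))
aM = -sq * N + np.log(4 * np.pi * N * np.log(2)) / (2 * sq)
print(Emin.mean(), aM)        # typical value, subleading correction included
print(Emin.std(), 1 / sq)     # fluctuations of order b_M, independent of N
```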

Key observations:

* At zero temperature (<math>\beta = \infty</math>), the ground-state energy is self-averaging. Its leading contribution is deterministic and extensive, <math>F_N(\beta = \infty) \approx - \sqrt{2 \log 2} \, N</math>.
* Sample-to-sample fluctuations are <math>N</math>-independent, with a standard deviation set by <math>b_M = 1/\sqrt{2 \log 2}</math>.

== Phase Transition in the Random Energy Model ==

The Random Energy Model (REM) exhibits two distinct phases:

* '''High-temperature phase.''' At high temperature, the system is paramagnetic. The entropy is extensive and each configuration is occupied with probability of order <math>1/M</math>.
* '''Low-temperature phase.''' Below a critical freezing temperature <math>T_f</math>, the system enters a glassy phase. The entropy becomes subextensive (its extensive contribution vanishes), and only a finite number of configurations carry non-vanishing statistical weight.

=== Determination of the freezing temperature ===

We now use the result of Exercise 3, which states that for energies whose extreme value statistics belongs to the Gumbel universality class, the average number of states within an energy window <math>x</math> above the ground state is
<math display="block"> n(x) = e^{x / b_M} - 1, </math>
where <math>b_M</math> is the scaling parameter of the minimum.

Let <math>\alpha_{\min}</math> denote the ground state, with energy <math>E_{\min}</math>, and <math>z_\alpha = e^{-\beta E_\alpha}</math> the Boltzmann weights. We compare the total weight of all excited states to that of the ground state:
<math display="block"> \frac{\sum_\alpha z_\alpha}{z_{\alpha_{\min}}} = 1 + \sum_{\alpha \neq \alpha_{\min}} e^{-\beta (E_\alpha - E_{\min})}. </math>
Replacing the discrete sum by an integral over energy differences <math>x = E - E_{\min}</math>, and using the result of Exercise 3, we obtain
<math display="block"> \sum_{\alpha \neq \alpha_{\min}} e^{-\beta (E_\alpha - E_{\min})} \approx \int_0^\infty dx \, \frac{dn(x)}{dx} \, e^{-\beta x} = \int_0^\infty dx \, \frac{1}{b_M} e^{x / b_M} e^{-\beta x}. </math>
The integral converges if and only if <math>\beta > \beta_f \equiv 1/b_M</math>.
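This convergence criterion can be observed in direct sampling. A sketch (the median over disorder is used because the weight ratio has heavy-tailed sample-to-sample fluctuations):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 18
M = 2**N
Tf = 1 / np.sqrt(2 * np.log(2))       # freezing temperature T_f = b_M

def weight_ratio(T, samples=20):
    """Median over disorder of sum_alpha z_alpha / z_{alpha_min}."""
    r = []
    for _ in range(samples):
        E = rng.normal(0.0, np.sqrt(N), M)
        r.append(np.exp(-(E - E.min()) / T).sum())
    return np.median(r)

# below T_f the ratio stays O(1); above T_f it is exponentially large in N
print(weight_ratio(0.4 * Tf), weight_ratio(2.5 * Tf))
```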


=== Freezing transition ===

We thus identify the freezing temperature as <math>T_f = 1/\beta_f = b_M</math>. For <math>T > T_f</math>, the integral diverges: the contribution of excited states dominates and the Gibbs measure is spread over an exponential number of configurations. For <math>T < T_f</math>, the integral converges, signaling that only a finite number of lowest-energy states contribute to the partition function: the system is frozen into a glassy phase.

== References ==

* ''Spin glass i-vii'', P. W. Anderson, Physics Today (1988).
* ''Spin glasses: Experimental signatures and salient outcome'', E. Vincent and V. Dupuis, Frustrated Materials and Ferroic Glasses 31 (2018).
* ''Theory of spin glasses'', S. F. Edwards and P. W. Anderson, J. Phys. F: Met. Phys. 5, 965 (1975).
* ''Non-linear susceptibility in spin glasses and disordered systems'', H. Bouchiat, J. Phys.: Condens. Matter 9, 1811 (1997).
* ''Solvable Model of a Spin-Glass'', D. Sherrington and S. Kirkpatrick, Phys. Rev. Lett. 35, 1792 (1975).
* ''Random-Energy Model: An Exactly Solvable Model of Disordered Systems'', B. Derrida, Phys. Rev. B 24, 2613 (1981).
* ''Extreme value statistics of correlated random variables: a pedagogical review'', S. N. Majumdar, A. Pal, and G. Schehr, Physics Reports 840, 1-32 (2020).