LBan-1

From Disordered Systems Wiki
Revision as of 17:59, 3 August 2025


=Overview=

This lesson is structured in three parts:

  • Self-averaging and disorder in statistical systems

Disordered systems are characterized by a random energy landscape, whose microscopic details vary from sample to sample. In the thermodynamic limit, however, physical observables become deterministic. This property is known as self-averaging. The partition function, in contrast, is in general not self-averaging.

  • The Random Energy Model

We study the Random Energy Model (REM) introduced by Bernard Derrida. In this model, each configuration is assigned an independent energy drawn from a Gaussian distribution. The model exhibits a freezing transition at a critical temperature, below which the free energy becomes dominated by the lowest energy states.

  • Extreme value statistics and saddle-point analysis

The results obtained from a saddle-point approximation can be recovered using the tools of extreme value statistics. In the REM, the low-temperature phase is governed by the minimum of a large set of independent energy values.

=Part I=

== Random energy landscape ==

In a system with <math>N</math> degrees of freedom, the number of configurations grows exponentially with <math>N</math>. For simplicity, consider Ising spins that take two values, <math>\sigma_i = \pm 1</math>, located on a lattice of size <math>L</math> in <math>d</math> dimensions. In this case, <math>N = L^d</math> and the number of configurations is <math>M = 2^N = e^{N \log 2}</math>.

In the presence of disorder, the energy associated with a given configuration becomes a random quantity. For instance, in the Edwards-Anderson model:

<center><math> E[\{\sigma_i\}] = - \sum_{\langle i, j \rangle} J_{ij} \sigma_i \sigma_j, </math></center>

where the sum runs over nearest neighbors <math>\langle i, j \rangle</math>, and the couplings <math>J_{ij}</math> are independent and identically distributed (i.i.d.) Gaussian random variables with zero mean and unit variance.

The energy of a given configuration is a random quantity because each system corresponds to a different realization of the disorder. In an experiment, this means that each of us has a different physical sample; in a numerical simulation, it means that each of us has generated a different set of couplings <math>J_{ij}</math>.
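As a concrete sketch, here is a minimal NumPy implementation of the Edwards-Anderson energy for one realization of the disorder. The choice of a periodic square lattice (<math>d = 2</math>), the size, and the seed are illustrative assumptions, not specified in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 10          # linear size of the square lattice (d = 2, so N = L**2)
N = L * L

# One realization of the disorder: i.i.d. standard Gaussian couplings on the
# horizontal and vertical bonds of the lattice (periodic boundary conditions).
J_right = rng.standard_normal((L, L))   # coupling between (i, j) and (i, j+1)
J_down  = rng.standard_normal((L, L))   # coupling between (i, j) and (i+1, j)

def energy(sigma, J_right, J_down):
    """Edwards-Anderson energy E = -sum over bonds of J_ij * sigma_i * sigma_j."""
    e_right = np.sum(J_right * sigma * np.roll(sigma, -1, axis=1))
    e_down  = np.sum(J_down  * sigma * np.roll(sigma, -1, axis=0))
    return -(e_right + e_down)

# Energy of one particular configuration (all spins up) in this sample:
sigma_up = np.ones((L, L))
E_up = energy(sigma_up, J_right, J_down)
```

Rerunning with a different seed generates a different set of couplings, and hence a different value of `E_up`: this is exactly the sample-to-sample randomness described above.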


To illustrate this, consider a single configuration, for example the one where all spins are up. The energy of this configuration is given by the sum of all the couplings between neighboring spins:

<center><math> E[\sigma_1=1,\sigma_2=1,\ldots] = - \sum_{\langle i, j \rangle} J_{ij}. </math></center>

Since the couplings are random, the energy associated with this particular configuration is itself a Gaussian random variable, with zero mean and a variance proportional to the number of terms in the sum, that is, of order <math>N</math>. The same reasoning applies to each of the <math>M = 2^N</math> configurations. So, in a disordered system, the entire energy landscape is random and sample-dependent.
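This Gaussian statistics can be checked numerically. On a periodic square lattice (an assumed geometry) the all-up configuration has energy equal to minus the sum of all <math>2L^2</math> couplings, so over many disorder realizations its empirical variance should match the number of bonds:

```python
import numpy as np

rng = np.random.default_rng(1)
samples = 20000

ratios = {}
for L in (6, 12):
    n_bonds = 2 * L * L  # bonds of a periodic square lattice with N = L**2 spins
    # For the all-up configuration, E = -(sum of all couplings), so each
    # disorder sample yields one Gaussian variable of variance n_bonds.
    E = -rng.standard_normal((samples, n_bonds)).sum(axis=1)
    ratios[L] = E.var() / n_bonds

print(ratios)  # both ratios close to 1: variance grows with the number of bonds
```

The ratio stays near 1 for both sizes, confirming that the variance of a single configuration's energy is proportional to the number of terms in the sum, i.e. of order <math>N</math>.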

=Part II=

== Self-averaging observables ==

A crucial question is whether the physical properties measured on a given sample are themselves random or not. Our everyday experience suggests that they are not: materials like glass, ceramics, or bronze have well-defined, reproducible physical properties that can be reliably controlled for industrial applications.

From a more mathematical point of view, it means that physical observables, such as the free energy and its derivatives (magnetization, specific heat, susceptibility, etc.), are self-averaging. This means that, in the limit <math>N \to \infty</math>, the distribution of such an observable <math>X_N</math> concentrates around its average:

<center><math> \lim_{N \to \infty} X_N = \lim_{N \to \infty} \overline{X_N}. </math></center>

Hence macroscopic observables become effectively deterministic, and their fluctuations from sample to sample vanish in relative terms: for a self-averaging observable <math>X_N</math>,

<center><math> \lim_{N \to \infty} \frac{\overline{X_N^2} - \overline{X_N}^2}{\overline{X_N}^2} = 0. </math></center>
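Self-averaging of the free energy can be observed directly in the REM introduced in the overview. The sketch below assumes Derrida's normalization <math>\mathrm{Var}(E) = N/2</math> (the text only specifies Gaussian energies of variance of order <math>N</math>) and an inverse temperature in the high-temperature phase; it measures how the sample-to-sample spread of the free energy per spin shrinks with <math>N</math>:

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0  # inverse temperature, chosen in the high-temperature phase

def free_energy_density(N):
    """Free energy per spin of one REM sample: f = -log(Z) / (beta * N).

    Assumes Var(E) = N/2 (Derrida's convention, an assumption here)."""
    E = np.sqrt(N / 2) * rng.standard_normal(2**N)  # 2**N i.i.d. energies
    x = -beta * E
    logZ = x.max() + np.log(np.exp(x - x.max()).sum())  # stable log-sum-exp
    return -logZ / (beta * N)

# Sample-to-sample standard deviation of f for two system sizes:
stds = {N: np.std([free_energy_density(N) for _ in range(300)]) for N in (8, 16)}
print(stds)  # the spread at N = 16 is markedly smaller than at N = 8
```

Doubling <math>N</math> visibly reduces the fluctuations of the free energy density across disorder samples, which is the self-averaging behavior stated above.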