LBan-1
Overview
This lesson is structured in three parts:
- Self-averaging and disorder in statistical systems
Disordered systems are characterized by a random energy landscape, where the microscopic details vary from sample to sample. However, in the thermodynamic limit, physical observables become deterministic. This property is known as self-averaging. This is in general not the case for the partition function <math>\overline{Z}</math>.
- The Random Energy Model
We study the Random Energy Model (REM) introduced by Bernard Derrida. In this model, each configuration is assigned an independent energy drawn from a Gaussian distribution. The model exhibits a freezing transition at a critical temperature, below which the free energy becomes dominated by the lowest-energy states.
- Extreme value statistics and saddle-point analysis
The results obtained from a saddle-point approximation can be recovered using the tools of extreme value statistics. In the REM, the low-temperature phase is governed by the minimum of a large set of independent energy values, as illustrated in the numerical sketch below.
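To make these ideas concrete, here is a minimal numerical sketch of the REM, not part of the original lesson: it draws independent Gaussian energies for all <math>2^N</math> configurations and evaluates both the free energy per spin and the minimal energy per spin. The system size, the temperature, and the variance convention <math>\mathrm{Var}(E) = N/2</math> are illustrative assumptions (the lesson only states that the energies are Gaussian).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

N = 16                 # number of spins; kept small so all 2**N energies fit in memory
M = 2**N               # number of configurations
T = 0.5                # temperature (illustrative value)

# REM: each configuration gets an i.i.d. Gaussian energy.
# Variance N/2 is an assumed convention, not specified in the lesson.
E = rng.normal(loc=0.0, scale=np.sqrt(N / 2.0), size=M)

# Free energy per spin, computed with the log-sum-exp max trick for stability.
a = (-E / T).max()
f = -T * (a + np.log(np.exp(-E / T - a).sum())) / N

print("minimum energy per spin:", E.min() / N)   # dominates the low-T phase
print("free energy per spin:   ", f)
</syntaxhighlight>

Rerunning this for increasing <math>N</math> shows the minimal energy per spin settling around a finite value, the mechanism behind the freezing transition mentioned above.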
Random energy landscape
In a system with <math>N</math> degrees of freedom, the number of configurations grows exponentially with <math>N</math>. For simplicity, consider Ising spins that take two values, <math>S_i = \pm 1</math>, located on a lattice of size <math>L</math> in <math>d</math> dimensions. In this case, <math>N = L^d</math> and the number of configurations is <math>2^N</math>.
In the presence of disorder, the energy associated with a given configuration becomes a random quantity. For instance, in the Edwards-Anderson model:

<math>E[\{S_i\}] = -\sum_{\langle i,j \rangle} J_{ij} S_i S_j,</math>

where the sum runs over nearest neighbors <math>\langle i,j \rangle</math>, and the couplings <math>J_{ij}</math> are independent and identically distributed (i.i.d.) Gaussian random variables with zero mean and unit variance.
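As a concrete illustration, the following sketch (not from the lesson) builds one disorder realization on a small two-dimensional lattice and evaluates this energy for one spin configuration; the lattice size and the periodic boundary conditions are assumptions of the example.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

L = 10                                 # linear lattice size (illustrative)
S = rng.choice([-1, 1], size=(L, L))   # one Ising configuration

# One realization of the disorder: i.i.d. Gaussian couplings with zero mean
# and unit variance, one for each horizontal and each vertical bond.
J_right = rng.normal(size=(L, L))      # coupling to the right neighbor
J_down = rng.normal(size=(L, L))       # coupling to the neighbor below

# E = -sum_<ij> J_ij S_i S_j, with periodic boundary conditions (assumed).
E = -(np.sum(J_right * S * np.roll(S, -1, axis=1))
      + np.sum(J_down * S * np.roll(S, -1, axis=0)))

print("energy of this configuration in this disorder sample:", E)
</syntaxhighlight>

Rerunning with a different seed changes <math>J_{ij}</math>, and hence the energy of the very same configuration, which is the sample-to-sample randomness discussed next.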
The energy of a given configuration is a random quantity because each system corresponds to a different realization of the disorder. In an experiment, this means that each of us has a different physical sample; in a numerical simulation, it means that each of us has generated a different set of couplings <math>J_{ij}</math>.
To illustrate this, consider a single configuration, for example the one where all spins are up. The energy of this configuration is given by the sum of all the couplings between neighboring spins:

<math>E = -\sum_{\langle i,j \rangle} J_{ij}.</math>
Since the couplings are random, the energy associated with this particular configuration is itself a Gaussian random variable, with zero mean and a variance proportional to the number of terms in the sum, that is, of order <math>N</math>. The same reasoning applies to each of the <math>2^N</math> configurations. So, in a disordered system, the entire energy landscape is random and sample-dependent.
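This scaling is easy to check numerically. In the sketch below (an illustration, with lattice size and sample count chosen arbitrarily), the energy of the all-up configuration is minus the sum of all couplings, so over many disorder realizations its sample mean should be close to zero and its sample variance close to the number of bonds, which is of order <math>N</math>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

L = 10                  # linear size: N = L**2 spins, 2*L**2 bonds (periodic lattice)
n_bonds = 2 * L * L
n_samples = 20000       # number of disorder realizations (illustrative)

# Energy of the all-up configuration: E = -(sum of all couplings),
# i.e. minus a sum of n_bonds i.i.d. standard Gaussian variables.
E = -rng.normal(size=(n_samples, n_bonds)).sum(axis=1)

print("sample mean:    ", E.mean())   # close to 0
print("sample variance:", E.var())    # close to n_bonds = 200
</syntaxhighlight>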
Self-averaging observables
A crucial question is whether the physical properties measured on a given sample are themselves random or not. Our everyday experience suggests that they are not: materials like glass, ceramics, or bronze have well-defined, reproducible physical properties that can be reliably controlled for industrial applications.
From a more mathematical point of view, it means that physical observables, such as the free energy and its derivatives (magnetization, specific heat, susceptibility, etc.), are self-averaging. This means that, in the limit <math>N \to \infty</math>, the distribution of the observable concentrates around its average. For instance, for the free energy per degree of freedom <math>f_N = F_N/N</math>:

<math>\lim_{N \to \infty} P_N(f) = \delta\left(f - \overline{f}\right), \qquad \overline{f} = \lim_{N \to \infty} \overline{f_N}.</math>
Hence macroscopic observables become effectively deterministic, and their fluctuations from sample to sample vanish in relative terms:

<math>\frac{\overline{f_N^2} - \overline{f_N}^{\,2}}{\overline{f_N}^{\,2}} \xrightarrow[N \to \infty]{} 0.</math>
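One can watch self-averaging emerge in a minimal numerical sketch of the REM defined above (not part of the lesson): the sample-to-sample relative fluctuations of the free energy per spin shrink as <math>N</math> grows. The sizes, the sample count, the temperature, and the variance convention <math>\mathrm{Var}(E) = N/2</math> are all assumptions of the sketch.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
T = 2.0   # temperature, chosen well inside the high-T phase (illustrative)

def rem_free_energy_density(N, rng):
    """Free energy per spin of one REM sample, assuming Var(E) = N/2."""
    E = rng.normal(scale=np.sqrt(N / 2.0), size=2**N)
    a = (-E / T).max()                      # log-sum-exp max trick
    return -T * (a + np.log(np.exp(-E / T - a).sum())) / N

for N in (8, 12, 16):
    f = np.array([rem_free_energy_density(N, rng) for _ in range(200)])
    print(f"N={N:2d}  mean f={f.mean():+.4f}  relative std={f.std() / abs(f.mean()):.4f}")
</syntaxhighlight>

The relative standard deviation decreases with <math>N</math>, as expected for a self-averaging quantity.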