Goal: understanding the energy landscape of the simplest spin-glass model, the Random Energy Model (REM).
Techniques: probability theory, saddle point approximation.
A dictionary for large-N disordered systems
- We will discuss disordered systems with <math> N </math> degrees of freedom (for instance, for a spin system on a lattice of size <math> L </math> in dimension <math> d </math>, <math> N = L^d </math>). Since the systems are random, the quantities that describe their properties (the free energy, the number of configurations of the system that satisfy a certain property, the magnetization etc.) are also random variables, with a distribution. In this discussion we denote these random variables generically with <math> X_N </math> (where the subscript denotes the number of degrees of freedom) and with <math> P_N(X) </math> their distribution. The goal of statistical physics is to characterize the behavior of these quantities in the limit <math> N \to \infty </math>.
- Self-averagingness. The physics of disordered systems is described by quantities that are distributed when <math> N </math> is finite (they take different values from sample to sample of the system), but for which sample-to-sample fluctuations are suppressed when <math> N \to \infty </math>. These quantities are said to be self-averaging.
A random variable <math> Y_N </math> is self-averaging when, in the limit <math> N \to \infty </math>, its distribution concentrates around the average, collapsing to a deterministic value:
<center><math> \lim_{N \to \infty} P_N(Y) = \delta \left( Y - \overline{Y_\infty} \right), \qquad \overline{Y_\infty} = \lim_{N \to \infty} \overline{Y_N}. </math></center>
This happens when its fluctuations are small compared to the average, meaning that <sup>[[#Notes|[*] ]]</sup>
<center><math> \lim_{N \to \infty} \frac{\overline{Y_N^2} - \overline{Y_N}^{\,2}}{\overline{Y_N}^{\,2}} = 0. </math></center>
When the random variable is not self-averaging, it remains distributed in the limit <math> N \to \infty </math>; when it is self-averaging, sample-to-sample fluctuations are suppressed at large <math> N </math> (a concrete example is worked out at the end of this dictionary).
Example 1. Consider the partition function of a disordered system at inverse temperature <math> \beta </math>, <math> Z_N = \sum_\alpha e^{-\beta E_\alpha} </math>, where the sum runs over all configurations <math> \alpha </math>. When <math> N </math> is large this random variable has an exponential scaling, <math> Z_N \sim e^{-N \beta f_N} </math>, where the variable <math> f_N </math> is the free energy density. This scaling means that the random variable <math> f_N = -\frac{\log Z_N}{\beta N} </math> has a well-defined distribution that remains of <math> O(1) </math> when <math> N \to \infty </math>. In all the disordered-system models we will consider in these lectures, the free energy not only has a well-defined distribution in the limit, but it is also self-averaging. This is a very important property: it implies that the free energy (and therefore all the thermodynamic observables, which can be obtained by taking derivatives of the free energy) does not fluctuate from sample to sample when <math> N </math> is large, and so the physics of the system does not depend on the particular sample. While intensive quantities like <math> f_N </math> are self-averaging, quantities scaling exponentially with <math> N </math>, like the partition function, are not necessarily so: in particular, we will see that they are not when the system is in a glassy phase.
Example 2. The partition function is an example of an exponentially-scaling variable <math> X_N = e^{N Y_N} </math>, where the rescaled variable <math> Y_N = \frac{\log X_N}{N} </math> is self-averaging while <math> X_N </math> may not be. Another example is given in Problem 1 below, where <math> X_N = \mathcal{N}(E) </math> is the number of configurations at a given energy and <math> Y_N = s_N(e) </math> is the entropy density.
- Typical and rare. The typical value of a random variable is the value at which its distribution peaks (it is the most probable value). Values in the tails of the distribution, where the probability density is small (for instance, vanishing when <math> N \to \infty </math>), are said to be rare. For self-averaging quantities, in the limit <math> N \to \infty </math> the distribution collapses to a single value, which is both the average and the typical value. In general, the average and the typical value of a random variable may not coincide: this happens when the average is dominated by values that are rare, associated to a small probability of occurrence and thus to the tails of the distribution. Let's see this with an example.
- Example: typical vs average. Often, quantities like <math> Y_N </math> have a distribution that for large <math> N </math> takes the form
<center><math> P_N(Y) \sim e^{-N g(Y)}, </math></center>
where <math> g(Y) </math> is some positive function and <math> \min_Y g(Y) = 0 </math>. This is called a large deviation form for the probability distribution, with speed <math> N </math>. This distribution is of <math> O(1) </math> for the value <math> Y^{\rm typ} </math> such that <math> g(Y^{\rm typ}) = 0 </math>: this value is the typical value of <math> Y_N </math> (asymptotically at large <math> N </math>); all the other values of <math> Y </math> are associated to a probability that is exponentially small in <math> N </math>: they are exponentially rare.
Consider now an exponentially scaling quantity like <math> X_N = e^{N Y_N} </math>, and let us assume that <math> P_N(Y) </math> has the large deviation form above. The asymptotic typical values <math> X_N^{\rm typ} </math> and <math> Y^{\rm typ} </math> are related by:
<center><math> X_N^{\rm typ} = e^{N Y^{\rm typ}}, </math></center>
so the scaling of <math> X_N^{\rm typ} </math> is exponential in <math> N </math>. Let us now look at the scaling of the average. The average of <math> X_N </math> can be computed with the saddle point approximation for large <math> N </math>:
<center><math> \overline{X_N} = \int dY \, e^{N Y} P_N(Y) \sim \int dY \, e^{N \left[ Y - g(Y) \right]} \sim e^{N \left[ Y^* - g(Y^*) \right]}, </math></center>
where <math> Y^* </math> is the point maximising the shifted function <math> Y - g(Y) </math>. In this example, <math> Y^* - g(Y^*) > Y^{\rm typ} </math> whenever <math> Y^* \neq Y^{\rm typ} </math>: the asymptotics of the average value of <math> X_N </math> is different from the asymptotics of the typical value. In particular, the average is dominated by rare events, i.e. realisations in which <math> Y_N </math> takes the value <math> Y^* \neq Y^{\rm typ} </math>, whose probability of occurrence is exponentially small.
- Quenched averages. Let us go back to <math> X_N = e^{N Y_N} </math>: how to get <math> Y^{\rm typ} </math> from it? When <math> Y_N </math> is self-averaging,
<center><math> Y^{\rm typ} = \lim_{N \to \infty} \overline{Y_N} = \lim_{N \to \infty} \frac{\overline{\log X_N}}{N}, </math></center>
where in the last equality we have used that <math> Y_N = \frac{\log X_N}{N} </math>.
In the language of disordered systems, computing the typical value of <math> X_N </math> through the average of its logarithm corresponds to performing a quenched average: from this average, one extracts the correct asymptotic value of the self-averaging quantity <math> Y_N </math>.
- Annealed averages. The quenched average does not necessarily coincide with the annealed average, defined as:
<center><math> Y^{\rm ann} = \lim_{N \to \infty} \frac{\log \overline{X_N}}{N}. </math></center>
In fact, it always holds <math> \overline{\log X_N} \leq \log \overline{X_N} </math> because of the concavity of the logarithm. When the inequality is strict and the quenched and annealed averages are not the same, it means that <math> X_N </math> is not self-averaging, and its average value is exponentially larger than the typical value (because the average is dominated by rare events). In this case, to get the correct limit of the self-averaging quantity <math> Y_N </math> one has to perform the quenched average.<sup>[[#Notes|[**] ]]</sup> This is what happens in glassy phases.
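As a concrete illustration of these definitions (a standard example, not part of the original notes), take <math> Y_N = \frac{1}{N} \sum_{i=1}^N z_i </math> with <math> z_i </math> independent standard Gaussian variables, so that <math> g(Y) = \frac{Y^2}{2} </math> and <math> Y^{\rm typ} = 0 </math>. For <math> X_N = e^{N Y_N} </math> the saddle point sits at <math> Y^* = 1 </math>, and
<center><math> X_N^{\rm typ} = e^{N Y^{\rm typ}} = 1, \qquad \overline{X_N} = e^{N \left[ Y^* - g(Y^*) \right]} = e^{N/2}, \qquad \frac{\overline{\log X_N}}{N} = 0 < \frac{1}{2} = \frac{\log \overline{X_N}}{N}: </math></center>
the quenched and annealed averages differ, and the average of <math> X_N </math> is dominated by the exponentially rare samples with <math> Y_N \approx 1 </math>.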
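This can also be checked numerically; below is a minimal sketch (hypothetical code, not from the notes, assuming only numpy) that samples <math> Y_N </math> and compares the quenched and annealed estimates.
<syntaxhighlight lang="python">
import numpy as np

# Monte Carlo check of the Gaussian example above: Y_N is self-averaging,
# while X_N = exp(N Y_N) has an average much larger than its typical value.
rng = np.random.default_rng(0)
n_samples = 100_000

for N in (10, 100, 1000):
    # Y_N is the mean of N standard Gaussians, i.e. exactly N(0, 1/N)
    Y = rng.normal(0.0, 1.0 / np.sqrt(N), size=n_samples)
    X = np.exp(N * Y)
    quenched = np.mean(np.log(X)) / N  # -> Y_typ = 0
    annealed = np.log(np.mean(X)) / N  # theory: 1/2
    print(f"N = {N:4d}   var(Y_N) = {Y.var():.5f} (~1/N)   "
          f"median X_N = {np.median(X):.2f}   "
          f"quenched = {quenched:+.4f}   annealed = {annealed:.4f}")

# For large N the empirical annealed value falls well below the theoretical
# 1/2: the rare samples with Y_N ~ 1 that dominate the true average have
# probability ~ exp(-N/2) and are essentially never generated in a finite
# simulation -- a direct illustration of an average dominated by rare events.
</syntaxhighlight>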
Notes
- [*] See here for a note on the equivalence of these two criteria.
- [**] Notice that the opposite is not true: one can have situations in which the partition function is not self-averaging, but still the quenched free energy coincides with the annealed one.
Problems
This problem and the one of next week deal with the Random Energy Model (REM). The REM was introduced in [1]. In the REM the system can take <math> 2^N </math> configurations <math> \alpha </math>, with <math> \alpha = 1, \ldots, 2^N </math>. To each configuration is assigned a random energy <math> E_\alpha </math>. The random energies are independent, taken from the Gaussian distribution
<center><math> P(E) = \frac{e^{-\frac{E^2}{N}}}{\sqrt{\pi N}}. </math></center>
Problem 1: the energy landscape of the REM
In this problem we study the random variable <math> \mathcal{N}(E) </math>, that is, the number of configurations having energy <math> E_\alpha \in [E, E + dE] </math>. For large <math> N </math> this variable scales exponentially, <math> \mathcal{N}(E) \sim e^{N s_N(e)} </math>. Let <math> e = E/N </math> be the energy density. Through this exercise we show that the asymptotic value of the entropy <math> s(e) = \lim_{N \to \infty} s_N(e) </math> is given by:
<center><math> s(e) = \begin{cases} \log 2 - e^2 & \text{if } |e| \leq \sqrt{\log 2} \\ -\infty & \text{if } |e| > \sqrt{\log 2} \end{cases} </math></center>
The point where the entropy vanishes, <math> e_0 = -\sqrt{\log 2} </math>, is the energy density of the ground state. The entropy is maximal at <math> e = 0 </math>: the largest number of configurations have vanishing energy density.
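Before the analytical computation, here is a minimal numerical sketch of this entropy curve (hypothetical code, not part of the original problem; it assumes the Gaussian convention for <math> P(E) </math> written above, i.e. standard deviation <math> \sqrt{N/2} </math>).
<syntaxhighlight lang="python">
import numpy as np

# One sample of the REM: 2^N i.i.d. Gaussian energies of variance N/2;
# compare the empirical entropy density with log(2) - e^2.
rng = np.random.default_rng(1)
N = 20                                            # 2^20 ~ 10^6 configurations
E = rng.normal(0.0, np.sqrt(N / 2.0), size=2**N)  # REM energies
e = E / N                                         # energy densities

counts, edges = np.histogram(e, bins=30)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, n in zip(centers, counts):
    if n > 0:
        # log(count)/N approximates s_N(e); finite-N offsets of order
        # log(N)/N (bin width, Gaussian prefactor) are visible at N = 20
        print(f"e = {c:+.3f}   s_N(e) = {np.log(n) / N:+.4f}   "
              f"log 2 - e^2 = {np.log(2) - c**2:+.4f}")

print(f"ground state: min e = {e.min():+.4f}"
      f"   vs  -sqrt(log 2) = {-np.sqrt(np.log(2)):+.4f}")
</syntaxhighlight>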
- Averages: the annealed entropy. We begin by computing the annealed entropy <math> s^{\rm ann}(e) </math>, which is defined by the average: <math> e^{N s^{\rm ann}(e)} = \overline{\mathcal{N}(E)} </math>. Compute this function using the representation <math> \mathcal{N}(E) = \sum_{\alpha=1}^{2^N} \chi_\alpha(E) </math> [with <math> \chi_\alpha(E) = 1 </math> if <math> E_\alpha \in [E, E + dE] </math> and <math> \chi_\alpha(E) = 0 </math> otherwise].
- Self-averaging. For <math> |e| < \sqrt{\log 2} </math> the quantity <math> s_N(e) </math> is self-averaging: its distribution concentrates around the average value when <math> N \to \infty </math>. Show this by computing the second moment <math> \overline{\mathcal{N}^2(E)} </math>. Deduce that <math> \frac{\overline{\mathcal{N}^2} - \overline{\mathcal{N}}^{\,2}}{\overline{\mathcal{N}}^{\,2}} \to 0 </math> when <math> N \to \infty </math>. This property of being self-averaging is no longer true in the region where the annealed entropy is negative: why does one expect fluctuations to be relevant in this region?
- Rare events. For <math> |e| > \sqrt{\log 2} </math> the annealed entropy is negative: the average number of configurations with those energy densities is exponentially small in <math> N </math>. This implies that the probability to find configurations with those energies is exponentially small in <math> N </math>: these configurations are rare. Do you have an idea of how to show this, using the expression for <math> \overline{\mathcal{N}(E)} </math>? What is the typical value of <math> \mathcal{N}(E) </math> in this region? Putting everything together, derive the form of the typical value of the entropy density given above (a compact sketch of the logic follows this list). Why does the point where the entropy vanishes coincide with the ground-state energy of the model?
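For reference, a compact sketch of the logic behind these three steps, under the conventions fixed above (our summary of the standard argument; check the factors yourself):
- By linearity of the average, <math> \overline{\mathcal{N}(E)} = 2^N \, \overline{\chi_\alpha(E)} = 2^N P(E) \, dE \sim e^{N (\log 2 - e^2)} </math>, so <math> s^{\rm ann}(e) = \log 2 - e^2 </math>.
- Using the independence of the energies and <math> \chi_\alpha^2 = \chi_\alpha </math>, one finds <math> \overline{\mathcal{N}^2} = \overline{\mathcal{N}} + (1 - 2^{-N}) \, \overline{\mathcal{N}}^{\,2} </math>, so that <math> \frac{\overline{\mathcal{N}^2} - \overline{\mathcal{N}}^{\,2}}{\overline{\mathcal{N}}^{\,2}} = \overline{\mathcal{N}}^{\,-1} - 2^{-N} \to 0 </math> whenever <math> s^{\rm ann}(e) > 0 </math>.
- Since <math> \mathcal{N}(E) </math> is a non-negative integer, Markov's inequality gives <math> P(\mathcal{N}(E) \geq 1) \leq \overline{\mathcal{N}(E)} = e^{N s^{\rm ann}(e)} </math>, which vanishes when <math> s^{\rm ann}(e) < 0 </math>: in that region typically no configurations exist, i.e. <math> s(e) = -\infty </math>, and the lowest energy density at which the typical entropy is non-negative, <math> e_0 = -\sqrt{\log 2} </math>, is the ground-state energy density.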
Check out: key concepts
Self-averaging, average value vs typical value, large deviations, rare events, saddle point approximation.
To know more
- B. Derrida, Random-energy model: limit of a family of disordered models, Phys. Rev. Lett. 45, 79 (1980) [1]
- A note on terminology:
The terms "quenched" and "annealed" come from metallurgy and refer to the procedure in which you cool a very hot piece of metal: a system is quenched if it is cooled very rapidly (instantaneously changing its environment by putting it into cold water, for instance) and has to adjust to this new fixed environment; annealed if it is cooled slowly, kept in (quasi-)equilibrium with its changing environment at all times. Think now of how you compute the free energy, and of the disorder as the environment. In the quenched protocol, you compute the average over configurations of the system keeping the disorder (environment) fixed, so the configurations have to adjust to the given disorder. Then you take the log and only afterwards average over the randomness (not even needed, at large <math> N </math>, if the free energy is self-averaging). In the annealed protocol, instead, the disorder (environment) and the configurations are treated on the same footing and adjust to each other: you average over both simultaneously.