T-I

From Disordered Systems Wiki

Revision as of 17:07, 24 November 2023

=== The REM: the energy landscape ===

To characterize the energy landscape of the REM, we determine the number <math> \mathcal{N}(E)dE </math> of configurations having energy <math> E_\alpha \in [E, E+dE] </math>. This quantity is a random variable. For large <math> N </math>, its <em> typical value </em> is given by

<center><math>
\mathcal{N}(E) \sim e^{N \Sigma\left( \frac{E}{N} \right)}, \qquad \Sigma(\epsilon)= \begin{cases} \log 2 - \epsilon^2 & \text{if } |\epsilon| \leq \sqrt{\log 2}\\ -\infty & \text{otherwise.} \end{cases}
</math></center>

The function <math> \Sigma(\epsilon) </math> is the entropy of the model, and it is sketched in Fig. X. The point where the entropy vanishes, <math> \epsilon_0= -\sqrt{\log 2} </math>, is the energy density of the ground state, consistently with what we obtained with extreme value statistics. The entropy is maximal at <math> \epsilon=0 </math>: the highest number of configurations have vanishing energy density.

* <em> The annealed entropy.</em> We begin by computing the average <math> \overline{\mathcal{N}(E)} </math>. We set <math> \overline{\mathcal{N}(E)}= e^{N \Sigma^A\left( \frac{E}{N} \right)+ o(N)} </math>, where <math> \Sigma^A </math> is the annealed entropy. Write <math> \mathcal{N}(E)dE= \sum_{\alpha=1}^{2^N} \chi_\alpha(E) dE </math> with <math> \chi_\alpha(E)=1</math> if <math> E_\alpha \in [E, E+dE]</math> and <math> \chi_\alpha(E)=0</math> otherwise. Use this together with <math> p(E)</math> to obtain <math> \Sigma^A </math>: when does this coincide with the entropy?

* For <math> |\epsilon| \leq \sqrt{\log 2} </math> the quantity <math> \mathcal{N}(E) </math> is self-averaging. This means that its distribution concentrates around the average value <math> \overline{\mathcal{N}(E)} </math> when <math> N \to \infty </math>. Show that this is the case by computing the second moment <math> \overline{\mathcal{N}^2} </math> and using the central limit theorem. Show that this is no longer true in the region where the annealed entropy is negative.

* For <math> |\epsilon| > \sqrt{\log 2} </math> the annealed entropy is negative. This means that configurations with those energies are exponentially rare: the probability to find one is exponentially small in <math> N </math>. Do you have an idea of how to show this, using the expression for <math> \overline{\mathcal{N}(E)} </math>? Why is the entropy equal to zero in this region? Why does the point where the entropy vanishes coincide with the ground-state energy of the model?
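The counting argument above can be checked numerically on a small instance. The sketch below is only an illustration, not part of the exercise: the size <math> N=20 </math>, the number of bins, and the convention <math> \overline{E^2}=N/2 </math> for the variance of the Gaussian energies are choices made here. It samples the <math> 2^N </math> energies of one REM realization and compares the empirical <math> \frac{1}{N}\log \mathcal{N}(E) </math> with the annealed prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                                          # system size: 2^N configurations
E = rng.normal(0.0, np.sqrt(N / 2), size=2**N)  # i.i.d. REM energies, variance N/2

# histogram of energy densities eps = E/N
counts, edges = np.histogram(E / N, bins=60)
centers = 0.5 * (edges[1:] + edges[:-1])
dE = N * (edges[1] - edges[0])                  # bin width in energy (not density) units

mask = counts > 0
S_emp = np.log(counts[mask] / dE) / N           # empirical (1/N) log N(E)
# annealed entropy, including the subleading Gaussian prefactor at finite N
S_th = np.log(2) - centers[mask] ** 2 - np.log(np.pi * N) / (2 * N)

# near eps = 0 the two curves agree; the histogram runs out of configurations
# as |eps| approaches sqrt(log 2), where the entropy vanishes
print(np.max(np.abs(S_emp - S_th)[np.abs(centers[mask]) < 0.3]))
```

Near <math> \epsilon=0 </math> the agreement is good already at <math> N=20 </math>, while no sampled configuration reaches beyond <math> |\epsilon| \approx \sqrt{\log 2} </math>, consistently with the rare-event discussion above.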


These rare fluctuations will be responsible for the fact that the partition function is not self-averaging in the low-T phase, as we discuss below.
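The two regimes can be seen at work in a small simulation: generate many independent realizations of the REM and count configurations in two energy-density windows, one inside and one outside <math> |\epsilon| \leq \sqrt{\log 2} </math>. This is only a sketch; the sizes, windows, and sample numbers below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_samples = 12, 1000

# n_samples independent REM realizations, each with 2^N Gaussian energies
E = rng.normal(0.0, np.sqrt(N / 2), size=(n_samples, 2**N))
eps = E / N

def counts_in(lo, hi):
    """Per-realization number of configurations with energy density in [lo, hi)."""
    return np.sum((eps >= lo) & (eps < hi), axis=1)

# positive annealed entropy: N(E) concentrates around its mean (self-averaging)
n_typ = counts_in(-0.1, 0.1)
print("relative fluctuation:", n_typ.std() / n_typ.mean())

# negative annealed entropy (|eps| > sqrt(log 2) ~ 0.83): the average is
# dominated by rare samples; most realizations contain no such configuration
n_rare = counts_in(0.9, 1.1)
print("fraction of samples with at least one:", np.mean(n_rare > 0))
```

In the first window the relative fluctuations are small and shrink with <math> N </math>; in the second, <math> \mathcal{N} </math> is zero for most samples, so its average is controlled by rare realizations rather than by the typical one.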

=== The free energy and the freezing transition ===

Let us compute the free energy of the REM. The partition function reads

<center><math>
Z = \sum_{\alpha=1}^{2^N} e^{-\beta E_\alpha}.
</math></center>

We have shown above the behaviour of the typical value of <math> \mathcal{N}(E) </math> for large <math> N </math>. The typical value of the partition function is

<center><math>
Z_{\text{typ}} = \int dE \, \mathcal{N}(E) \, e^{-\beta E} = \int_{-\sqrt{\log 2}}^{\sqrt{\log 2}} d\epsilon \, e^{N \left[ \Sigma(\epsilon) - \beta \epsilon \right] + o(N)}.
</math></center>

In the limit of large <math> N </math>, this integral can be computed with the saddle point method, and one gets

<center><math>
f = -\lim_{N \to \infty} \frac{1}{\beta N} \log Z_{\text{typ}} = -\frac{1}{\beta} \max_{|\epsilon| \leq \sqrt{\log 2}} \left[ \Sigma(\epsilon) - \beta \epsilon \right].
</math></center>

Using the expression of the entropy, we see that the function <math> \Sigma(\epsilon) - \beta \epsilon </math> is stationary at <math> \epsilon^* = -\beta/2 </math>, which belongs to the domain of integration whenever <math> \beta \leq \beta_c = 2\sqrt{\log 2} </math>. The corresponding temperature <math> T_c = 1/\beta_c </math> identifies a transition point: for all values of <math> \beta > \beta_c </math>, the stationary point is outside the domain, and thus the maximum has to be chosen at the boundary of the domain, <math> \epsilon^* = -\sqrt{\log 2} </math>.

The free energy becomes

<center><math>
f(\beta) = \begin{cases}
-\left( \frac{\log 2}{\beta} + \frac{\beta}{4} \right) & \text{if } \beta \leq \beta_c\\
-\sqrt{\log 2} & \text{if } \beta > \beta_c,
\end{cases}
\qquad \beta_c = 2\sqrt{\log 2}.
</math></center>

In the low-T phase the free energy is frozen at the ground-state energy density <math> \epsilon_0 = -\sqrt{\log 2} </math>: this is the freezing transition.
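The saddle-point computation can be verified numerically. The sketch below (grid resolution and test values of <math> \beta </math> are arbitrary choices) maximizes <math> \Sigma(\epsilon) - \beta \epsilon </math> on a grid, with <math> \Sigma(\epsilon) = \log 2 - \epsilon^2 </math>, and compares the result with the two closed-form branches.

```python
import numpy as np

log2 = np.log(2.0)
beta_c = 2 * np.sqrt(log2)          # inverse freezing temperature

def f_saddle(beta):
    """Free energy density: maximize Sigma(eps) - beta*eps over the domain
    where the entropy is non-negative, Sigma(eps) = log 2 - eps^2."""
    eps = np.linspace(-np.sqrt(log2), np.sqrt(log2), 200001)
    return -np.max(log2 - eps**2 - beta * eps) / beta

def f_closed(beta):
    """Closed form: high-T branch for beta < beta_c, frozen branch beyond."""
    return -(log2 / beta + beta / 4) if beta < beta_c else -np.sqrt(log2)

for beta in (0.5, 1.0, beta_c, 3.0):
    print(f"beta={beta:.3f}  f_saddle={f_saddle(beta):+.6f}  f_closed={f_closed(beta):+.6f}")
```

The two expressions agree on both sides of <math> \beta_c </math>, and the two branches match continuously at the transition, where the interior stationary point <math> \epsilon^* = -\beta/2 </math> reaches the boundary of the domain.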