T-I

From Disordered Systems Wiki

Revision as of 17:07, 24 November 2023

The REM: the energy landscape

To characterize the energy landscape of the REM, we determine the number <math> \mathcal{N}(E)dE </math> of configurations having energy <math> E_\alpha \in [E, E+dE] </math>. This quantity is a random variable. For large <math> N </math>, its <em>typical value</em> is given by

<center><math>
\mathcal{N}(E)= e^{N \Sigma\left( \frac{E}{N} \right)+ o(N)}, \quad\quad
\Sigma(\epsilon)=
\begin{cases}
\log 2 - \epsilon^2 & \text{ if } |\epsilon| \leq \sqrt{\log 2}\\
0 & \text{ if } |\epsilon| > \sqrt{\log 2}
\end{cases}
</math></center>

The function <math> \Sigma(\epsilon) </math> is the entropy of the model, and it is sketched in Fig. X. The point where the entropy vanishes, <math> \epsilon^*=-\sqrt{\log 2} </math>, is the energy density of the ground state, consistently with what we obtained with extreme value statistics. The entropy is maximal at <math> \epsilon=0 </math>: the highest number of configurations has vanishing energy density.
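The quenched entropy can be checked numerically. The sketch below (an illustration, not part of the exercise) draws one realization of the REM, assuming the Gaussian convention <math> p(E)\propto e^{-E^2/N} </math> that yields <math> \Sigma(\epsilon)=\log 2-\epsilon^2 </math>, and compares the empirical log-number of configurations per energy-density bin with the entropy:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                                            # number of spins; 2^N configurations
# One realization of the REM: i.i.d. Gaussian energies with variance N/2,
# i.e. p(E) = e^{-E^2/N} / sqrt(pi N)  (assumed convention)
E = rng.normal(0.0, np.sqrt(N / 2), size=2**N)

eps = E / N                                       # energy densities
counts, edges = np.histogram(eps, bins=41)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0
sigma_emp = np.log(counts[mask]) / N              # empirical entropy, up to o(1) corrections
sigma_th = np.log(2) - centers[mask] ** 2         # Sigma(eps) in the self-averaging region

i0 = np.argmin(np.abs(centers[mask]))             # bin closest to eps = 0
print(sigma_emp[i0], sigma_th[i0])                # agree within finite-size corrections
```

At <math> \epsilon=0 </math> the empirical value approaches <math> \log 2 </math> slowly, with corrections of order <math> \log N / N </math>; the agreement sharpens as <math> N </math> grows.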


* <em>The annealed entropy.</em> We begin by computing the average <math> \overline{\mathcal{N}(E)} </math>. We set <math> \overline{\mathcal{N}(E)}= e^{N \Sigma^A\left( \frac{E}{N} \right)+ o(N)} </math>, where <math> \Sigma^A </math> is the annealed entropy. Write <math> \mathcal{N}(E)dE= \sum_{\alpha=1}^{2^N} \chi_\alpha(E) dE </math> with <math> \chi_\alpha(E)=1</math> if <math> E_\alpha \in [E, E+dE]</math> and <math> \chi_\alpha(E)=0</math> otherwise. Use this together with <math> p(E)</math> to obtain <math> \Sigma^A </math>: when does this coincide with the entropy?
* For <math> |\epsilon| \leq \sqrt{\log 2} </math> the quantity <math> \mathcal{N}(E) </math> is self-averaging. This means that its distribution concentrates around the average value <math> \overline{\mathcal{N}(E)} </math> when <math> N \to \infty </math>. Show that this is the case by computing the second moment <math> \overline{\mathcal{N}^2} </math> and using the central limit theorem. Show that this is no longer true in the region where the annealed entropy is negative.
* For <math> |\epsilon| > \sqrt{\log 2} </math> the annealed entropy is negative. This means that configurations with those energies are exponentially rare: the probability to find one is exponentially small in <math> N </math>. Do you have an idea of how to show this, using the expression for <math> \overline{\mathcal{N}(E)} </math>? Why is the entropy zero in this region? Why does the point where the entropy vanishes coincide with the ground-state energy of the model?


This will be responsible for the fact that the partition function <math> Z </math> is not self-averaging in the low-<math>T</math> phase, as we discuss below.

The free energy and the freezing transition

Let us compute the free energy <math> f </math> of the REM. The partition function reads

<center><math>
e^{-\beta N f + o(N)}= Z = \sum_{\alpha=1}^{2^N} e^{-\beta E_\alpha}= \int dE \, \mathcal{N}(E)\, e^{-\beta E}
</math></center>

We have shown above the behaviour of the typical value of <math> \mathcal{N} </math> for large <math> N </math>. The typical value of the partition function is

<center><math>
Z= \int_{-N\sqrt{\log 2}}^{N\sqrt{\log 2}} dE \, \mathcal{N}(E)\, e^{-\beta E}= \int_{-\sqrt{\log 2}}^{\sqrt{\log 2}} d\epsilon \, e^{N\left[\Sigma(\epsilon)-\beta \epsilon\right]+ o(N)}
</math></center>

In the limit of large N, this integral can be computed with the saddle point method, and one gets

<center><math>
Z= e^{N\left[\Sigma(\epsilon^*)-\beta \epsilon^*\right]+ o(N)}, \quad\quad \epsilon^*= \text{argmax}_{|\epsilon| \leq \sqrt{\log 2}}\left( \Sigma(\epsilon)-\beta \epsilon\right)
</math></center>

Using the expression of the entropy, we see that the function is stationary at <math> \epsilon^*=-\frac{1}{2T} </math>, which belongs to the domain of integration whenever <math> T \geq T_c= \frac{1}{2 \sqrt{\log 2}} </math>. This temperature identifies a transition point: for all values of <math> T<T_c </math>, the stationary point is outside the domain and thus <math> \epsilon^* </math> has to be chosen at the boundary of the domain, <math> \epsilon^*=-\sqrt{\log 2} </math>.
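The saddle-point selection can be verified directly: maximizing <math> \Sigma(\epsilon)-\beta\epsilon = \log 2 - \epsilon^2 - \epsilon/T </math> on a grid over <math> |\epsilon|\leq\sqrt{\log 2} </math> reproduces the stationary point <math> -1/(2T) </math> clipped at the boundary <math> -\sqrt{\log 2} </math> below <math> T_c </math>. A minimal check:

```python
import numpy as np

log2 = np.log(2)
eps = np.linspace(-np.sqrt(log2), np.sqrt(log2), 200001)  # integration domain

def eps_star(T):
    # numerically maximize Sigma(eps) - eps/T = log 2 - eps^2 - eps/T over the domain
    return eps[np.argmax(log2 - eps**2 - eps / T)]

Tc = 1 / (2 * np.sqrt(log2))
for T in (2.0, 1.0, 0.5 * Tc):
    # above Tc: eps* = -1/(2T); below Tc: eps* sticks at -sqrt(log 2)
    print(T, eps_star(T), max(-1 / (2 * T), -np.sqrt(log2)))
```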

The free energy becomes

<center><math>
f= -\lim_{N \to \infty} \frac{\log Z}{\beta N}=
\begin{cases}
-\left(T \log 2 + \frac{1}{4T}\right) & \text{if } T \geq T_c\\
-\sqrt{\log 2} & \text{if } T < T_c
\end{cases}
\quad\quad T_c= \frac{1}{2 \sqrt{\log 2}}
</math></center>
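The two branches derived above can be evaluated numerically as a sanity check: the free energy is continuous at <math> T_c </math>, where both branches equal <math> -\sqrt{\log 2} </math>, and below <math> T_c </math> it freezes at the ground-state energy density. A minimal sketch (the function name <code>f_rem</code> is ours):

```python
import numpy as np

log2 = np.log(2)
Tc = 1 / (2 * np.sqrt(log2))           # freezing temperature

def f_rem(T):
    """Free-energy density of the REM, piecewise as derived above."""
    if T >= Tc:
        return -(T * log2 + 1 / (4 * T))   # high-temperature branch
    return -np.sqrt(log2)                  # frozen branch: f = ground-state energy density

# continuity at Tc; constant (temperature-independent) f below Tc
print(f_rem(Tc), f_rem(0.5 * Tc), f_rem(2 * Tc))
```

Note that for <math> T<T_c </math> the entropy <math> -\partial f/\partial T </math> vanishes: this is the freezing transition, where the measure condensates on the lowest-energy configurations.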