T-6

<strong>Goal: </strong> Complete the characterisation of the energy landscape of the spherical <math>p</math>-spin, and introduce the trap model, which captures aging in a simplified single-particle description.
<br>
<strong>Techniques: </strong> saddle point, random matrix theory.
<br>
<br>


<strong>Key concepts: </strong> gradient descent, rugged landscapes, metastable states, Hessian matrices, random matrix theory, landscape complexity, aging.
=== Langevin dynamics, activation ===
- Monte Carlo dynamics, Langevin dynamics.
- Arrhenius law, trapping and activation.
- Aging.

<em>The energy landscape of the REM.</em> Consider the REM discussed in Problems 1. Assume that the <math> 2^N </math> configurations are organised on a hypercube of connectivity <math> N </math>: each configuration has <math> N </math> neighbours, obtained by flipping one spin of the original configuration.
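
The Monte Carlo dynamics listed above can be made concrete with a minimal sketch (in Python, assuming numpy is available; the system size, temperature and number of steps are illustrative choices, and the Gaussian energies of variance <math>N/2</math> follow one common REM convention): Metropolis single-spin-flip dynamics on the REM hypercube. At low temperature the walk spends long, Arrhenius-like stretches of time stuck in deep configurations.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

N = 14                      # number of spins (2^N configurations), illustrative
T = 0.5                     # temperature, illustrative
steps = 200_000

# REM: independent Gaussian energies for the 2^N corners of the hypercube
E = rng.normal(0.0, np.sqrt(N / 2.0), size=2**N)

# Metropolis single-spin-flip dynamics: a configuration is an N-bit integer,
# and its neighbours are obtained by flipping one bit (one spin).
conf = rng.integers(0, 2**N)
energies_visited = np.empty(steps)
for t in range(steps):
    flip = 1 << rng.integers(0, N)      # pick a spin to flip
    prop = conf ^ flip                  # neighbouring configuration
    dE = E[prop] - E[conf]
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        conf = prop
    energies_visited[t] = E[conf]

# At low T the walk gets stuck in deep configurations and escapes
# only on Arrhenius-like time scales of order exp(|E|/T).
print("mean energy visited:   ", energies_visited.mean())
print("deepest energy visited:", energies_visited.min())
</syntaxhighlight>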


== Problems ==


=== Problem 6.1: a simple model for aging ===

[[File:Trap.png|thumb|right|x160px|Traps in the trap model.]]
The trap model is an abstract model for the dynamics in complex landscapes, introduced in <sup>[[#References|[1] ]]</sup>. The configuration space is a collection of <math> M \gg 1 </math> traps labeled by <math> \alpha </math>, having random depths/energies (see sketch). The dynamics is a sequence of jumps between the traps: the system spends in a trap <math> \alpha </math> an exponentially distributed time with average <math> \tau_\alpha</math> (the probability to jump out of the trap in the time interval <math> [t, t+dt]</math> is <math> dt/\tau_\alpha </math>). When the system exits a trap, it jumps into another one chosen uniformly at random among the <math> M</math>. The average times are distributed as
<center><math> P_m(\tau)= \frac{m \tau_0^m}{\tau^{1+m}}, \quad \quad \tau \geq \tau_0, </math></center>
where <math> m </math> is a parameter. In this exercise, we aim at understanding the main features of this dynamics. A short simulation illustrating it is given after the questions below. <br>


<ol>
<li> <em> Aging.</em> Compute the average trapping time (averaged over the traps) and show that there is a critical value of <math> m </math> below which it diverges, signalling a non-ergodic phase (the system needs an infinite time to explore the whole configuration space). Consider a dynamics running from time <math>t_w</math> to some later time <math> t_w+ t</math>: compute the typical value of the maximal trapping time <math> \tau_{\text{max}}(t) </math> encountered in this time interval, assuming that the system has spent a time <math> \tau_\alpha </math> in each visited trap <math> \alpha </math>. Show that in the non-ergodic phase <math> \tau_{\text{max}}(t) \sim t </math>. Why can this be interpreted as a condensation phenomenon, like the ones discussed in Problems 1? Why is this an indication of aging?
</li> <br>

<li> <em>Correlation functions and weak ergodicity breaking.</em> Assume now that each trap represents a collection of microscopic configurations having self-overlap <math>q_{EA}</math>. Assume that the overlap between configurations of different traps is <math> q_0 </math>. Justify why the correlation function can be written as
<center>
<math>
C(t_w + t, t_w)= q_{EA} \, \Pi(t, t_w)+ q_0 \left(1-\Pi(t, t_w)\right), \quad \quad   \Pi(t, t_w)= \text{probability that the system has not jumped in }[t_w, t_w+t].
</math>
</center>
In the non-ergodic regime, one finds:
<center>
<math>
\Pi(t, t_w)= \frac{\sin (\pi m)}{\pi}\int_{\frac{t}{t+ t_w}}^1 du \, (1-u)^{m-1}u^{-m}.
</math>
</center>
Why is this, again, an indication of aging?
Study the asymptotic behaviour of the correlation function for <math> t \ll t_w </math> and <math> t \gg t_w </math>. Show that
<center>
<math>
\lim_{t \to \infty} C(t_w + t, t_w)=q_0 \quad \text{ for finite }t_w, \quad \quad \lim_{t_w \to \infty} C(t_w + t, t_w)=q_{EA} \quad \text{ for finite }t.
</math>
</center>
When <math> q_0=0</math>, this behaviour is called "weak ergodicity breaking". </li>
</ol>
<br>
<br>
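
A minimal numerical sketch of the trap dynamics (in Python, assuming numpy and scipy are available; the values of <math>m</math>, <math>\tau_0</math> and <math>t_w</math> are illustrative choices). It estimates <math>\Pi(t, t_w)</math> by simulating the sequence of traps and compares it with the formula quoted above, which holds in the aging regime <math>t_w \gg \tau_0</math>; the formula is rewritten with the regularized incomplete Beta function, using <math>B(1-m,m)=\pi/\sin(\pi m)</math>.

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import betainc

rng = np.random.default_rng(1)

m, tau0 = 0.5, 1.0          # m < 1: non-ergodic phase (illustrative values)
t_w = 1_000.0               # waiting time
n_runs = 20_000

def pareto_tau(size):
    """Mean trapping times with law P_m(tau) = m tau0^m / tau^(1+m)."""
    return tau0 * (1.0 - rng.random(size)) ** (-1.0 / m)

t_values = t_w * np.logspace(-2, 2, 9)      # measure Pi(t, t_w) at these t
survived = np.zeros_like(t_values)

for _ in range(n_runs):
    time = 0.0
    # renewal process: each newly visited trap has a fresh mean time tau,
    # and the actual sojourn time is exponential with that mean
    while True:
        tau = pareto_tau(1)[0]
        sojourn = rng.exponential(tau)
        if time + sojourn > t_w:            # this trap is occupied at t_w
            # no jump in [t_w, t_w + t] iff the sojourn extends past t_w + t
            survived += (time + sojourn > t_w + t_values)
            break
        time += sojourn

Pi_sim = survived / n_runs
# Formula quoted in the problem, rewritten as 1 - I_{t/(t+t_w)}(1-m, m)
Pi_th = 1.0 - betainc(1.0 - m, m, t_values / (t_values + t_w))

for t, ps, pt in zip(t_values, Pi_sim, Pi_th):
    print(f"t/t_w = {t/t_w:8.2f}   simulation {ps:.3f}   formula {pt:.3f}")
</syntaxhighlight>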


=== Problem 6.2: from traps to landscapes ===
In this exercise, we aim at understanding why the trap model is a good effective model for activated dynamics in glassy landscapes. In particular, we will discuss connections with the landscape structure of two models that we have studied so far: the REM and the spherical <math>p</math>-spin model. <br>


<ol>
<li> <em> REM: the golf-course landscape.</em> In the REM, the smallest energies <math> E_\alpha </math> among the <math> M=2^N </math> can be assumed to be distributed as
<center>
<math>
P_N^{\text{extr}}(E) = C_N \, \text{exp}\left[2 \sqrt{\log 2}\, (E+ N \sqrt{\log 2}) \right], \quad \quad E<0, \quad \quad C_N \text{ a normalization.}
</math>
</center>
Justify the form of this distribution (hint: recall the discussion on extreme value statistics in Lecture 1!). Consider now one of these deep configurations, with energy close to that of the ground state: what is the minimal energy among its <math> N </math> neighbouring configurations, which differ from it by a single spin flip? Does it depend on the energy of the original configuration? Why is this consistent with the results on the entropy of the REM that we computed in Problem 1.1?  </li> <br>

<li> <em> REM: trapping times.</em> The results above show that the energy landscape of the REM has a "golf course" structure: configurations with deep energy are isolated, surrounded by configurations of much higher energy (zero energy density). The Arrhenius law states that the time needed for the system to escape from a trap of energy <math> E </math> and reach a configuration of zero energy is <math> \tau \sim e^{\beta |E| } </math>. This is a trapping time. Given the energy distribution <math> P_N(E) </math>, determine the distribution of trapping times <math> P_m(\tau) </math>: what is <math> m</math>? What is the critical temperature? (A numerical sketch is given below, after the questions.) </li><br>

<li> <em> p-spin: the “trap” picture and the assumptions behind it.</em> In Problems 5, we have seen that the energy landscape of the spherical p-spin is characterized by the threshold energy, below which plenty of minima appear. The region below the threshold is explored by activated dynamics. Explain why the trap model corresponds to the following picture for the dynamics: the system is trapped in minima for exponentially large times, and then jumps from minimum to minimum passing through the threshold energy. Do you see any assumption behind the trap description that is not straightforwardly justified in the p-spin case?
</li>
</ol>
<br>
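
A minimal numerical sketch for the REM trapping times (in Python, assuming numpy; the temperature is an illustrative choice): sample deep energies from the exponential law of the first question, apply the Arrhenius law, and measure the tail exponent of the resulting trapping-time distribution, to be compared with the exponent <math>m</math> derived analytically.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

a = 2.0 * np.sqrt(np.log(2.0))   # decay rate of the deep-energy distribution
T_c = 1.0 / a                    # temperature scale set by that rate
T = 0.6 * T_c                    # illustrative temperature

# Deep REM energies: exponential law P(E) proportional to exp(a E) for E < 0
# (the shift by N sqrt(log 2) only enters the normalization constant C_N).
E = -rng.exponential(1.0 / a, size=200_000)

# Arrhenius trapping times
tau = np.exp(np.abs(E) / T)

# Empirical tail: P(tau > x) should decay as a power law x^(-m)
x = np.logspace(0.5, 3, 20)
tail = np.array([(tau > xi).mean() for xi in x])
slope = np.polyfit(np.log(x), np.log(tail), 1)[0]
print(f"measured tail exponent m = {-slope:.3f}   (at T = {T / T_c:.2f} / a)")
</syntaxhighlight>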
=== Problem 6.3: the Hessian at the stationary points, and random matrix theory ===
This is a continuation of Problem 5. To get the complexity of the spherical <math>p</math>-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this by exploiting results from random matrix theory.


<ol>
<li> <em> Gaussian random matrices. </em> Show that the matrix <math> M </math> is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution
<center>
<math>
P_N(M)= Z_N^{-1}e^{-\frac{N}{4 \sigma^2} \text{Tr} M^2},
</math>
</center>
where <math> Z_N </math> is a normalization. What is the value of <math> \sigma^2 </math>?
</li>
</ol>
<br>

<ol start="2">
<li><em> Eigenvalue density and concentration. </em> Let <math> \lambda_\alpha </math> be the eigenvalues of the matrix <math> M </math>. Show that the following identity holds:
<center>
<math>
\overline{|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|}= \overline{\text{exp} \left[(N-1) \left( \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]}, \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha),
</math>
</center>
where <math>\rho_{N-1}(\lambda)</math> is the empirical eigenvalue distribution. It can be shown that if <math> M </math> is a GOE matrix, the distribution of the empirical eigenvalue density has a large-deviation form (recall TD1) with speed <math> N^2 </math>, meaning that <math> P_N[\rho] = e^{-N^2 \, g[\rho]} </math>, where now <math> g[\cdot] </math> is a functional. Using a saddle point argument, show that this implies
<center>
<math>
\overline{\text{exp} \left[(N-1) \left( \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]}=\text{exp} \left[N \left( \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|\right)+ o(N) \right],
</math>
</center>
where <math> \rho_\infty(\lambda) </math> is the typical value of the eigenvalue density, which satisfies  <math> g[\rho_\infty]=0 </math>.
</li>
</ol>
<br>

<ol start="3">
<li><em> The semicircle and the complexity.</em> The eigenvalue density of GOE matrices is self-averaging, and it equals
<center>
<math>
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \overline{\rho_N}(\lambda)= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }.
</math>
</center>
<ul>
<li>Check this numerically: generate matrices for various values of <math> N </math>, plot their empirical eigenvalue density and compare with the asymptotic curve (a sketch is provided below). Is the convergence faster in the bulk, or at the edges of the eigenvalue density, where it vanishes?  </li>
<li> Combining all the results, show that the annealed complexity is
<center> <math>
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.
</math> </center>
The integral <math>  I_p(\epsilon)</math> can be computed explicitly, and one finds:
<center> <math>
I_p(\epsilon)=
\begin{cases}
&\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\
&\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon > \epsilon_{\text{th}}
\end{cases}
</math> </center>
Plot the annealed complexity, and determine numerically where it vanishes (see the second sketch below): why is this a lower bound for the ground state energy density?
</li>
</ul>
</li>
</ol>
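
A minimal numerical check of the semicircle law, as requested in the first bullet point above (in Python, assuming numpy and matplotlib; matrix sizes and sample numbers are illustrative). The sampling of the entries follows from the GOE weight of question 1: off-diagonal entries have variance <math>\sigma^2/N</math> and diagonal entries variance <math>2\sigma^2/N</math>.

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

sigma = 1.0                       # sets the scale of the ensemble

def goe(N):
    """Sample M with P(M) proportional to exp(-N Tr M^2 / (4 sigma^2))."""
    A = rng.normal(0.0, np.sqrt(2.0 * sigma**2 / N), size=(N, N))
    return (A + A.T) / 2.0

lam = np.linspace(-2 * sigma, 2 * sigma, 400)
rho_inf = np.sqrt(4 * sigma**2 - lam**2) / (2 * np.pi * sigma**2)

for N, n_samples in [(50, 200), (200, 50), (1000, 10)]:
    eigs = np.concatenate([np.linalg.eigvalsh(goe(N)) for _ in range(n_samples)])
    plt.hist(eigs, bins=60, density=True, histtype="step", label=f"N = {N}")

plt.plot(lam, rho_inf, "k--", label="semicircle")
plt.xlabel(r"$\lambda$")
plt.ylabel(r"$\rho(\lambda)$")
plt.legend()
plt.show()
</syntaxhighlight>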
<br>
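
A minimal sketch for the second bullet point (in Python, assuming numpy, scipy and matplotlib; <math>p=3</math> and the root-finding bracket are illustrative choices): it implements the explicit expression of <math>I_p(\epsilon)</math> given above, plots <math>\Sigma_{\text{a}}(\epsilon)</math>, and locates its zero with a root finder.

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import brentq

p = 3                                    # illustrative choice of p
eps_th = -2.0 * np.sqrt((p - 1) / p)     # threshold energy density

def I_p(eps):
    """Explicit form of the integral I_p(eps) quoted in the problem."""
    y = eps / eps_th                     # y >= 1 below the threshold
    if eps <= eps_th:
        return (y**2 - 0.5 - y * np.sqrt(y**2 - 1)
                + np.log(y + np.sqrt(y**2 - 1)) - np.log(2.0))
    return y**2 - 0.5 - np.log(2.0)

def sigma_annealed(eps):
    return 0.5 * np.log(4 * np.e * (p - 1)) - eps**2 / 2.0 + I_p(eps)

eps_grid = np.linspace(-1.75, -1.0, 400)
plt.plot(eps_grid, [sigma_annealed(e) for e in eps_grid])
plt.axhline(0.0, color="k", lw=0.5)
plt.axvline(eps_th, color="r", ls="--", label=r"$\epsilon_{\rm th}$")
plt.xlabel(r"$\epsilon$")
plt.ylabel(r"$\Sigma_{\rm a}(\epsilon)$")
plt.legend()

# the complexity vanishes slightly below the threshold (bracket chosen for p = 3)
eps_0 = brentq(sigma_annealed, -1.75, eps_th)
print(f"Sigma_a vanishes at eps_0 = {eps_0:.4f}  (threshold eps_th = {eps_th:.4f})")
plt.show()
</syntaxhighlight>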


<ol start="4">
<li><em> The threshold and the stability.</em>
Sketch <math> \rho_\infty(\lambda+p \epsilon) </math> for different values of <math> \epsilon </math> (see the sketch at the end of this problem); recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density
<center>
<math>
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.
</math>
</center>
When are the critical points stable local minima? When are they saddles? Why are the stationary points at <math> \epsilon= \epsilon_{\text{th}}</math> called  <em> marginally stable </em>?
</li>
</ol>
<br>
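
A minimal sketch for this question (in Python, assuming numpy and matplotlib; <math>p=3</math> is an illustrative choice): it plots the rescaled Hessian eigenvalue density appearing in the integral <math>I_p(\epsilon)</math> above, a unit-radius semicircle centred at <math>\epsilon/\epsilon_{\text{th}}</math>, for values of <math>\epsilon</math> below, at and above the threshold, showing how its support moves relative to the origin.

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

p = 3                                     # illustrative
eps_th = -2.0 * np.sqrt((p - 1) / p)

x = np.linspace(-2.5, 4.0, 800)
for eps in [1.3 * eps_th, eps_th, 0.7 * eps_th]:
    c = eps / eps_th                      # centre of the rescaled semicircle
    rho = (2.0 / np.pi) * np.sqrt(np.clip(1.0 - (x - c)**2, 0.0, None))
    plt.plot(x, rho, label=rf"$\epsilon = {eps:.2f}$")

plt.axvline(0.0, color="k", lw=0.5)       # the stability of the point is read off here
plt.xlabel(r"$x$ (rescaled Hessian eigenvalue)")
plt.ylabel(r"$\rho$")
plt.legend()
plt.show()
</syntaxhighlight>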


== Check out: key concepts ==


Gradient descent, rugged landscapes, metastable states, Hessian matrices, random matrix theory, landscape complexity, trapping and activation, aging, weak ergodicity breaking.


== References ==
* J.-P. Bouchaud, ''Weak ergodicity breaking and aging in disordered systems'' [https://hal.science/jpa-00246652/document]
