T-6

From Disordered Systems Wiki
<strong>Goal: </strong> Complete the characterisation of the energy landscape of the spherical <math>p</math>-spin.
<br>
<strong>Techniques: </strong> saddle point, random matrix theory.
<br>
<br>




<strong>Key concepts: </strong>  gradient descent, rugged landscapes, metastable states, Hessian matrices, random matrix theory, landscape’s complexity.
== Problems ==


=== Problem 6: the Hessian at the stationary points, and random matrix theory ===


This is a continuation of Problem 5. To obtain the complexity of the spherical <math>p</math>-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this by exploiting results from random matrix theory.




<ol>
<li> <em> Gaussian Random matrices. </em> Show that the matrix <math> M </math> is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution
<center><math>
P_N(M)= Z_N^{-1}e^{-\frac{N}{4 \sigma^2} \text{Tr} M^2}
</math></center>
where <math> Z_N </math> is a normalization. What is the value of <math> \sigma^2 </math>?
</li>
</ol>
<br>


<ol start="2">
<li><em> Eigenvalue density and concentration. </em> Let <math> \lambda_\alpha </math> be the eigenvalues of the matrix <math> M </math>. Show that the following identity holds:
<center>
<math>
\overline{|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|}=  \overline{\text{exp} \left[(N-1) \left( \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]}, \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)
</math>
</center>
where <math>\rho_{N-1}(\lambda)</math> is the empirical eigenvalue distribution. It can be shown that if <math> M </math> is a GOE matrix, the probability of the empirical eigenvalue distribution has a large deviation form (recall TD1) with speed <math> N^2 </math>, meaning that <math> P_N[\rho] = e^{-N^2 \, g[\rho]} </math>, where <math> g[\cdot] </math> is a functional. Using a saddle point argument, show that this implies
<center>
<math>
\overline{\text{exp} \left[(N-1) \left( \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]}=\text{exp} \left[N \left( \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|\right)+ o(N) \right]
</math>
</center>
where <math> \rho_\infty(\lambda) </math> is the typical value of the eigenvalue density, which satisfies  <math> g[\rho_\infty]=0 </math>.
</li>
</ol>
<br>
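As a numerical aside (not part of the original exercise), the GOE density of point 1 can be sampled directly: writing out <math>\text{Tr} M^2</math> shows that the off-diagonal entries are independent Gaussians of variance <math>\sigma^2/N</math> and the diagonal entries have variance <math>2\sigma^2/N</math>. A minimal NumPy sketch (the value <math>\sigma^2 = 1</math> is an arbitrary placeholder, not the answer to the question above):

```python
import numpy as np

def sample_goe(n, sigma2=1.0, seed=None):
    """Sample an n x n matrix with density ~ exp(-n Tr M^2 / (4 sigma2)):
    symmetric, off-diagonal variance sigma2/n, diagonal variance 2*sigma2/n."""
    rng = np.random.default_rng(seed)
    a = rng.normal(scale=np.sqrt(sigma2 / n), size=(n, n))
    return (a + a.T) / np.sqrt(2.0)   # symmetrisation keeps the variances above

m = sample_goe(400, seed=0)
off = m[np.triu_indices_from(m, k=1)]
print(off.var() * 400, np.diag(m).var() * 400)  # close to sigma2 and 2*sigma2
```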

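The identity of point 2 can also be checked numerically: <math>\log|\text{det}(M - p\epsilon \mathbb{I})|</math> is just <math>\sum_\alpha \log|\lambda_\alpha - p\epsilon|</math>. The sketch below (with an arbitrary shift playing the role of <math>p\epsilon</math>, chosen outside the spectrum to avoid the logarithmic singularity) compares the eigenvalue formula with a direct determinant, and illustrates the concentration of the per-<math>N</math> value:

```python
import numpy as np

def goe(n, rng):
    """Symmetric Gaussian matrix, off-diagonal variance 1/n (sigma2 = 1)."""
    a = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
    return (a + a.T) / np.sqrt(2.0)

def log_abs_det_shifted(m, shift):
    """log|det(M - shift*I)| via the eigenvalues: sum_a log|lambda_a - shift|."""
    lam = np.linalg.eigvalsh(m)
    return float(np.sum(np.log(np.abs(lam - shift))))

rng = np.random.default_rng(1)
n, shift = 300, 3.0   # shift plays the role of p*epsilon (hypothetical value)

# the eigenvalue representation agrees with the determinant itself
m = goe(n, rng)
_, logdet = np.linalg.slogdet(m - shift * np.eye(n))

# (1/N) log|det| concentrates: sample-to-sample fluctuations are O(1/N)
vals = [log_abs_det_shifted(goe(n, rng), shift) / n for _ in range(20)]
print(np.mean(vals), np.std(vals))
```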

<ol start="3">
<li><em> The semicircle and the complexity.</em> The eigenvalue density of GOE matrices is self-averaging, and it equals
<center>
<math>
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \overline{\rho_N}(\lambda)= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }
</math>
</center>
<ul>
<li>Check this numerically: generate matrices for various values of <math> N </math>, plot their empirical eigenvalue density and compare it with the asymptotic curve. Is the convergence faster in the bulk, or at the edges of the eigenvalue density, where it vanishes?  </li>
<li> Combining all the results, show that the annealed complexity is
<center> <math>
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.
</math> </center>
The integral <math> I_p(\epsilon)</math> can be computed explicitly, and one finds:
<center> <math>
I_p(\epsilon)=
\begin{cases}
&\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\
&\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon > \epsilon_{\text{th}}
\end{cases}
</math> </center>
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?
</li>
</ul>
</li>
</ol>
<br>
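The numerical check suggested in the first bullet can be sketched as follows (again with the placeholder <math>\sigma^2 = 1</math>; plotting is omitted and the comparison is done on a histogram):

```python
import numpy as np

def goe(n, seed=None):
    """Symmetric Gaussian matrix with semicircle support [-2, 2] (sigma2 = 1)."""
    rng = np.random.default_rng(seed)
    a = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
    return (a + a.T) / np.sqrt(2.0)

def semicircle(lam, sigma2=1.0):
    """rho_inf(lambda) = sqrt(4*sigma2 - lambda^2) / (2*pi*sigma2) on its support."""
    return np.sqrt(np.clip(4 * sigma2 - lam**2, 0.0, None)) / (2 * np.pi * sigma2)

n = 1000
lam = np.linalg.eigvalsh(goe(n, seed=0))
hist, edges = np.histogram(lam, bins=40, range=(-2.2, 2.2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
err = np.abs(hist - semicircle(centers))
print(err.mean())            # empirical density is close to the semicircle
print(lam.min(), lam.max())  # the edges fluctuate more (O(N^{-2/3}) corrections)
```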

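A short numerical sketch of the last bullet, using the explicit form of <math> I_p(\epsilon)</math> quoted above (the choice <math>p=3</math> is illustrative, and bisection stands in for reading the zero off a plot):

```python
import numpy as np

p = 3                                   # illustrative choice of p
eps_th = -2 * np.sqrt((p - 1) / p)      # threshold energy density

def I_p(eps):
    """Explicit form of the integral I_p(eps) quoted in the text."""
    u = eps / eps_th                    # u >= 1 below the threshold (eps, eps_th < 0)
    if eps <= eps_th:
        s = np.sqrt(u**2 - 1)
        return u**2 - 0.5 - u * s + np.log(u + s) - np.log(2)
    return u**2 - 0.5 - np.log(2)

def sigma_a(eps):
    """Annealed complexity Sigma_a(eps)."""
    return 0.5 * np.log(4 * np.e * (p - 1)) - eps**2 / 2 + I_p(eps)

# bisection for the zero of the complexity below the threshold
lo, hi = -2.0, eps_th                   # sigma_a(lo) < 0 < sigma_a(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if sigma_a(mid) < 0 else (lo, mid)
eps_gs = (lo + hi) / 2
print(eps_gs)   # ~ -1.657 for p = 3: a lower bound on the ground-state energy density
```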

<ol start="4">
<li><em> The threshold and the stability.</em>
Sketch <math> \rho_\infty(\lambda+p \epsilon) </math> for different values of <math> \epsilon </math>; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density
<math>
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.
</math>
When are the critical points stable local minima? When are they saddles? Why are the stationary points at <math> \epsilon= \epsilon_{\text{th}}</math> called <em> marginally stable</em>?
</li>
</ol>
<br>
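One way to see the transition numerically: the shifted semicircle <math> \rho_\infty(\lambda+p \epsilon) </math> is supported on <math>[-p\epsilon - 2\sigma,\, -p\epsilon + 2\sigma]</math>, so the Hessian spectrum becomes entirely positive once the left edge crosses zero. The value of <math>\sigma</math> below is a placeholder (its actual value is part of question 1):

```python
p, sigma = 3, 1.0            # sigma is a placeholder, not the answer to question 1

def support_edges(eps):
    """Support of rho_inf(lambda + p*eps): [-p*eps - 2*sigma, -p*eps + 2*sigma]."""
    return -p * eps - 2 * sigma, -p * eps + 2 * sigma

eps_c = -2 * sigma / p       # left edge touches zero here: marginal stability
left_deep, _ = support_edges(eps_c - 0.1)      # deeper (more negative) energies
left_shallow, _ = support_edges(eps_c + 0.1)   # shallower energies
print(left_deep, left_shallow)  # positive spectrum (minima) vs negative modes (saddles)
```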


== Check out: key concepts ==


Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.

Latest revision as of 15:54, 2 March 2025
