T-6

Revision as of 21:12, 8 February 2026

Goal: Complete the characterisation of the energy landscape of the spherical p-spin.
Techniques: saddle point, random matrix theory.


Problems

Problem 6: the Hessian at the stationary points, and random matrix theory

This is a continuation of Problem 5. To get the complexity of the spherical p-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this by exploiting results from random matrix theory discussed in the Tutorial and in Exercise 4.


  1. Gaussian Random matrices. Show that the matrix <math>M</math> is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution <math>P_N(M)= Z_N^{-1} \exp\left(-\frac{N}{4 \sigma^2} \text{Tr}\, M^2\right)</math>, where <math>Z_N</math> is a normalization. What is the value of <math>\sigma^2</math>?
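The ensemble defined above can be checked by direct sampling. With density <math>\propto \exp\left(-\frac{N}{4\sigma^2}\text{Tr}\, M^2\right)</math>, the off-diagonal entries of <math>M</math> are Gaussian with variance <math>\sigma^2/N</math> and the diagonal ones have variance <math>2\sigma^2/N</math>. A minimal sketch (the value <math>\sigma^2=1</math> is purely illustrative, not the answer to the question above):

```python
import numpy as np

def sample_goe(N, sigma2=1.0, rng=None):
    """Sample a GOE matrix with density ~ exp(-N/(4 sigma^2) Tr M^2).

    Off-diagonal entries then have variance sigma^2/N,
    diagonal entries 2*sigma^2/N.
    """
    rng = np.random.default_rng() if rng is None else rng
    A = rng.normal(0.0, np.sqrt(2.0 * sigma2 / N), size=(N, N))
    # Symmetrizing keeps Gaussianity and halves the off-diagonal variance.
    return (A + A.T) / 2.0

rng = np.random.default_rng(0)
N, n_samples = 50, 2000
Ms = np.array([sample_goe(N, rng=rng) for _ in range(n_samples)])
off_var = Ms[:, 0, 1].var()   # should be close to sigma^2/N   = 0.02
diag_var = Ms[:, 0, 0].var()  # should be close to 2*sigma^2/N = 0.04
print(off_var, diag_var)
```

The empirical variances converge to <math>\sigma^2/N</math> and <math>2\sigma^2/N</math> as the number of samples grows.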



  2. Eigenvalue density and concentration. Let <math>\lambda_\alpha</math> be the eigenvalues of the matrix <math>M</math>. Show that the following identity holds:
<math display="block">
\mathbb{E}\left[ |\det (M - p \epsilon \mathbb{I})| \right]= \mathbb{E}\left[\text{exp} \left[(N-1) \left( \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]\right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1}\sum_{\alpha=1}^{N-1} \delta(\lambda- \lambda_\alpha)
</math>
where <math>\rho_{N-1}(\lambda)</math> is the empirical eigenvalue distribution. It can be shown that if <math>M</math> is a GOE matrix, the distribution of the empirical distribution has a large deviation form with speed <math>N^2</math>, meaning that <math>P_N[\rho] = e^{-N^2 \, g[\rho]}</math>, where now <math>g[\cdot]</math> is a functional. Using a saddle point argument, show that this implies
<math display="block">
\mathbb{E}\left[\text{exp} \left[(N-1) \left( \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]\right]=\text{exp} \left[N \left( \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|\right)+ o(N) \right]
</math>
where <math>\rho_\infty(\lambda)</math> is the typical value of the eigenvalue density, which satisfies <math>g[\rho_\infty]=0</math>.
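The concentration statement can be illustrated numerically: already at moderate <math>N</math>, the linear statistic <math>\frac{1}{N}\sum_\alpha \log|\lambda_\alpha - u|</math> is very close to its average over the limiting eigenvalue density. A sketch, assuming <math>\sigma^2=1</math> for illustration (so the semicircle support is <math>[-2,2]</math>) and taking <math>u</math> outside the support so the integrand is smooth:

```python
import numpy as np

# For a GOE matrix, (1/N) sum_a log|lambda_a - u| concentrates on the
# semicircle average  int rho_inf(lambda) log|lambda - u| d lambda.
rng = np.random.default_rng(1)
N, u = 400, 3.0  # u = 3 lies outside the support [-2, 2]

def goe(N, rng):
    A = rng.normal(0.0, np.sqrt(2.0 / N), size=(N, N))
    return (A + A.T) / 2.0

# Empirical average of the linear statistic over a few samples.
empirical = np.mean([np.mean(np.log(np.abs(np.linalg.eigvalsh(goe(N, rng)) - u)))
                     for _ in range(20)])

# Semicircle prediction by trapezoidal quadrature.
lam = np.linspace(-2.0, 2.0, 4001)
f = np.sqrt(4.0 - lam**2) / (2.0 * np.pi) * np.log(np.abs(lam - u))
predicted = float(np.sum((f[1:] + f[:-1]) * np.diff(lam)) / 2.0)
print(empirical, predicted)  # the two numbers agree to a few percent
```

The agreement improves as <math>N</math> grows, consistently with the fluctuations of such linear statistics being of order <math>1/N</math>.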



  3. The semicircle and the complexity. The eigenvalue density of GOE matrices is self-averaging, and it equals
<math display="block">
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \overline{\rho_N}(\lambda)= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2}
</math>
    • Check this numerically: generate matrices for various values of <math>N</math>, plot their empirical eigenvalue density, and compare with the asymptotic curve. Is the convergence faster in the bulk, or at the edges of the eigenvalue density, where it vanishes?
    • Combining all the results, show that the annealed complexity is
<math display="block">
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.
</math>
The integral <math>I_p(\epsilon)</math> can be computed explicitly, and one finds:
<math display="block">
I_p(\epsilon)=
\begin{cases}
&\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}- \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+\log \left(\frac{\epsilon}{\epsilon_{\text{th}}}+\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}\right)-\log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\
&\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon > \epsilon_{\text{th}}
\end{cases}
</math>
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?
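The two expressions for <math>I_p</math> can be checked against each other numerically. The sketch below (for <math>p=3</math>, plain Python) evaluates <math>I_p</math> both from the closed form and by quadrature of the integral representation, and brackets the zero of <math>\Sigma_{\text{a}}</math> below <math>\epsilon_{\text{th}}</math> by bisection, using the fact that <math>\Sigma_{\text{a}}(\epsilon_{\text{th}})>0</math> while <math>\Sigma_{\text{a}}</math> is negative at sufficiently low energies:

```python
import math

p = 3
eps_th = -2.0 * math.sqrt((p - 1) / p)

def I_closed(eps):
    """Closed form of I_p(eps); t = eps/eps_th >= 1 on the first branch."""
    t = eps / eps_th
    if eps <= eps_th:
        s = math.sqrt(t * t - 1.0)
        return t * t - 0.5 - t * s + math.log(t + s) - math.log(2.0)
    return t * t - 0.5 - math.log(2.0)

def I_numeric(eps, n=100000):
    """Midpoint quadrature of (2/pi) int sqrt(1-(x-t)^2) log|x| dx."""
    t = eps / eps_th
    a, h = t - 1.0, 2.0 / n  # support of the square root is [t-1, t+1]
    total = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        if x != 0.0:  # the log singularity is integrable
            total += math.sqrt(max(0.0, 1.0 - (x - t) ** 2)) * math.log(abs(x)) * h
    return 2.0 / math.pi * total

def sigma_a(eps):
    return 0.5 * math.log(4.0 * math.e * (p - 1)) - eps * eps / 2.0 + I_closed(eps)

# Bisection for the zero of Sigma_a below eps_th.
lo, hi = eps_th - 0.5, eps_th
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if sigma_a(mid) < 0 else (lo, mid)
eps_gs = 0.5 * (lo + hi)
print(eps_gs)  # a lower bound for the ground state energy density
```

A quick consistency check at <math>\epsilon=0</math>: the second branch gives <math>I_p(0)=-\tfrac12-\log 2</math>, so <math>\Sigma_{\text{a}}(0)=\tfrac12\log(p-1)</math>.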


  4. The threshold and the stability. Sketch <math>\rho_\infty(\lambda+p \epsilon)</math> for different values of <math>\epsilon</math>; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density <math>\epsilon_{\text{th}}=-2\sqrt{(p-1)/p}</math>. When are the stationary points stable local minima? When are they saddles? Why are the stationary points at <math>\epsilon=\epsilon_{\text{th}}</math> called marginally stable?
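The stability transition can also be seen directly on sampled matrices. The sketch below uses an illustrative <math>\sigma=1</math> (not the <math>\sigma</math> of the p-spin Hessian found in question 1, so here the crossing happens at <math>\epsilon=-2\sigma/p</math> rather than at the <math>\epsilon_{\text{th}}</math> quoted above): the smallest eigenvalue of <math>M - p\epsilon\,\mathbb{I}</math> sits near the lower edge <math>-2\sigma-p\epsilon</math> of the shifted semicircle, and changes sign as <math>\epsilon</math> crosses <math>-2\sigma/p</math>.

```python
import numpy as np

# Smallest eigenvalue of the shifted matrix M - p*eps*I versus the lower
# edge -2*sigma - p*eps of the shifted semicircle (illustrative sigma = 1).
rng = np.random.default_rng(2)
p, N, sigma = 3, 1000, 1.0

def goe(N, rng):
    A = rng.normal(0.0, np.sqrt(2.0 / N), size=(N, N))
    return (A + A.T) / 2.0

results = {}
for eps in (-1.0, -2.0 / p, -0.2):
    lam_min = float(np.linalg.eigvalsh(goe(N, rng) - p * eps * np.eye(N)).min())
    edge = -2.0 * sigma - p * eps
    results[eps] = (lam_min, edge)
    print(f"eps={eps:+.3f}: min eigenvalue {lam_min:+.3f}, edge {edge:+.3f}")
```

For <math>\epsilon</math> below the crossing the spectrum is entirely positive (a stable minimum), above it the smallest eigenvalue is negative (a saddle), and at the crossing the lower edge touches zero, which is the marginal situation.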


Check out: key concepts

Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.