<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://www.lptms.universite-paris-saclay.fr//wikids/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ros</id>
	<title>Disordered Systems Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://www.lptms.universite-paris-saclay.fr//wikids/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ros"/>
	<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php/Special:Contributions/Ros"/>
	<updated>2026-04-11T23:08:56Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4302</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4302"/>
		<updated>2026-03-26T09:28:34Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Evaluation and exam */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the WIKI page of the M2 ICFP course on the Statistical Physics of Disordered Systems, second semester 2026.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Where and When =&lt;br /&gt;
&lt;br /&gt;
* Each Monday from 2pm to 6pm, from January 19th to March 23rd. No lecture on 23/02/26.&lt;br /&gt;
* Room 202 in Jussieu campus, Tours 54-55 until 16th February&lt;br /&gt;
* Room 209 in Jussieu campus, Tours 56-66 from 2nd March  &lt;br /&gt;
* Room 107 in Jussieu campus, Tours 24-34 on 23 March  &#039;&#039;&#039;Attention: ROOM CHANGE LAST LECTURE!&#039;&#039;&#039;&lt;br /&gt;
* Each session is a mixture of lectures and exercises.&lt;br /&gt;
&lt;br /&gt;
= The Team =&lt;br /&gt;
&lt;br /&gt;
*  [https://vale1925.wixsite.com/vros  Valentina Ros] - vale1925@gmail.com&lt;br /&gt;
* [http://lptms.u-psud.fr/alberto_rosso/ Alberto Rosso] - alberto.rosso74@gmail.com&lt;br /&gt;
&lt;br /&gt;
= Course description =&lt;br /&gt;
&lt;br /&gt;
This course deals with systems in which the presence of impurities or amorphous structures (in other words, of disorder) radically influences the physics, generating novel phenomena. These phenomena involve the properties of the system at equilibrium (freezing and glass transitions), as well as its dynamical evolution out of equilibrium (pinning, avalanches), giving rise to ergodicity breaking both in the absence and in the presence of quantum fluctuations (classical metastability, quantum localization). We discuss the main statistical-physics models that capture the phenomenology of these systems, as well as the powerful theoretical tools (replica theory, large deviations, random matrix theory, scaling arguments, strong-disorder expansions) developed to characterize their physics quantitatively. These tools nowadays have a huge impact on a variety of fields well beyond statistical physics (computer science, probability, condensed matter, theoretical biology). Below is a list of the topics discussed during the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Finite-dimensional disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  Introduction to disordered systems and to the spin glass transition.&lt;br /&gt;
*  Interface growth. Directed polymers in random media. Scenarios for the glass transition: the glass transition in KPZ in d&amp;gt;2. &lt;br /&gt;
*  Depinning and avalanches. Bienaymé-Galton-Watson processes. &lt;br /&gt;
*  Anderson localization: introduction. Localization in 1D: transfer matrix and Lyapunov exponent. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mean-field disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  The simplest spin-glass: solution of the Random Energy Model.&lt;br /&gt;
*  The replica method: the solution of the spherical p-spin model. Sketch of the solution of Sherrington Kirkpatrick model (full RSB).&lt;br /&gt;
*  Towards glassy dynamics: rugged landscapes. Slow dynamics and aging: the trap model.&lt;br /&gt;
*  The Anderson model on the Bethe lattice: the mobility edge.&lt;br /&gt;
&lt;br /&gt;
=Lectures and tutorials=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the layout of the formulas is bad, it might be because you are using Safari. Try opening the wiki with Firefox or Chrome.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  14h00-15h45&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  16h00-17h45&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 1 (19/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L1_ICTS| Spin Glass Transition (Alberto)]]&lt;br /&gt;
&amp;lt;!--[[H_1|Exercises on Extreme Values Statistics]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-I|  A dictionary. The REM: energy landscape (Valentina)]]&amp;amp;nbsp; [[Media:2025 P1 solutions.pdf| Sol Prob.1 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 2 (26/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L2_ICFP| Stochastic Interfaces and growth (Alberto)]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-2|The REM: freezing, condensation, glassiness (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P2 solutions.pdf| Sol Prob.2 ]]  &lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 3 (02/02)&lt;br /&gt;
|&lt;br /&gt;
* [[L-3|Directed polymer in random media (Alberto)]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-4| KPZ and glassiness in finite dimension (Alberto)]]  [https://colab.research.google.com/drive/1PTya42ZS2kU87A-BxQFFIUDTs_k47men?usp=sharing notebook]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 4 (9/02) and Week 5 (16/02)&lt;br /&gt;
|&lt;br /&gt;
* [[T-3| Spin glasses, equilibrium: replicas, the steps (Valentina)]]&amp;amp;nbsp;  [[Media:2025 P3 solutions.pdf| Sol Prob.3 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-4| Spin glasses, equilibrium: replicas, the interpretation (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P4 solutions.pdf| Sol Probs. 4 ]]&amp;amp;nbsp; &lt;br /&gt;
[[Media:2025_Parisi_scheme.pdf| Notes: Probing states with replicas]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 6 (02/03)&lt;br /&gt;
|&lt;br /&gt;
* [[LBan-IV| Driven Disordered Materials  (Alberto)]] [[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[LBan-V| Avalanches in Disordered Materials (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 7 (9/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-7| Anderson localization: introduction (Alberto)]]&lt;br /&gt;
| &lt;br /&gt;
* [[T-5| Rugged landscapes: counting metastable states (Valentina)]] &amp;amp;nbsp;[[Media:2025 P5 solutions.pdf| Sol Prob.5 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 8 (16/03)&lt;br /&gt;
|&lt;br /&gt;
* [[T-6| Rugged landscapes: stability of metastable states (Valentina)]] [[Media:2025 P666 solutions .pdf| Sol Prob.6 ]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-7| Trap model and aging dynamics (Valentina)]] [[Media:2025 P7 solutions .pdf| Sol Prob.7 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 9 (23/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-8| Localization in 1D, transfer matrix and Lyapunov exponent (Alberto)]]  [https://colab.research.google.com/drive/1ZJ0yvMrtflWNNmPfaRQ8KTteoWfm0bqk?usp=sharing notebook]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-9|Multifractality, tails (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Extra (not in exam!)&lt;br /&gt;
|&lt;br /&gt;
* [[T-8| Localization on Bethe lattice: cavity &amp;amp; recursion (Valentina)]] [[Media:2025 P8 solutions.pdf| Sol Prob.8 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-9| Localization on Bethe lattice: stability &amp;amp; mobility edge (Valentina)]] [[Media:2025 P9 solutions.pdf| Sol Prob.9 ]]&amp;amp;nbsp;&lt;br /&gt;
&amp;lt;!--[[Media:2025_localization_notes.pdf| Notes: Localization: no dissipation, no self-bath]]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--== Homework ==&lt;br /&gt;
There are two homeworks: Homework 1 on Random Matrices, and Homework 2 on the topics of the lecture. &lt;br /&gt;
[[Media:RMT_introduction.pdf| Homework 1 on Random Matrices ]]&lt;br /&gt;
[[Media:2025_HW2.pdf| Homework 2 on topics of lectures ]]&lt;br /&gt;
Homework 1 is worth 5 points, Homework 2 is worth 15 points.&lt;br /&gt;
&#039;&#039;&#039;In the final grade calculation, the total score from both assignments will have a weight of 0.25, while the exam will account for 0.75.&#039;&#039;&#039;&lt;br /&gt;
Homework 1 due by Monday, February 17th.&lt;br /&gt;
Homework 2 due by Monday, March 24th.&lt;br /&gt;
&#039;&#039;&#039;Extra&#039;&#039;&#039;&lt;br /&gt;
Here is a notebook on random matrices (made by M. Biroli) with two coding exercises. You can download the notebook from the link below, and use the online platform: [https://jupyter.org/try-jupyter/lab/  Jupyter] to modify the notebook and add the solutions to the two exercises.&lt;br /&gt;
[https://colab.research.google.com/drive/13z_RnRlCq5p3ihDQulOPftqb05nsrTqQ?usp=sharing| Homework 1: notebook]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Week 1: [[Media:Exercises_1-3.pdf| Exercises 1-3 on extreme value statistics]]&lt;br /&gt;
* Week 2: [[Media:Tutorial_and_Exercise_4.pdf| Tutorial and Exercise 4 on random matrices]] and [[Media:Exercises_5-6.pdf| Exercises 5-6 on the random energy model]]&lt;br /&gt;
* Week 3: [[Media:Exercises 7&amp;amp;8.pdf| Exercises 7-8 on interfaces]]&lt;br /&gt;
* Week 4: [[Media:Exercises 9-10.pdf| Exercises 9-10 on glassiness]]&lt;br /&gt;
* Weeks 5 and 6: [[Media:11-12_Exercises.pdf| Exercises 11-12 on dynamics]]&lt;br /&gt;
* Week 7: [[Media:Exercises_13_15.pdf| Exercises 13-15 on branching and localization]]&lt;br /&gt;
* Week 8: [[Media:Exercise_16-17.pdf| Exercises 16-17 on trap model and localization]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;&lt;br /&gt;
[[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
&amp;lt;/li&amp;gt; --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Evaluation and exam =&lt;br /&gt;
&lt;br /&gt;
The exam will take place on &#039;&#039;&#039;Monday, March 30th 2026, in room 105, Tours 24-25&#039;&#039;&#039;. It is a written exam, 3 hours long, from 2pm to 5pm, and it consists of two parts:&lt;br /&gt;
&lt;br /&gt;
Part 1: theory questions, see [[Sample questions|HERE]] for some examples. &lt;br /&gt;
&lt;br /&gt;
Part 2: you will be asked to solve pieces of the 17 exercises given to you in advance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You are not allowed to bring any material (printed notes, handwritten notes) nor to use any device during the exam.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
All relevant formulas will be provided in the text of the exam. A printed copy of the WIKI pages will be available for you to consult. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The students have two possibilities:&lt;br /&gt;
&lt;br /&gt;
(1) A final written exam which counts for the total grade.&lt;br /&gt;
&lt;br /&gt;
(2) An homework assignement  + a written exam. The final grade is given by a weighted average of the two grades (the homework counts 1/4 and the written exam 3/4).&lt;br /&gt;
&#039;&#039;&#039;The written exam will be on Monday, March 31st 2025 in the Jussieu campus, Room 101, Tours 14 - 24, from 2pm to 5pm.&#039;&#039;&#039;--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4289</id>
		<title>T-7</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4289"/>
		<updated>2026-03-16T20:34:45Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Check out: key concepts and exercises */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; The goal of these problems is to understand some features of glassy dynamics (power laws, aging) within a simplified description, the so-called trap model.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; extreme value statistics, asymptotic analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A dynamical dictionary:  out-of-equilibrium, aging ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Correlation Function.png|thumb|right|x160px|Fig. 7 - Behaviour of the correlation function in a system displaying aging.]]&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibrating dynamics.&#039;&#039;&#039; A system evolving with thermal dynamics (e.g. Langevin dynamics) &amp;lt;ins&amp;gt; equilibrates dynamically &amp;lt;/ins&amp;gt; if there is a timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; beyond which the dynamical trajectories sample the configurations of the system &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with the frequency prescribed by the Gibbs-Boltzmann measure, &amp;lt;math&amp;gt; \sim e^{-\beta E(\vec{\sigma})} &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; is the inverse temperature associated with the noise. At equilibrium, one-point functions in time, like the energy of the system, reach a stationary value (the equilibrium value predicted by thermodynamics at that temperature), while two-point functions like the correlation function &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w+ t, t_w)= \frac{1}{N} \sum_{i=1}^N \sigma_i(t_w) \sigma_i(t_w+t)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
are &amp;lt;ins&amp;gt; time-translation invariant&amp;lt;/ins&amp;gt;, meaning that &amp;lt;math&amp;gt; C(t_w+ t, t_w) \sim c(t) &amp;lt;/math&amp;gt; is only a function of the difference between the two times, and does not depend on &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;.&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Out-of-equilibrium and aging.&#039;&#039;&#039; In some systems the equilibration timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; is extremely large, or diverges with some parameter of the model (such as &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;), and over very long timescales the dynamics remains &amp;lt;ins&amp;gt;out-of-equilibrium&amp;lt;/ins&amp;gt;. In glassy systems, out-of-equilibrium dynamics is often characterized by &amp;lt;ins&amp;gt;aging&amp;lt;/ins&amp;gt;: the relaxation timescale of the system (how slowly it evolves) depends on its age (how long it has evolved so far). Aging can be seen in the behaviour of the correlation function (see Fig. 7): the time the system needs to leave the plateau increases with its age &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;, meaning that the system becomes slower and slower as it gets older.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 7: a simple model for aging ===&lt;br /&gt;
&lt;br /&gt;
[[File:Trap.png|thumb|right|x160px|Fig 6.3 - Traps in the trap model.]]&lt;br /&gt;
The trap model is an abstract model for dynamics in complex landscapes. The configuration space is a collection of &amp;lt;math&amp;gt; M \gg 1 &amp;lt;/math&amp;gt; traps labeled by &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt; (see sketch). The model is random, since each trap is associated with a random number &amp;lt;math&amp;gt; \tau_\alpha &amp;lt;/math&amp;gt;, called its mean trapping time. The dynamics is stochastic: it is a sequence of jumps between the traps, where the system spends in a trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt; a random amount of time with mean value &amp;lt;math&amp;gt; \tau_\alpha&amp;lt;/math&amp;gt;. This means that the probability of jumping out of the trap during &amp;lt;math&amp;gt; [t, t+dt]&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt; dt/\tau_\alpha &amp;lt;/math&amp;gt;. When the system exits a trap, it jumps into another one chosen uniformly at random among the &amp;lt;math&amp;gt; M&amp;lt;/math&amp;gt; traps. The mean trapping times are distributed as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
P_\mu(\tau)= \frac{\mu \tau_0^\mu}{\tau^{1+\mu}} \quad \quad \tau \geq \tau_0 &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; is a parameter. In this Problem, we aim to understand the main features of this dynamics. &amp;lt;br&amp;gt;&lt;br /&gt;
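The jump dynamics defined above can be sketched in a few lines of code. This is a minimal illustration, not part of the official problem set: the function names and the chosen values of &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \tau_0 &amp;lt;/math&amp;gt; are our own; the trapping times are drawn from &amp;lt;math&amp;gt; P_\mu(\tau) &amp;lt;/math&amp;gt; by inverting its cumulative distribution.&lt;br /&gt;

```python
# Minimal sketch (illustration only): draw mean trapping times from
# P_mu(tau) = mu tau0^mu / tau^(1+mu), tau at least tau0, by inverse-CDF
# sampling, and probe the divergence of the average trapping time.
import random

def sample_trapping_time(mu, tau0=1.0):
    # CDF: F(tau) = 1 - (tau0/tau)^mu, so tau = tau0 * (1-u)^(-1/mu), u uniform
    u = random.random()
    return tau0 * (1.0 - u) ** (-1.0 / mu)

def empirical_mean(mu, n, seed=0):
    # Average trapping time over n sampled traps (fixed seed for reproducibility)
    random.seed(seed)
    return sum(sample_trapping_time(mu) for _ in range(n)) / n

# For mu above 1 the average converges (to mu*tau0/(mu-1), here 1.5);
# for mu below 1 it keeps growing with the sample size: no finite mean.
for mu in (3.0, 0.5):
    print(mu, [round(empirical_mean(mu, n), 2) for n in (10**2, 10**4, 10**6)])
```

For &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; below 1 the running average is dominated by the deepest trap sampled so far, a numerical hint of the condensation phenomenon discussed in the problem below.&lt;br /&gt;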
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Ergodicity breaking.&amp;lt;/em&amp;gt; Compute the average trapping time (averaging between the traps) and show that there is a critical value of &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; below which it diverges, signalling a non-ergodic phase: the system needs infinite time to explore the whole configuration space.&lt;br /&gt;
&amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Condensation.&amp;lt;/em&amp;gt; Consider a dynamics running from time &amp;lt;math&amp;gt;t_w&amp;lt;/math&amp;gt; to some later time &amp;lt;math&amp;gt; t_w+ t&amp;lt;/math&amp;gt;: compute the typical value of the maximal trapping time &amp;lt;math&amp;gt; \tau_{\text{max}}(t) &amp;lt;/math&amp;gt; encountered in this time interval, assuming that the system has spent exactly a time &amp;lt;math&amp;gt; \tau_\alpha &amp;lt;/math&amp;gt; in each visited trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt;. Show that in the non-ergodic phase &amp;lt;math&amp;gt; \tau_{\text{max}}(t) \sim t &amp;lt;/math&amp;gt;. Why is this interpretable as a condensation phenomenon? &lt;br /&gt;
&amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt;Aging and weak ergodicity breaking.&amp;lt;/em&amp;gt; Justify why the correlation function for this model can be written as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w + t, t_w)=\Pi(t, t_w), \quad \quad   \Pi(t, t_w)= \text{probability that the system has not jumped in }[t_w, t_w+t].&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
In the non-ergodic regime, one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Pi(t, t_w)= \frac{\sin (\pi \mu)}{\pi}\int_{\frac{t}{t+ t_w}}^1 du (1-u)^{\mu-1}u^{-\mu}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Why is this an indication of aging? Show that &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} C(t_w + t, t_w)=0 \quad \text{ for finite }t_w, \quad \quad \lim_{t_w \to \infty} C(t_w + t, t_w)=1 \quad \text{ for finite }t&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This behaviour is called &amp;quot;weak ergodicity breaking&amp;quot;. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &lt;br /&gt;
 &amp;lt;em&amp;gt;Power laws.&amp;lt;/em&amp;gt; Study the asymptotic behavior of the correlation function for &amp;lt;math&amp;gt; t \ll t_w &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \gg t_w &amp;lt;/math&amp;gt; and show that the dynamics is slow, characterized by power laws (algebraic behaviour). &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
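The aging form of &amp;lt;math&amp;gt; \Pi(t, t_w) &amp;lt;/math&amp;gt; given above can be checked numerically. This is a sketch with helper names of our own, using a midpoint rule so that the integrable singularity of the integrand at &amp;lt;math&amp;gt; u=1 &amp;lt;/math&amp;gt; is never evaluated; for &amp;lt;math&amp;gt; \mu=1/2 &amp;lt;/math&amp;gt; the integral reduces to the arcsine-law form &amp;lt;math&amp;gt; (2/\pi)\arccos\sqrt{t/(t+t_w)} &amp;lt;/math&amp;gt;, which provides an independent check.&lt;br /&gt;

```python
# Numeric check (illustration only): Pi(t, t_w) depends on the two times
# only through the ratio x = t/(t + t_w), the hallmark of aging.
import math

def aging_function(t, tw, mu, steps=200000):
    # Pi = sin(pi mu)/pi times the integral over [x, 1] of (1-u)^(mu-1) u^(-mu) du
    x = t / (t + tw)
    h = (1.0 - x) / steps
    total = 0.0
    for k in range(steps):
        u = x + (k + 0.5) * h  # midpoint rule: endpoints are never evaluated
        total += (1.0 - u) ** (mu - 1.0) * u ** (-mu)
    return math.sin(math.pi * mu) / math.pi * total * h

# Rescaling both times leaves Pi unchanged; for mu = 1/2 compare
# with the closed form (2/pi) * arccos(sqrt(x)).
print(aging_function(1.0, 10.0, 0.5), aging_function(100.0, 1000.0, 0.5))
print(2.0 / math.pi * math.acos(math.sqrt(1.0 / 11.0)))
```

The two limits of weak ergodicity breaking follow from the same formula: &amp;lt;math&amp;gt; x \to 1 &amp;lt;/math&amp;gt; gives &amp;lt;math&amp;gt; \Pi \to 0 &amp;lt;/math&amp;gt; at fixed &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;, while &amp;lt;math&amp;gt; x \to 0 &amp;lt;/math&amp;gt; gives &amp;lt;math&amp;gt; \Pi \to 1 &amp;lt;/math&amp;gt; at fixed &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;.&lt;br /&gt;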
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--=== Problem 7.2: Motivating the model: from landscapes to traps ===&lt;br /&gt;
&amp;lt;!--and spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model. While for the &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we think about Langevin dynamics, for the REM we consider Monte Carlo dynamics: at each time step the system in a given configuration &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with energy &amp;lt;math&amp;gt; E_1 &amp;lt;/math&amp;gt; tries to transition to another configuration that differs with respect to the previous one by a single spin flip; let the energy of this second configuration be &amp;lt;math&amp;gt; E_2 &amp;lt;/math&amp;gt;. The transition occurs with probability one if &amp;lt;math&amp;gt; E_2 &amp;lt;E_1 &amp;lt;/math&amp;gt;, and with probability &amp;lt;math&amp;gt; e^{-\beta (E_2- E_1)}&amp;lt;/math&amp;gt;  otherwise.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: distribution of depths of traps.&amp;lt;/em&amp;gt; In the REM, the energy levels are independent Gaussian variables. In Lecture 1, we have shown that the Ground State &amp;lt;math&amp;gt; E_{\min} &amp;lt;/math&amp;gt; has the statistics of &amp;lt;math&amp;gt; E_{\min }=E_{\min }^{\rm typ}+ \frac{1}{\sqrt{2 \log 2}}z &amp;lt;/math&amp;gt;, with  &amp;lt;math&amp;gt; z &amp;lt;/math&amp;gt; Gumbel. The distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt; of the smallest energies values &amp;lt;math&amp;gt; E_\alpha &amp;lt;/math&amp;gt; among the &amp;lt;math&amp;gt; M=2^N &amp;lt;/math&amp;gt; can be assumed to be the same. Show that:&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_N^{\text{extrm}}(E) \approx C_N \text{exp}\left[ \sqrt{2\log 2} E  \right], \quad \quad E&amp;lt;0, \quad \quad C_N \text{ normalization}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
(Hint: approximate the Gumbel distribution for small argument).  &amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: trapping times.&amp;lt;/em&amp;gt; The Arrhenius law states that the time needed for the system to escape from a trap of energy density &amp;lt;math&amp;gt; \epsilon&amp;lt;0 &amp;lt;/math&amp;gt; and reach a configuration of zero energy density is &amp;lt;math&amp;gt; \tau \sim e^{-\beta N \epsilon} &amp;lt;/math&amp;gt;. This is a trapping time. Given the energy distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt;, determine the distribution of trapping times &amp;lt;math&amp;gt; P_\mu(\tau) &amp;lt;/math&amp;gt;: what plays the role of &amp;lt;math&amp;gt; \mu&amp;lt;/math&amp;gt;? Is the non-ergodic transition in the TRAP model consistent with what we know about the REM? &amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Extra: p-spin and the “trap” picture.&amp;lt;/em&amp;gt; In Problems 6, we have seen that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin is characterized by the threshold energy, below which plenty of minima appear. Explain why the trap model corresponds to the following picture for the dynamics: the system is trapped into minima below the threshold for exponentially large times, and then jumps from minimum to minimum passing through the threshold energy. &lt;br /&gt;
 &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts and exercises ==&lt;br /&gt;
&lt;br /&gt;
Key concepts: aging, activation, time-translation invariance, out-of-equilibrium dynamics, power laws, decorrelation, condensation, extreme values.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In &amp;lt;code&amp;gt;Exercise 13&amp;lt;/code&amp;gt;, you will see in which sense the trap model is a good effective description of the dynamics exploring a complicated energy landscape, focusing on the REM landscape as an example.&lt;br /&gt;
&lt;br /&gt;
== To know more ==&lt;br /&gt;
* Bouchaud. Weak ergodicity breaking and aging in disordered systems [https://hal.science/jpa-00246652/document]&lt;br /&gt;
* Biroli. A crash course on aging [https://arxiv.org/abs/cond-mat/0504681]&lt;br /&gt;
* Kurchan. Six out-of-equilibrium lectures [https://arxiv.org/abs/0901.1271]&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4288</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4288"/>
		<updated>2026-03-16T20:32:00Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Where and When */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the WIKI page of the M2 ICFP course on the Statistical Physics of Disordered Systems, second semester 2026.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Where and When =&lt;br /&gt;
&lt;br /&gt;
* Each Monday from 2pm to 6pm, from January 19th to March 23rd. No lecture on 23/02/26.&lt;br /&gt;
* Room 202 in Jussieu campus, Tours 54-55 until 16th February&lt;br /&gt;
* Room 209 in Jussieu campus, Tours 56-66 from 2nd March  &lt;br /&gt;
* Room 107 in Jussieu campus, Tours 23-34 on 23 March  &#039;&#039;&#039;Attention: ROOM CHANGE LAST LECTURE!&#039;&#039;&#039;&lt;br /&gt;
* Each session is a mixture of lectures and exercises.&lt;br /&gt;
&lt;br /&gt;
= The Team =&lt;br /&gt;
&lt;br /&gt;
*  [https://vale1925.wixsite.com/vros  Valentina Ros] - vale1925@gmail.com&lt;br /&gt;
* [http://lptms.u-psud.fr/alberto_rosso/ Alberto Rosso] - alberto.rosso74@gmail.com&lt;br /&gt;
&lt;br /&gt;
= Course description =&lt;br /&gt;
&lt;br /&gt;
This course deals with systems in which the presence of impurities or amorphous structures (in other words, of disorder) radically influences the physics, generating novel phenomena. These phenomena involve the properties of the system at equilibrium (freezing and glass transitions), as well as its dynamical evolution out of equilibrium (pinning, avalanches), giving rise to ergodicity breaking both in the absence and in the presence of quantum fluctuations (classical metastability, quantum localization). We discuss the main statistical-physics models that capture the phenomenology of these systems, as well as the powerful theoretical tools (replica theory, large deviations, random matrix theory, scaling arguments, strong-disorder expansions) developed to characterize their physics quantitatively. These tools nowadays have a huge impact on a variety of fields well beyond statistical physics (computer science, probability, condensed matter, theoretical biology). Below is a list of the topics discussed during the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Finite-dimensional disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  Introduction to disordered systems and to the spin glass transition.&lt;br /&gt;
*  Interface growth. Directed polymers in random media. Scenarios for the glass transition: the glass transition in KPZ in d&amp;gt;2. &lt;br /&gt;
*  Depinning and avalanches. Bienaymé-Galton-Watson processes. &lt;br /&gt;
*  Anderson localization: introduction. Localization in 1D: transfer matrix and Lyapunov exponent. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mean-field disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  The simplest spin-glass: solution of the Random Energy Model.&lt;br /&gt;
*  The replica method: the solution of the spherical p-spin model. Sketch of the solution of Sherrington Kirkpatrick model (full RSB).&lt;br /&gt;
*  Towards glassy dynamics: rugged landscapes. Slow dynamics and aging: the trap model.&lt;br /&gt;
*  The Anderson model on the Bethe lattice: the mobility edge.&lt;br /&gt;
&lt;br /&gt;
=Lectures and tutorials=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the layout of the formulas is bad, it might be because you are using Safari. Try opening the wiki with Firefox or Chrome.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  14h00-15h45&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  16h00-17h45&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 1 (19/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L1_ICTS| Spin Glass Transition (Alberto)]]&lt;br /&gt;
&amp;lt;!--[[H_1|Exercises on Extreme Values Statistics]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-I|  A dictionary. The REM: energy landscape (Valentina)]]&amp;amp;nbsp; [[Media:2025 P1 solutions.pdf| Sol Prob.1 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 2 (26/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L2_ICFP| Stochastic Interfaces and growth (Alberto)]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-2|The REM: freezing, condensation, glassiness (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P2 solutions.pdf| Sol Prob.2 ]]  &lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 3 (02/02)&lt;br /&gt;
|&lt;br /&gt;
* [[L-3|Directed polymer in random media (Alberto)]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-4| KPZ and glassiness in finite dimension (Alberto)]]  [https://colab.research.google.com/drive/1PTya42ZS2kU87A-BxQFFIUDTs_k47men?usp=sharing notebook]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 4 (9/02) and Week 5 (16/02)&lt;br /&gt;
|&lt;br /&gt;
* [[T-3| Spin glasses, equilibrium: replicas, the steps (Valentina)]]&amp;amp;nbsp;  [[Media:2025 P3 solutions.pdf| Sol Prob.3 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-4| Spin glasses, equilibrium: replicas, the interpretation (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P4 solutions.pdf| Sol Probs. 4 ]]&amp;amp;nbsp; &lt;br /&gt;
[[Media:2025_Parisi_scheme.pdf| Notes: Probing states with replicas]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 6 (02/03)&lt;br /&gt;
|&lt;br /&gt;
* [[LBan-IV| Driven Disordered Materials  (Alberto)]] [[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[LBan-V| Avalanches in Disordered Materials (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 7 (9/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-7| Anderson localization: introduction (Alberto)]]&lt;br /&gt;
| &lt;br /&gt;
* [[T-5| Rugged landscapes: counting metastable states (Valentina)]] &amp;amp;nbsp;[[Media:2025 P5 solutions.pdf| Sol Prob.5 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 8 (16/03)&lt;br /&gt;
|&lt;br /&gt;
* [[T-6| Rugged landscapes: stability of metastable states (Valentina)]] [[Media:2025 P666 solutions .pdf| Sol Prob.6 ]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-7| Trap model and aging dynamics (Valentina)]] [[Media:2025 P7 solutions .pdf| Sol Prob.7 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 9 (23/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-8| Localization in 1D, transfer matrix and Lyapunov exponent (Alberto)]]  [https://colab.research.google.com/drive/1ZJ0yvMrtflWNNmPfaRQ8KTteoWfm0bqk?usp=sharing notebook]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-9|Multifractality, tails (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Extra (not in exam!)&lt;br /&gt;
|&lt;br /&gt;
* [[T-8| Localization on Bethe lattice: cavity &amp;amp; recursion (Valentina)]] [[Media:2025 P8 solutions.pdf| Sol Prob.8 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-9| Localization on Bethe lattice: stability &amp;amp; mobility edge (Valentina)]] [[Media:2025 P9 solutions.pdf| Sol Prob.9 ]]&amp;amp;nbsp;&lt;br /&gt;
&amp;lt;!--[[Media:2025_localization_notes.pdf| Notes: Localization: no dissipation, no self-bath]]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--== Homework ==&lt;br /&gt;
There are two homeworks: Homework 1 on Random Matrices, and Homework 2 on the topics of the lecture. &lt;br /&gt;
[[Media:RMT_introduction.pdf| Homework 1 on Random Matrices ]]&lt;br /&gt;
[[Media:2025_HW2.pdf| Homework 2 on topics of lectures ]]&lt;br /&gt;
Homework 1 is worth 5 points, Homework 2 is worth 15 points.&lt;br /&gt;
&#039;&#039;&#039;In the final grade calculation, the total score from both assignments will have a weight of 0.25, while the exam will account for 0.75.&#039;&#039;&#039;&lt;br /&gt;
Homework 1 due by Monday, February 17th.&lt;br /&gt;
Homework 2 due by Monday, March 24th.&lt;br /&gt;
&#039;&#039;&#039;Extra&#039;&#039;&#039;&lt;br /&gt;
Here is a notebook on random matrices (made by M. Biroli) with two coding exercises. You can download the notebook from the link below, and use the online platform: [https://jupyter.org/try-jupyter/lab/  Jupyter] to modify the notebook and add the solutions to the two exercises.&lt;br /&gt;
[https://colab.research.google.com/drive/13z_RnRlCq5p3ihDQulOPftqb05nsrTqQ?usp=sharing| Homework 1: notebook]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 1: [[Media:Exercises_1-3.pdf| Exercises 1-3 on extreme value statistics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 2: &lt;br /&gt;
[[Media:Tutorial_and_Exercise_4.pdf| Tutorial and Exercise 4 on random matrices]] &amp;lt;br&amp;gt; &lt;br /&gt;
[[Media:Exercises_5-6.pdf| Exercises 5-6 on the random energy model]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 3: &lt;br /&gt;
[[Media:Exercises 7&amp;amp;8.pdf| Exercises 7-8 on interfaces]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 4: &lt;br /&gt;
[[Media:Exercises 9-10.pdf| Exercises 9-10 on glassiness]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 5 and 6: &lt;br /&gt;
[[Media:11-12_Exercises.pdf| Exercises 11-12 on dynamics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 7: &lt;br /&gt;
[[Media:Exercises_13_15.pdf| Exercises 13-15 on branching and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 8: &lt;br /&gt;
[[Media:Exercise_16-17.pdf| Exercises 16-17 on trap model and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;&lt;br /&gt;
[[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
&amp;lt;/li&amp;gt; --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Evaluation and exam =&lt;br /&gt;
&lt;br /&gt;
The exam will take place on &#039;&#039;&#039;Monday, March 30th 2026&#039;&#039;&#039;. It will be a written exam, 3 hours long, consisting of two parts:&lt;br /&gt;
&lt;br /&gt;
Part 1: theory questions, see [[Sample questions|HERE]] for some examples. &lt;br /&gt;
&lt;br /&gt;
Part 2: you will be asked to solve pieces of the 17 exercises given to you in advance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You are not allowed to bring any material (printed notes, handwritten notes) nor to use any device during the exam.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
All relevant formulas will be provided in the text of the exam. One printed copy of the WIKI pages will be available for you to consult. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The students have two possibilities:&lt;br /&gt;
&lt;br /&gt;
(1) A final written exam which counts for the total grade.&lt;br /&gt;
&lt;br /&gt;
(2) A homework assignment + a written exam. The final grade is given by a weighted average of the two grades (the homework counts 1/4 and the written exam 3/4).&lt;br /&gt;
&#039;&#039;&#039;The written exam will be on Monday, March 31st 2025 in the Jussieu campus, Room 101, Tours 14 - 24, from 2pm to 5pm.&#039;&#039;&#039;--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4287</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4287"/>
		<updated>2026-03-16T20:30:20Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Lectures and tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the WIKI page of the M2 ICFP course on the Statistical Physics of Disordered Systems, second semester 2026.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Where and When =&lt;br /&gt;
&lt;br /&gt;
* Each Monday, 2 pm to 6 pm, from January 19th to March 23rd. No lecture on 23/02/26.&lt;br /&gt;
* Room 202 in Jussieu campus, Tours 54-55 until 16th February&lt;br /&gt;
* Room 209 in Jussieu campus, Tours 56-66 from 2nd March  &#039;&#039;&#039;Attention: ROOM CHANGE!&#039;&#039;&#039;&lt;br /&gt;
* Each session is a mixture of lectures and exercises.&lt;br /&gt;
&lt;br /&gt;
= The Team =&lt;br /&gt;
&lt;br /&gt;
*  [https://vale1925.wixsite.com/vros  Valentina Ros] - vale1925@gmail.com&lt;br /&gt;
* [http://lptms.u-psud.fr/alberto_rosso/ Alberto Rosso] - alberto.rosso74@gmail.com&lt;br /&gt;
&lt;br /&gt;
= Course description =&lt;br /&gt;
&lt;br /&gt;
This course deals with systems in which the presence of impurities or amorphous structures (in other words, of disorder) radically influences the physics, generating novel phenomena. These phenomena involve the equilibrium properties of the system (freezing and glass transitions) as well as its out-of-equilibrium dynamical evolution (pinning, avalanches), giving rise to ergodicity breaking both in the absence and in the presence of quantum fluctuations (classical metastability, quantum localization). We discuss the main statistical-physics models that capture the phenomenology of these systems, as well as the powerful theoretical tools (replica theory, large deviations, random matrix theory, scaling arguments, strong-disorder expansions) developed to characterize their physics quantitatively. These tools nowadays have a huge impact in a variety of fields well beyond statistical physics (computer science, probability, condensed matter, theoretical biology). Below is a list of topics discussed during the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Finite-dimensional disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  Introduction to disordered systems and to the spin glass transition.&lt;br /&gt;
*  Interface growth. Directed polymers in random media. Scenarios for the glass transition: the glass transition in KPZ in d&amp;gt;2. &lt;br /&gt;
*  Depinning and avalanches. Bienaymé-Galton-Watson processes. &lt;br /&gt;
*  Anderson localization: introduction. Localization in 1D: transfer matrix and Lyapunov exponent. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mean-field disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  The simplest spin-glass: solution of the Random Energy Model.&lt;br /&gt;
*  The replica method: the solution of the spherical p-spin model. Sketch of the solution of Sherrington Kirkpatrick model (full RSB).&lt;br /&gt;
*  Towards glassy dynamics: rugged landscapes. Slow dynamics and aging: the trap model.&lt;br /&gt;
*  The Anderson model on the Bethe lattice: the mobility edge.&lt;br /&gt;
&lt;br /&gt;
=Lectures and tutorials=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the layout of the formulas is bad, it might be because you are using Safari. Try opening the wiki with Firefox or Chrome.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  14h00-15h45&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  16h00-17h45&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 1 (19/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L1_ICTS| Spin Glass Transition (Alberto)]]&lt;br /&gt;
&amp;lt;!--[[H_1|Exercises on Extreme Values Statistics]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-I|  A dictionary. The REM: energy landscape (Valentina)]]&amp;amp;nbsp; [[Media:2025 P1 solutions.pdf| Sol Prob.1 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 2 (26/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L2_ICFP| Stochastic Interfaces and growth (Alberto)]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-2|The REM: freezing, condensation, glassiness (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P2 solutions.pdf| Sol Prob.2 ]]  &lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 3 (02/02)&lt;br /&gt;
|&lt;br /&gt;
* [[L-3|Directed polymer in random media (Alberto)]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-4| KPZ and glassiness in finite dimension (Alberto)]]  [https://colab.research.google.com/drive/1PTya42ZS2kU87A-BxQFFIUDTs_k47men?usp=sharing notebook]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 4 (9/02) and Week 5 (16/02)&lt;br /&gt;
|&lt;br /&gt;
* [[T-3| Spin glasses, equilibrium: replicas, the steps (Valentina)]]&amp;amp;nbsp;  [[Media:2025 P3 solutions.pdf| Sol Prob.3 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-4| Spin glasses, equilibrium: replicas, the interpretation (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P4 solutions.pdf| Sol Probs. 4 ]]&amp;amp;nbsp; &lt;br /&gt;
[[Media:2025_Parisi_scheme.pdf| Notes: Probing states with replicas]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 6 (02/03)&lt;br /&gt;
|&lt;br /&gt;
* [[LBan-IV| Driven Disordered Materials  (Alberto)]] [[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[LBan-V| Avalanches in Disordered Materials (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 7 (9/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-7| Anderson localization: introduction (Alberto)]]&lt;br /&gt;
| &lt;br /&gt;
* [[T-5| Rugged landscapes: counting metastable states (Valentina)]] &amp;amp;nbsp;[[Media:2025 P5 solutions.pdf| Sol Prob.5 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 8 (16/03)&lt;br /&gt;
|&lt;br /&gt;
* [[T-6| Rugged landscapes: stability of metastable states (Valentina)]] [[Media:2025 P666 solutions .pdf| Sol Prob.6 ]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-7| Trap model and aging dynamics (Valentina)]] [[Media:2025 P7 solutions .pdf| Sol Prob.7 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 9 (23/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-8| Localization in 1D, transfer matrix and Lyapunov exponent (Alberto)]]  [https://colab.research.google.com/drive/1ZJ0yvMrtflWNNmPfaRQ8KTteoWfm0bqk?usp=sharing notebook]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-9|Multifractality, tails (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Extra (not in exam!)&lt;br /&gt;
|&lt;br /&gt;
* [[T-8| Localization on Bethe lattice: cavity &amp;amp; recursion (Valentina)]] [[Media:2025 P8 solutions.pdf| Sol Prob.8 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-9| Localization on Bethe lattice: stability &amp;amp; mobility edge (Valentina)]] [[Media:2025 P9 solutions.pdf| Sol Prob.9 ]]&amp;amp;nbsp;&lt;br /&gt;
&amp;lt;!--[[Media:2025_localization_notes.pdf| Notes: Localization: no dissipation, no self-bath]]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--== Homework ==&lt;br /&gt;
There are two homeworks: Homework 1 on Random Matrices, and Homework 2 on the topics of the lecture. &lt;br /&gt;
[[Media:RMT_introduction.pdf| Homework 1 on Random Matrices ]]&lt;br /&gt;
[[Media:2025_HW2.pdf| Homework 2 on topics of lectures ]]&lt;br /&gt;
Homework 1 is worth 5 points, Homework 2 is worth 15 points.&lt;br /&gt;
&#039;&#039;&#039;In the final grade calculation, the total score from both assignments will have a weight of 0.25, while the exam will account for 0.75.&#039;&#039;&#039;&lt;br /&gt;
Homework 1 due by Monday, February 17th.&lt;br /&gt;
Homework 2 due by Monday, March 24th.&lt;br /&gt;
&#039;&#039;&#039;Extra&#039;&#039;&#039;&lt;br /&gt;
Here is a notebook on random matrices (made by M. Biroli) with two coding exercises. You can download the notebook from the link below, and use the online platform: [https://jupyter.org/try-jupyter/lab/  Jupyter] to modify the notebook and add the solutions to the two exercises.&lt;br /&gt;
[https://colab.research.google.com/drive/13z_RnRlCq5p3ihDQulOPftqb05nsrTqQ?usp=sharing| Homework 1: notebook]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 1: [[Media:Exercises_1-3.pdf| Exercises 1-3 on extreme value statistics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 2: &lt;br /&gt;
[[Media:Tutorial_and_Exercise_4.pdf| Tutorial and Exercise 4 on random matrices]] &amp;lt;br&amp;gt; &lt;br /&gt;
[[Media:Exercises_5-6.pdf| Exercises 5-6 on the random energy model]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 3: &lt;br /&gt;
[[Media:Exercises 7&amp;amp;8.pdf| Exercises 7-8 on interfaces]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 4: &lt;br /&gt;
[[Media:Exercises 9-10.pdf| Exercises 9-10 on glassiness]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 5 and 6: &lt;br /&gt;
[[Media:11-12_Exercises.pdf| Exercises 11-12 on dynamics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 7: &lt;br /&gt;
[[Media:Exercises_13_15.pdf| Exercises 13-15 on branching and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 8: &lt;br /&gt;
[[Media:Exercise_16-17.pdf| Exercises 16-17 on trap model and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;&lt;br /&gt;
[[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
&amp;lt;/li&amp;gt; --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Evaluation and exam =&lt;br /&gt;
&lt;br /&gt;
The exam will take place on &#039;&#039;&#039;Monday, March 30th 2026&#039;&#039;&#039;. It will be a written exam, 3 hours long, consisting of two parts:&lt;br /&gt;
&lt;br /&gt;
Part 1: theory questions, see [[Sample questions|HERE]] for some examples. &lt;br /&gt;
&lt;br /&gt;
Part 2: you will be asked to solve pieces of the 17 exercises given to you in advance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You are not allowed to bring any material (printed notes, handwritten notes) nor to use any device during the exam.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
All relevant formulas will be provided in the text of the exam. One printed copy of the WIKI pages will be available for you to consult. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The students have two possibilities:&lt;br /&gt;
&lt;br /&gt;
(1) A final written exam which counts for the total grade.&lt;br /&gt;
&lt;br /&gt;
(2) A homework assignment + a written exam. The final grade is given by a weighted average of the two grades (the homework counts 1/4 and the written exam 3/4).&lt;br /&gt;
&#039;&#039;&#039;The written exam will be on Monday, March 31st 2025 in the Jussieu campus, Room 101, Tours 14 - 24, from 2pm to 5pm.&#039;&#039;&#039;--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4286</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4286"/>
		<updated>2026-03-16T20:29:43Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Lectures and tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the WIKI page of the M2 ICFP course on the Statistical Physics of Disordered Systems, second semester 2026.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Where and When =&lt;br /&gt;
&lt;br /&gt;
* Each Monday, 2 pm to 6 pm, from January 19th to March 23rd. No lecture on 23/02/26.&lt;br /&gt;
* Room 202 in Jussieu campus, Tours 54-55 until 16th February&lt;br /&gt;
* Room 209 in Jussieu campus, Tours 56-66 from 2nd March  &#039;&#039;&#039;Attention: ROOM CHANGE!&#039;&#039;&#039;&lt;br /&gt;
* Each session is a mixture of lectures and exercises.&lt;br /&gt;
&lt;br /&gt;
= The Team =&lt;br /&gt;
&lt;br /&gt;
*  [https://vale1925.wixsite.com/vros  Valentina Ros] - vale1925@gmail.com&lt;br /&gt;
* [http://lptms.u-psud.fr/alberto_rosso/ Alberto Rosso] - alberto.rosso74@gmail.com&lt;br /&gt;
&lt;br /&gt;
= Course description =&lt;br /&gt;
&lt;br /&gt;
This course deals with systems in which the presence of impurities or amorphous structures (in other words, of disorder) radically influences the physics, generating novel phenomena. These phenomena involve the equilibrium properties of the system (freezing and glass transitions) as well as its out-of-equilibrium dynamical evolution (pinning, avalanches), giving rise to ergodicity breaking both in the absence and in the presence of quantum fluctuations (classical metastability, quantum localization). We discuss the main statistical-physics models that capture the phenomenology of these systems, as well as the powerful theoretical tools (replica theory, large deviations, random matrix theory, scaling arguments, strong-disorder expansions) developed to characterize their physics quantitatively. These tools nowadays have a huge impact in a variety of fields well beyond statistical physics (computer science, probability, condensed matter, theoretical biology). Below is a list of topics discussed during the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Finite-dimensional disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  Introduction to disordered systems and to the spin glass transition.&lt;br /&gt;
*  Interface growth. Directed polymers in random media. Scenarios for the glass transition: the glass transition in KPZ in d&amp;gt;2. &lt;br /&gt;
*  Depinning and avalanches. Bienaymé-Galton-Watson processes. &lt;br /&gt;
*  Anderson localization: introduction. Localization in 1D: transfer matrix and Lyapunov exponent. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mean-field disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  The simplest spin-glass: solution of the Random Energy Model.&lt;br /&gt;
*  The replica method: the solution of the spherical p-spin model. Sketch of the solution of Sherrington Kirkpatrick model (full RSB).&lt;br /&gt;
*  Towards glassy dynamics: rugged landscapes. Slow dynamics and aging: the trap model.&lt;br /&gt;
*  The Anderson model on the Bethe lattice: the mobility edge.&lt;br /&gt;
&lt;br /&gt;
=Lectures and tutorials=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the layout of the formulas is bad, it might be because you are using Safari. Try opening the wiki with Firefox or Chrome.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  14h00-15h45&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  16h00-17h45&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 1 (19/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L1_ICTS| Spin Glass Transition (Alberto)]]&lt;br /&gt;
&amp;lt;!--[[H_1|Exercises on Extreme Values Statistics]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-I|  A dictionary. The REM: energy landscape (Valentina)]]&amp;amp;nbsp; [[Media:2025 P1 solutions.pdf| Sol Prob.1 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 2 (26/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L2_ICFP| Stochastic Interfaces and growth (Alberto)]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-2|The REM: freezing, condensation, glassiness (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P2 solutions.pdf| Sol Prob.2 ]]  &lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 3 (02/02)&lt;br /&gt;
|&lt;br /&gt;
* [[L-3|Directed polymer in random media (Alberto)]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-4| KPZ and glassiness in finite dimension (Alberto)]]  [https://colab.research.google.com/drive/1PTya42ZS2kU87A-BxQFFIUDTs_k47men?usp=sharing notebook]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 4 (9/02) and Week 5 (16/02)&lt;br /&gt;
|&lt;br /&gt;
* [[T-3| Spin glasses, equilibrium: replicas, the steps (Valentina)]]&amp;amp;nbsp;  [[Media:2025 P3 solutions.pdf| Sol Prob.3 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-4| Spin glasses, equilibrium: replicas, the interpretation (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P4 solutions.pdf| Sol Probs. 4 ]]&amp;amp;nbsp; &lt;br /&gt;
[[Media:2025_Parisi_scheme.pdf| Notes: Probing states with replicas]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 6 (02/03)&lt;br /&gt;
|&lt;br /&gt;
* [[LBan-IV| Driven Disordered Materials  (Alberto)]] [[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[LBan-V| Avalanches in Disordered Materials (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 7 (9/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-7| Anderson localization: introduction (Alberto)]]&lt;br /&gt;
| &lt;br /&gt;
* [[T-5| Rugged landscapes: counting metastable states (Valentina)]] &amp;amp;nbsp;[[Media:2025 P5 solutions.pdf| Sol Prob.5 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 8 (16/03)&lt;br /&gt;
|&lt;br /&gt;
* [[T-6| Rugged landscapes: stability of metastable states (Valentina)]] [[Media:2025 P666 solutions .pdf| Sol Prob.6 ]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-7| Trap model and aging dynamics (Valentina)]] [[Media:2025 P7 solutions .pdf| Sol Prob.7 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 9 (23/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-8| Localization in 1D, transfer matrix and Lyapunov exponent (Alberto)]]  [https://colab.research.google.com/drive/1ZJ0yvMrtflWNNmPfaRQ8KTteoWfm0bqk?usp=sharing notebook]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-9|Multifractality, tails (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Extra (not in exam!)&lt;br /&gt;
|&lt;br /&gt;
* [[T-8| Localization on Bethe lattice: cavity &amp;amp; recursion (Valentina)]] [[Media:2025 P8 solutions.pdf| Sol Prob.8 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-9| Localization on Bethe lattice: stability &amp;amp; mobility edge (Valentina)]] [[Media:2025 P9 solutions.pdf| Sol Prob.9 ]]&amp;amp;nbsp;&lt;br /&gt;
[[Media:2025_localization_notes.pdf| Notes: Localization: no dissipation, no self-bath]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--== Homework ==&lt;br /&gt;
There are two homeworks: Homework 1 on Random Matrices, and Homework 2 on the topics of the lecture. &lt;br /&gt;
[[Media:RMT_introduction.pdf| Homework 1 on Random Matrices ]]&lt;br /&gt;
[[Media:2025_HW2.pdf| Homework 2 on topics of lectures ]]&lt;br /&gt;
Homework 1 is worth 5 points, Homework 2 is worth 15 points.&lt;br /&gt;
&#039;&#039;&#039;In the final grade calculation, the total score from both assignments will have a weight of 0.25, while the exam will account for 0.75.&#039;&#039;&#039;&lt;br /&gt;
Homework 1 due by Monday, February 17th.&lt;br /&gt;
Homework 2 due by Monday, March 24th.&lt;br /&gt;
&#039;&#039;&#039;Extra&#039;&#039;&#039;&lt;br /&gt;
Here is a notebook on random matrices (made by M. Biroli) with two coding exercises. You can download the notebook from the link below, and use the online platform: [https://jupyter.org/try-jupyter/lab/  Jupyter] to modify the notebook and add the solutions to the two exercises.&lt;br /&gt;
[https://colab.research.google.com/drive/13z_RnRlCq5p3ihDQulOPftqb05nsrTqQ?usp=sharing| Homework 1: notebook]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 1: [[Media:Exercises_1-3.pdf| Exercises 1-3 on extreme value statistics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 2: &lt;br /&gt;
[[Media:Tutorial_and_Exercise_4.pdf| Tutorial and Exercise 4 on random matrices]] &amp;lt;br&amp;gt; &lt;br /&gt;
[[Media:Exercises_5-6.pdf| Exercises 5-6 on the random energy model]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 3: &lt;br /&gt;
[[Media:Exercises 7&amp;amp;8.pdf| Exercises 7-8 on interfaces]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 4: &lt;br /&gt;
[[Media:Exercises 9-10.pdf| Exercises 9-10 on glassiness]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 5 and 6: &lt;br /&gt;
[[Media:11-12_Exercises.pdf| Exercises 11-12 on dynamics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 7: &lt;br /&gt;
[[Media:Exercises_13_15.pdf| Exercises 13-15 on branching and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 8: &lt;br /&gt;
[[Media:Exercise_16-17.pdf| Exercises 16-17 on trap model and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;&lt;br /&gt;
[[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
&amp;lt;/li&amp;gt; --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Evaluation and exam =&lt;br /&gt;
&lt;br /&gt;
The exam will take place on &#039;&#039;&#039;Monday, March 30th 2026&#039;&#039;&#039;. It will be a written exam, 3 hours long, consisting of two parts:&lt;br /&gt;
&lt;br /&gt;
Part 1: theory questions, see [[Sample questions|HERE]] for some examples. &lt;br /&gt;
&lt;br /&gt;
Part 2: you will be asked to solve pieces of the 17 exercises given to you in advance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You are not allowed to bring any material (printed notes, handwritten notes) nor to use any device during the exam.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
All relevant formulas will be provided in the text of the exam. One printed copy of the WIKI pages will be available for you to consult. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The students have two possibilities:&lt;br /&gt;
&lt;br /&gt;
(1) A final written exam which counts for the total grade.&lt;br /&gt;
&lt;br /&gt;
(2) A homework assignment + a written exam. The final grade is given by a weighted average of the two grades (the homework counts 1/4 and the written exam 3/4).&lt;br /&gt;
&#039;&#039;&#039;The written exam will be on Monday, March 31st 2025 in the Jussieu campus, Room 101, Tours 14 - 24, from 2pm to 5pm.&#039;&#039;&#039;--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=File:2025_P8_solutions.pdf&amp;diff=4285</id>
		<title>File:2025 P8 solutions.pdf</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=File:2025_P8_solutions.pdf&amp;diff=4285"/>
		<updated>2026-03-16T20:29:00Z</updated>

		<summary type="html">&lt;p&gt;Ros: Ros uploaded a new version of File:2025 P8 solutions.pdf&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=File:2025_P9_solutions.pdf&amp;diff=4284</id>
		<title>File:2025 P9 solutions.pdf</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=File:2025_P9_solutions.pdf&amp;diff=4284"/>
		<updated>2026-03-16T20:28:13Z</updated>

		<summary type="html">&lt;p&gt;Ros: Ros uploaded a new version of File:2025 P9 solutions.pdf&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4283</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4283"/>
		<updated>2026-03-16T20:27:31Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Lectures and tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the WIKI page of the M2 ICFP course on the Statistical Physics of Disordered Systems, second semester 2026.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Where and When =&lt;br /&gt;
&lt;br /&gt;
* Each Monday, 2 pm to 6 pm, from January 19th to March 23rd. No lecture on 23/02/26.&lt;br /&gt;
* Room 202 in Jussieu campus, Tours 54-55 until 16th February&lt;br /&gt;
* Room 209 in Jussieu campus, Tours 56-66 from 2nd March  &#039;&#039;&#039;Attention: ROOM CHANGE!&#039;&#039;&#039;&lt;br /&gt;
* Each session is a mixture of lectures and exercises.&lt;br /&gt;
&lt;br /&gt;
= The Team =&lt;br /&gt;
&lt;br /&gt;
*  [https://vale1925.wixsite.com/vros  Valentina Ros] - vale1925@gmail.com&lt;br /&gt;
* [http://lptms.u-psud.fr/alberto_rosso/ Alberto Rosso] - alberto.rosso74@gmail.com&lt;br /&gt;
&lt;br /&gt;
= Course description =&lt;br /&gt;
&lt;br /&gt;
This course deals with systems in which the presence of impurities or amorphous structures (in other words, of disorder) radically influences the physics, generating novel phenomena. These phenomena involve the equilibrium properties of the system (freezing and glass transitions) as well as its out-of-equilibrium dynamical evolution (pinning, avalanches), giving rise to ergodicity breaking both in the absence and in the presence of quantum fluctuations (classical metastability, quantum localization). We discuss the main statistical-physics models that capture the phenomenology of these systems, as well as the powerful theoretical tools (replica theory, large deviations, random matrix theory, scaling arguments, strong-disorder expansions) developed to characterize their physics quantitatively. These tools nowadays have a huge impact in a variety of fields well beyond statistical physics (computer science, probability, condensed matter, theoretical biology). Below is a list of topics discussed during the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Finite-dimensional disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  Introduction to disordered systems and to the spin glass transition.&lt;br /&gt;
*  Interface growth. Directed polymers in random media. Scenarios for the glass transition: the glass transition in KPZ in d&amp;gt;2. &lt;br /&gt;
*  Depinning and avalanches. Bienaymé-Galton-Watson processes. &lt;br /&gt;
*  Anderson localization: introduction. Localization in 1D: transfer matrix and Lyapunov exponent. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mean-field disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  The simplest spin-glass: solution of the Random Energy Model.&lt;br /&gt;
*  The replica method: the solution of the spherical p-spin model. Sketch of the solution of Sherrington Kirkpatrick model (full RSB).&lt;br /&gt;
*  Towards glassy dynamics: rugged landscapes. Slow dynamics and aging: the trap model.&lt;br /&gt;
*  The Anderson model on the Bethe lattice: the mobility edge.&lt;br /&gt;
&lt;br /&gt;
=Lectures and tutorials=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the layout of the formulas is bad, it might be because you are using Safari. Try opening the wiki with Firefox or Chrome.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  14h00-15h45&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  16h00-17h45&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 1 (19/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L1_ICTS| Spin Glass Transition (Alberto)]]&lt;br /&gt;
&amp;lt;!--[[H_1|Exercises on Extreme Values Statistics]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-I|  A dictionary. The REM: energy landscape (Valentina)]]&amp;amp;nbsp; [[Media:2025 P1 solutions.pdf| Sol Prob.1 ]]&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 2 (26/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L2_ICFP| Stochastic Interfaces and growth (Alberto)]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-2|The REM: freezing, condensation, glassiness (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P2 solutions.pdf| Sol Prob.2 ]]  &lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 3 (02/02)&lt;br /&gt;
|&lt;br /&gt;
* [[L-3|Directed polymer in random media (Alberto)]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-4| KPZ and glassiness in finite dimension (Alberto)]]  [https://colab.research.google.com/drive/1PTya42ZS2kU87A-BxQFFIUDTs_k47men?usp=sharing notebook]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 4 (9/02) and Week 5 (16/02)&lt;br /&gt;
|&lt;br /&gt;
* [[T-3| Spin glasses, equilibrium: replicas, the steps (Valentina)]]&amp;amp;nbsp;  [[Media:2025 P3 solutions.pdf| Sol Prob.3 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-4| Spin glasses, equilibrium: replicas, the interpretation (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P4 solutions.pdf| Sol Probs. 4 ]]&amp;amp;nbsp; &lt;br /&gt;
[[Media:2025_Parisi_scheme.pdf| Notes: Probing states with replicas]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 6 (02/03)&lt;br /&gt;
|&lt;br /&gt;
* [[LBan-IV| Driven Disordered Materials  (Alberto)]] [[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[LBan-V| Avalanches in Disordered Materials (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 7 (9/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-7| Anderson localization: introduction (Alberto)]]&lt;br /&gt;
| &lt;br /&gt;
* [[T-5| Rugged landscapes: counting metastable states (Valentina)]] &amp;amp;nbsp;[[Media:2025 P5 solutions.pdf| Sol Prob.5 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 8 (16/03)&lt;br /&gt;
|&lt;br /&gt;
* [[T-6| Rugged landscapes: stability of metastable states (Valentina)]] [[Media:2025 P666 solutions .pdf| Sol Prob.6 ]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-7| Trap model and aging dynamics (Valentina)]] [[Media:2025 P7 solutions .pdf| Sol Prob.7 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 9 (23/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-8| Localization in 1D, transfer matrix and Lyapunov exponent (Alberto)]]  [https://colab.research.google.com/drive/1ZJ0yvMrtflWNNmPfaRQ8KTteoWfm0bqk?usp=sharing notebook]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-9|Multifractality, tails (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Extra&lt;br /&gt;
|&lt;br /&gt;
* [[T-8| Localization on Bethe lattice: cavity &amp;amp; recursion (Valentina)]] &amp;lt;!--[[Media:2025 P8 solutions.pdf| Sol Prob.8 ]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-9| Localization on Bethe lattice: stability &amp;amp; mobility edge (Valentina)]] &amp;lt;!--[[Media:2025 P9 solutions.pdf| Sol Prob.9 ]]&amp;amp;nbsp;&lt;br /&gt;
[[Media:2025_localization_notes.pdf| Notes: Localization: no dissipation, no self-bath]]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--== Homework ==&lt;br /&gt;
There are two homeworks: Homework 1 on Random Matrices, and Homework 2 on the topics of the lecture. &lt;br /&gt;
[[Media:RMT_introduction.pdf| Homework 1 on Random Matrices ]]&lt;br /&gt;
[[Media:2025_HW2.pdf| Homework 2 on topics of lectures ]]&lt;br /&gt;
Homework 1 is worth 5 points, Homework 2 is worth 15 points.&lt;br /&gt;
&#039;&#039;&#039;In the final grade calculation, the total score from both assignments will have a weight of 0.25, while the exam will account for 0.75.&#039;&#039;&#039;&lt;br /&gt;
Homework 1 due by Monday, February 17th.&lt;br /&gt;
Homework 2 due by Monday, March 24th.&lt;br /&gt;
&#039;&#039;&#039;Extra&#039;&#039;&#039;&lt;br /&gt;
Here is a notebook on random matrices (made by M. Biroli) with two coding exercises. You can download the notebook from the link below, and use the online platform: [https://jupyter.org/try-jupyter/lab/  Jupyter] to modify the notebook and add the solutions to the two exercises.&lt;br /&gt;
[https://colab.research.google.com/drive/13z_RnRlCq5p3ihDQulOPftqb05nsrTqQ?usp=sharing Homework 1: notebook]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 1: [[Media:Exercises_1-3.pdf| Exercises 1-3 on extreme value statistics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 2: &lt;br /&gt;
[[Media:Tutorial_and_Exercise_4.pdf| Tutorial and Exercise 4 on random matrices]] &amp;lt;br&amp;gt; &lt;br /&gt;
[[Media:Exercises_5-6.pdf| Exercises 5-6 on the random energy model]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 3: &lt;br /&gt;
[[Media:Exercises 7&amp;amp;8.pdf| Exercises 7-8 on interfaces]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 4: &lt;br /&gt;
[[Media:Exercises 9-10.pdf| Exercises 9-10 on glassiness]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 5 and 6: &lt;br /&gt;
[[Media:11-12_Exercises.pdf| Exercises 11-12 on dynamics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 7: &lt;br /&gt;
[[Media:Exercises_13_15.pdf| Exercises 13-15 on branching and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 8: &lt;br /&gt;
[[Media:Exercise_16-17.pdf| Exercises 16-17 on trap model and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;&lt;br /&gt;
[[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
&amp;lt;/li&amp;gt; --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Evaluation and exam =&lt;br /&gt;
&lt;br /&gt;
The exam will be on &#039;&#039;&#039;Monday, March 30th 2026&#039;&#039;&#039;. It will be a written exam, 3 hours long. It consists of two parts:&lt;br /&gt;
&lt;br /&gt;
Part 1: theory questions, see [[Sample questions|HERE]] for some examples. &lt;br /&gt;
&lt;br /&gt;
Part 2: you will be asked to solve pieces of the 17 exercises given to you in advance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You are not allowed to bring any material (printed notes, handwritten notes) nor to use any device during the exam.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
All relevant formulas will be provided in the text of the exam. One printed copy of the WIKI pages will be available for you to consult. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The students have two possibilities:&lt;br /&gt;
&lt;br /&gt;
(1) A final written exam which counts for the total grade.&lt;br /&gt;
&lt;br /&gt;
(2) An homework assignement  + a written exam. The final grade is given by a weighted average of the two grades (the homework counts 1/4 and the written exam 3/4).&lt;br /&gt;
&#039;&#039;&#039;The written exam will be on Monday, March 31st 2025 in the Jussieu campus, Room 101, Tours 14 - 24, from 2pm to 5pm.&#039;&#039;&#039;--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4282</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4282"/>
		<updated>2026-03-16T20:26:53Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Lectures and tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the WIKI page of the M2 ICFP course on the Statistical Physics of Disordered Systems, second semester 2026.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Where and When =&lt;br /&gt;
&lt;br /&gt;
* Each Monday, 2 pm to 6 pm, from January 19th to March 23rd. No lecture on 23/02/26.&lt;br /&gt;
* Room 202 in Jussieu campus, Tours 54-55 until 16th February&lt;br /&gt;
* Room 209 in Jussieu campus, Tours 56-66 from 2nd March  &#039;&#039;&#039;Attention: ROOM CHANGE!&#039;&#039;&#039;&lt;br /&gt;
* Each session is a mixture of lectures and exercises.&lt;br /&gt;
&lt;br /&gt;
= The Team =&lt;br /&gt;
&lt;br /&gt;
*  [https://vale1925.wixsite.com/vros  Valentina Ros] - vale1925@gmail.com&lt;br /&gt;
* [http://lptms.u-psud.fr/alberto_rosso/ Alberto Rosso] - alberto.rosso74@gmail.com&lt;br /&gt;
&lt;br /&gt;
= Course description =&lt;br /&gt;
&lt;br /&gt;
This course deals with systems in which the presence of impurities or amorphous structures (in other words, of disorder) radically influences the physics, generating novel phenomena. These phenomena involve the properties of the system at equilibrium (freezing and glass transitions), as well as its dynamical evolution out of equilibrium (pinning, avalanches), giving rise to ergodicity breaking both in the absence and in the presence of quantum fluctuations (classical metastability, quantum localization). We discuss the main statistical physics models that capture the phenomenology of these systems, as well as the powerful theoretical tools (replica theory, large deviations, random matrix theory, scaling arguments, strong-disorder expansions) that have been developed to characterize their physics quantitatively. These tools nowadays have a huge impact in a variety of fields that go well beyond statistical physics (computer science, probability, condensed matter, theoretical biology). Below is a list of topics discussed during the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Finite-dimensional disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  Introduction to disordered systems and to the spin glass transition.&lt;br /&gt;
*  Interface growth. Directed polymers in random media. Scenarios for the glass transition: the glass transition in KPZ in d&amp;gt;2. &lt;br /&gt;
*  Depinning and avalanches. Bienaymé-Galton-Watson processes. &lt;br /&gt;
*  Anderson localization: introduction. Localization in 1D: transfer matrix and Lyapunov exponent. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mean-field disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  The simplest spin-glass: solution of the Random Energy Model.&lt;br /&gt;
*  The replica method: the solution of the spherical p-spin model. Sketch of the solution of the Sherrington-Kirkpatrick model (full RSB).&lt;br /&gt;
*  Towards glassy dynamics: rugged landscapes. Slow dynamics and aging: the trap model.&lt;br /&gt;
*  The Anderson model on the Bethe lattice: the mobility edge.&lt;br /&gt;
&lt;br /&gt;
=Lectures and tutorials=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the layout of the formulas is bad, it might be because you are using Safari. Try opening the wiki with Firefox or Chrome.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  14h00-15h45&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  16h00-17h45&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 1 (19/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L1_ICTS| Spin Glass Transition (Alberto)]]&lt;br /&gt;
&amp;lt;!--[[H_1|Exercises on Extreme Values Statistics]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-I|  A dictionary. The REM: energy landscape (Valentina)]]&amp;amp;nbsp; [[Media:2025 P1 solutions.pdf| Sol Prob.1 ]]&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 2 (26/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L2_ICFP| Stochastic Interfaces and growth (Alberto)]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-2|The REM: freezing, condensation, glassiness (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P2 solutions.pdf| Sol Prob.2 ]]  &lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 3 (02/02)&lt;br /&gt;
|&lt;br /&gt;
* [[L-3|Directed polymer in random media (Alberto)]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-4| KPZ and glassiness in finite dimension (Alberto)]]  [https://colab.research.google.com/drive/1PTya42ZS2kU87A-BxQFFIUDTs_k47men?usp=sharing notebook]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 4 (9/02) and Week 5 (16/02)&lt;br /&gt;
|&lt;br /&gt;
* [[T-3| Spin glasses, equilibrium: replicas, the steps (Valentina)]]&amp;amp;nbsp;  [[Media:2025 P3 solutions.pdf| Sol Prob.3 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-4| Spin glasses, equilibrium: replicas, the interpretation (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P4 solutions.pdf| Sol Probs. 4 ]]&amp;amp;nbsp; &lt;br /&gt;
[[Media:2025_Parisi_scheme.pdf| Notes: Probing states with replicas]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 6 (02/03)&lt;br /&gt;
|&lt;br /&gt;
* [[LBan-IV| Driven Disordered Materials  (Alberto)]] [[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[LBan-V| Avalanches in Disordered Materials (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 7 (9/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-7| Anderson localization: introduction (Alberto)]]&lt;br /&gt;
| &lt;br /&gt;
* [[T-5| Rugged landscapes: counting metastable states (Valentina)]] &amp;amp;nbsp;[[Media:2025 P5 solutions.pdf| Sol Prob.5 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 8 (16/03)&lt;br /&gt;
|&lt;br /&gt;
* [[T-6| Rugged landscapes: stability of metastable states (Valentina)]] [[Media:2025 P666 solutions .pdf| Sol Prob.6 ]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-7| Trap model and aging dynamics (Valentina)]] [[Media:2025 P7 solutions .pdf| Sol Prob.7 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 9 (23/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-8| Localization in 1D, transfer matrix and Lyapunov exponent (Alberto)]]  [https://colab.research.google.com/drive/1ZJ0yvMrtflWNNmPfaRQ8KTteoWfm0bqk?usp=sharing notebook]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-9|Multifractality, tails (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Extra&lt;br /&gt;
|&lt;br /&gt;
* [[T-8| Localization on Bethe lattice: cavity &amp;amp; recursion (Valentina)]] &amp;lt;!--[[Media:2025 P8 solutions.pdf| Sol Prob.8 ]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-9| Localization on Bethe lattice: stability &amp;amp; mobility edge (Valentina)]] &amp;lt;!--[[Media:2025 P9 solutions.pdf| Sol Prob.9 ]]&amp;amp;nbsp;&lt;br /&gt;
[[Media:2025_localization_notes.pdf| Notes: Localization: no dissipation, no self-bath]]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--== Homework ==&lt;br /&gt;
There are two homeworks: Homework 1 on Random Matrices, and Homework 2 on the topics of the lecture. &lt;br /&gt;
[[Media:RMT_introduction.pdf| Homework 1 on Random Matrices ]]&lt;br /&gt;
[[Media:2025_HW2.pdf| Homework 2 on topics of lectures ]]&lt;br /&gt;
Homework 1 is worth 5 points, Homework 2 is worth 15 points.&lt;br /&gt;
&#039;&#039;&#039;In the final grade calculation, the total score from both assignments will have a weight of 0.25, while the exam will account for 0.75.&#039;&#039;&#039;&lt;br /&gt;
Homework 1 due by Monday, February 17th.&lt;br /&gt;
Homework 2 due by Monday, March 24th.&lt;br /&gt;
&#039;&#039;&#039;Extra&#039;&#039;&#039;&lt;br /&gt;
Here is a notebook on random matrices (made by M. Biroli) with two coding exercises. You can download the notebook from the link below, and use the online platform: [https://jupyter.org/try-jupyter/lab/  Jupyter] to modify the notebook and add the solutions to the two exercises.&lt;br /&gt;
[https://colab.research.google.com/drive/13z_RnRlCq5p3ihDQulOPftqb05nsrTqQ?usp=sharing Homework 1: notebook]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 1: [[Media:Exercises_1-3.pdf| Exercises 1-3 on extreme value statistics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 2: &lt;br /&gt;
[[Media:Tutorial_and_Exercise_4.pdf| Tutorial and Exercise 4 on random matrices]] &amp;lt;br&amp;gt; &lt;br /&gt;
[[Media:Exercises_5-6.pdf| Exercises 5-6 on the random energy model]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 3: &lt;br /&gt;
[[Media:Exercises 7&amp;amp;8.pdf| Exercises 7-8 on interfaces]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 4: &lt;br /&gt;
[[Media:Exercises 9-10.pdf| Exercises 9-10 on glassiness]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 5 and 6: &lt;br /&gt;
[[Media:11-12_Exercises.pdf| Exercises 11-12 on dynamics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 7: &lt;br /&gt;
[[Media:Exercises_13_15.pdf| Exercises 13-15 on branching and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 8: &lt;br /&gt;
[[Media:Exercise_16-17.pdf| Exercises 16-17 on trap model and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;&lt;br /&gt;
[[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
&amp;lt;/li&amp;gt; --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Evaluation and exam =&lt;br /&gt;
&lt;br /&gt;
The exam will be on &#039;&#039;&#039;Monday, March 30th 2026&#039;&#039;&#039;. It will be a written exam, 3 hours long. It consists of two parts:&lt;br /&gt;
&lt;br /&gt;
Part 1: theory questions, see [[Sample questions|HERE]] for some examples. &lt;br /&gt;
&lt;br /&gt;
Part 2: you will be asked to solve pieces of the 17 exercises given to you in advance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You are not allowed to bring any material (printed notes, handwritten notes) nor to use any device during the exam.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
All relevant formulas will be provided in the text of the exam. One printed copy of the WIKI pages will be available for you to consult. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The students have two possibilities:&lt;br /&gt;
&lt;br /&gt;
(1) A final written exam which counts for the total grade.&lt;br /&gt;
&lt;br /&gt;
(2) An homework assignement  + a written exam. The final grade is given by a weighted average of the two grades (the homework counts 1/4 and the written exam 3/4).&lt;br /&gt;
&#039;&#039;&#039;The written exam will be on Monday, March 31st 2025 in the Jussieu campus, Room 101, Tours 14 - 24, from 2pm to 5pm.&#039;&#039;&#039;--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-8&amp;diff=4281</id>
		<title>T-8</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-8&amp;diff=4281"/>
		<updated>2026-03-15T20:51:47Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* A criterion for localization: vanishing decay rate */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal:&amp;lt;/strong&amp;gt;  to derive a criterion for localization on a peculiar lattice, the Bethe lattice. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt;  Green functions, recursion relations, cavity method.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A criterion for localization: vanishing decay rate==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; Anderson model and return probability. &amp;lt;/strong&amp;gt; Consider a lattice with &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt; sites &amp;lt;math&amp;gt;x &amp;lt;/math&amp;gt;. We call &amp;lt;math&amp;gt; |x \rangle &amp;lt;/math&amp;gt; the wave function completely localised on site &amp;lt;math&amp;gt; x &amp;lt;/math&amp;gt;. The Anderson model has Hamiltonian:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
H= W \sum_{x} V_x |x \rangle \langle x| - \sum_{&amp;lt;x, y&amp;gt;} t_{xy} \left(|x \rangle \langle y|+ |y \rangle \langle x| \right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where the local fields &amp;lt;math&amp;gt; V_x &amp;lt;/math&amp;gt; are random variables, independent and distributed according to some distribution  &amp;lt;math&amp;gt; p(V)&amp;lt;/math&amp;gt;. Usually, the hopping amplitude &amp;lt;math&amp;gt; t_{xy}=t&amp;lt;/math&amp;gt; is chosen to be uniform. &lt;br /&gt;
&lt;br /&gt;
To see whether localization occurs, consider a quantum particle initialised on the site &amp;lt;math&amp;gt;x_0 &amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt; t=0 &amp;lt;/math&amp;gt;. The &amp;lt;ins&amp;gt;return probability amplitude&amp;lt;/ins&amp;gt;, i.e. the probability amplitude to find the particle on the same site at a later time, is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathcal{A}_{x_0}(t)= \theta(t) \langle x_0| e^{-i t H} | x_0 \rangle.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When the system is not localized, the wave function of the particle spreads in time, and the return probability decays. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
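The decay (or persistence) of the return probability can be illustrated numerically. Below is a minimal sketch, not part of the original problem set: it assumes a 1D chain with box-distributed disorder and uniform hopping, with illustrative parameter values, and evaluates the return probability by exact diagonalization.

```python
import numpy as np

# Minimal sketch: 1D Anderson Hamiltonian H = W diag(V) - t (hopping),
# with V uniform in [-1/2, 1/2], and the return probability
# |A_{x0}(t)|^2 computed by exact diagonalization.
rng = np.random.default_rng(0)
N, W, t_hop = 200, 3.0, 1.0
V = rng.uniform(-0.5, 0.5, size=N)
H = W * np.diag(V) - t_hop * (np.eye(N, k=1) + np.eye(N, k=-1))

E, U = np.linalg.eigh(H)   # eigenvalues E_alpha, eigenvectors as columns of U
x0 = N // 2                # particle starts localized on site x0
c = U[x0, :]               # overlaps of the initial state with the eigenstates

def return_prob(time):
    # A_{x0}(t) = sum_alpha |c_alpha|^2 exp(-i E_alpha t)
    amp = np.sum(c**2 * np.exp(-1j * E * time))
    return np.abs(amp) ** 2

# In the localized regime the long-time average stays of order one.
p_long = np.mean([return_prob(T) for T in np.linspace(50, 100, 200)])
```

At strong disorder the long-time average of the return probability saturates at a finite value, while for a delocalized wave packet it would decay towards values of order 1/N.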
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; Green functions and self-energies. &amp;lt;/strong&amp;gt; We introduce two functions on the complex plane: the &amp;lt;ins&amp;gt;Green functions&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt; G_{xy}(z) &amp;lt;/math&amp;gt; and the &amp;lt;ins&amp;gt;local self-energies&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt; \sigma_x(z)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;z=E+ i \eta, \quad \quad \eta&amp;gt;0&amp;lt;/math&amp;gt; belongs to the upper half of the complex plane. The functions are defined by&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G_{xy}(z)= \langle x| \frac{1}{z-H}| y \rangle , \quad \quad G_{x x}(z)= \langle x| \frac{1}{z-H}| x\rangle  = \frac{1}{z- V_x-\sigma_x(z)}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
They are random functions, because the Hamiltonian contains randomness. The Green functions and the self-energies encode properties of the spectrum of the Hamiltonian&amp;lt;sup&amp;gt;[[#Notes|[*] ]]&amp;lt;/sup&amp;gt;, and they are related to the return probability amplitude, which can be written as an inverse Laplace transform:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathcal{A}_{x_0}(t)= \theta(t) \langle x_0| e^{-i t H} | x_0 \rangle =\lim_{\eta \to 0} \int_{-\infty}^{\infty} \frac{ dE}{2 \pi i}e^{-i t (E + i \eta)} G_{x_0 x_0}(z)=\lim_{\eta \to 0} \int_{-\infty+ i \eta}^{\infty+ i \eta}  \frac{ dz}{2 \pi i}\frac{e^{-i t z}}{z- V_{x_0}-\sigma_{x_0}(z)}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; Anderson&#039;s criterion for localization. &amp;lt;/strong&amp;gt; If the self-energy has a non-zero imaginary part (when &amp;lt;math&amp;gt; N \to \infty &amp;lt;/math&amp;gt;):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\sigma_{x_0}(z)= R_{x_0}(z)-i \Gamma_{x_0}(z),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
then one can show (How? [[Media:2025 Self energies, decay rates and all that.pdf| See here for some notes]] ) that &amp;lt;math&amp;gt;\mathcal{A}_{x_0}(t)&amp;lt;/math&amp;gt; decays exponentially,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathcal{A}_{x_0}(t) \sim A(t)e^{-\gamma t}+ B(t), \quad \quad \quad \gamma= \Gamma_{x_0}(V_{x_0}) + O(T^4),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
and the system is not localized: the probability to find it, at &amp;lt;math&amp;gt; t \gg 1&amp;lt;/math&amp;gt;, in the same configuration where it was at &amp;lt;math&amp;gt; t=0 &amp;lt;/math&amp;gt; decays fast. For localization to occur, the imaginary part of &amp;lt;math&amp;gt; \sigma_{x_0}(E+ i\eta)&amp;lt;/math&amp;gt; must go to zero when &amp;lt;math&amp;gt; \eta \to 0&amp;lt;/math&amp;gt;. Given the randomness, this criterion should however be formulated probabilistically:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{\eta \to 0} \lim_{N \to \infty} \mathbb{P}\left(- \Im \sigma_{x_0}(E+i \eta)&amp;gt;0 \right)=0 \quad  \Longrightarrow	 \quad \text{Localization}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Showing that at strong enough disorder this condition is satisfied is the core of Anderson’s 1958 work.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
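The quantities entering this criterion can be probed numerically in a minimal setting. The sketch below (my illustration, not Anderson's calculation) extracts the local self-energy from the exact resolvent of a finite 1D chain, via sigma_x(z) = z - W V_x - 1/G_xx(z), and evaluates the decay rate Gamma_x = -Im sigma_x for two values of the regulator eta; the chain length, disorder strength, and eta values are arbitrary choices.

```python
import numpy as np

# Sketch: extract sigma_x(z) = z - W*V_x - 1/G_xx(z) from the exact
# resolvent of a disordered 1D chain, and evaluate Gamma_x = -Im sigma_x.
rng = np.random.default_rng(1)
N, W, t_hop, E0 = 300, 5.0, 1.0, 0.0
V = rng.uniform(-0.5, 0.5, size=N)
H = W * np.diag(V) - t_hop * (np.eye(N, k=1) + np.eye(N, k=-1))

def gamma_at(x, eta):
    z = E0 + 1j * eta
    G = np.linalg.inv(z * np.eye(N) - H)   # full resolvent (z - H)^{-1}
    sigma = z - W * V[x] - 1.0 / G[x, x]
    return -sigma.imag                      # decay rate, non-negative

# In a localized phase Gamma tracks eta down to zero instead of
# saturating at a finite value; compare two values of the regulator.
g1, g2 = gamma_at(N // 2, 1e-2), gamma_at(N // 2, 1e-4)
```

On the Bethe lattice the same quantity is computed self-consistently through the cavity recursion derived in the problems below, rather than by inverting a matrix.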
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; A dynamical transition. &amp;lt;/strong&amp;gt; &lt;br /&gt;
 Notice that in this criterion, the probability plays the role of an order parameter (like the magnetization in ferromagnets, or the average overlap distribution in spin glasses), and the &amp;lt;ins&amp;gt; imaginary part&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt; \eta &amp;lt;/math&amp;gt; plays the role of a symmetry-breaking field (like the magnetic field in the ferromagnet, or the coupling between replicas in spin glasses). However, the localization transition has nothing to do with equilibrium, i.e., it is not related to a change of structure of the Gibbs-Boltzmann measure; rather, it is a dynamical transition (like depinning!). Pushing the analogy with equilibrium phase transitions, one can say that the localised phase corresponds to the disordered phase (the one in which symmetry is not broken, like the paramagnetic phase). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:89%&amp;quot;&amp;gt;&lt;br /&gt;
: &amp;lt;small&amp;gt;[*]&amp;lt;/small&amp;gt; - For example, the local density of eigenvalues &amp;lt;math&amp;gt;\nu_{x, N}(E)&amp;lt;/math&amp;gt; for a Hamiltonian on a lattice of size &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt; is given by&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\nu_{x,N}(E)=-\frac{1}{\pi}\lim_{\eta \to 0} \Im  G_{xx}(E+ i \eta) = \sum_{\alpha=1}^N |\langle E_\alpha| x\rangle|^2 \delta(E-E_\alpha),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; E_\alpha &amp;lt;/math&amp;gt; are the eigenvalues of the full Hamiltonian &amp;lt;math&amp;gt; H &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; |E_\alpha \rangle &amp;lt;/math&amp;gt; the corresponding eigenstates. When the kinetic (hopping) term in the Hamiltonian vanishes, the local self-energies vanish. They encode how much the energy levels &amp;lt;math&amp;gt; W V_x &amp;lt;/math&amp;gt; (that are the eigenvalues when the hopping vanishes) are shifted by the presence of the kinetic term.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
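The identity in this footnote can be sanity-checked at finite broadening eta, where each delta function becomes a Lorentzian of weight |U[x, alpha]|^2 and the local density integrates to one. A small sketch under that assumption (all parameter values illustrative):

```python
import numpy as np

# Sketch: nu_x(E) = -(1/pi) Im G_xx(E + i eta) as a sum of Lorentzians
# with weights |U[x, alpha]|^2; its integral over E is close to 1.
rng = np.random.default_rng(2)
N, W, eta = 100, 2.0, 0.05
V = rng.uniform(-0.5, 0.5, size=N)
H = W * np.diag(V) - (np.eye(N, k=1) + np.eye(N, k=-1))
E_alpha, U = np.linalg.eigh(H)

def ldos(x, E):
    # G_xx(z) = sum_alpha |U[x, alpha]|^2 / (z - E_alpha)
    G_xx = np.sum(U[x, :] ** 2 / (E + 1j * eta - E_alpha))
    return -G_xx.imag / np.pi

grid = np.linspace(-8.0, 8.0, 4001)
vals = np.array([ldos(0, E) for E in grid])
weight = np.sum(vals) * (grid[1] - grid[0])  # Riemann sum, close to 1
```

The small deficit with respect to 1 comes from the truncated Lorentzian tails and from the finite grid spacing.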
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; Green functions and self-energies. &amp;lt;/strong&amp;gt; Given a lattice with &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt; sites &amp;lt;math&amp;gt;a &amp;lt;/math&amp;gt;, we call &amp;lt;math&amp;gt; |a \rangle &amp;lt;/math&amp;gt; the wave function completely localised in site &amp;lt;math&amp;gt; a &amp;lt;/math&amp;gt;. The Anderson model has Hamiltonian&lt;br /&gt;
&amp;lt;center&amp;gt; &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
H= W \sum_{a} \epsilon_a |a \rangle \langle a| - \sum_{&amp;lt;a, b&amp;gt;} V_{ab} \left(|a \rangle \langle b|+ |b \rangle \langle a| \right)\equiv H_0 + V&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
where the local fields &amp;lt;math&amp;gt; \epsilon_a &amp;lt;/math&amp;gt; are random variables, independent and distributed according to some distribution  &amp;lt;math&amp;gt; p(\epsilon)&amp;lt;/math&amp;gt;.&lt;br /&gt;
We introduce the &amp;lt;ins&amp;gt;Green functions&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt; G_{ab}(z) &amp;lt;/math&amp;gt; and the &amp;lt;ins&amp;gt;local self-energies&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt; \sigma_a(z):&amp;lt;/math&amp;gt; these are functions of a complex variable belonging to the upper half of the complex plane, &lt;br /&gt;
&amp;lt;center&amp;gt; &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
z=E+ i \eta, \quad \quad \eta&amp;gt;0&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
and are defined by [NOTA SU STILTJIES]&lt;br /&gt;
&amp;lt;center&amp;gt; &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
G_{ab}(z)= \langle a| \frac{1}{z-H}| b \rangle , \quad \quad G_{aa}(z)= \langle a| \frac{1}{z-H}| a\rangle  = \frac{1}{z- \epsilon_a-\sigma_a(z)}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
When the kinetic term &amp;lt;math&amp;gt;V &amp;lt;/math&amp;gt; in the Hamiltonian vanishes, the local self-energies vanish. They are random functions, because the Hamiltonian contains randomness.&lt;br /&gt;
&amp;lt;strong&amp;gt; Self-energies and return probabilities. &amp;lt;/strong&amp;gt; The local self-energies encode some information on the system’s dynamics, and thus on whether localization occurs. Consider a quantum particle initialised on the site &amp;lt;math&amp;gt;a &amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt; t=0 &amp;lt;/math&amp;gt;. The &amp;lt;ins&amp;gt;return probability amplitude &amp;lt;/ins&amp;gt;, i.e. the probability amplitude to find the particle on the same site at later time, is&lt;br /&gt;
&amp;lt;center&amp;gt; &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{A}_a(t)= \theta(t) \langle a| e^{-i t H} | a \rangle =\lim_{\eta \to 0} \int \frac{ dz}{2 \pi i}e^{-i t z} G_{aa}(z)=\lim_{\eta \to 0} \int \frac{ dz}{2 \pi i}e^{-i t z} G_{aa}(z)=\lim_{\eta \to 0} \int \frac{ dz}{2 \pi i}\frac{e^{-i t z}}{z- \epsilon_a-\sigma_a(z)}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 8: Bethe lattice, cavity method &amp;amp; recursion relations ===&lt;br /&gt;
[[File:BetheLattice.png|thumb|right|x160px|A Bethe lattice with &amp;lt;math&amp;gt; k=2 &amp;lt;/math&amp;gt;.]]&lt;br /&gt;
The Bethe lattice is a lattice with a regular tree structure: each node has a fixed number &amp;lt;math&amp;gt; k+1&amp;lt;/math&amp;gt; of neighbours, where &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt; is the branching number, and there are no loops (see sketch). In these problems we consider the Anderson model on such a lattice. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Green functions identities. &amp;lt;/em&amp;gt; Consider a Hamiltonian split into two parts, &amp;lt;math&amp;gt; H= H_0 + H_1 &amp;lt;/math&amp;gt;. Show that the following general relation for the Green functions holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G=G^0+ G^0 H_1 G, \quad \quad G^0 =\frac{1}{z-H_0}, \quad \quad G =\frac{1}{z-H}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
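Before proving the identity, one can convince oneself of it numerically. The sketch below is my own check, with an arbitrary Hermitian split of a small random Hamiltonian: it verifies G = G0 + G0 H1 G at a single z in the upper half-plane.

```python
import numpy as np

# Numerical check of the resolvent identity G = G0 + G0 H1 G
# for an arbitrary split H = H0 + H1.
rng = np.random.default_rng(3)
n = 6
H0 = np.diag(rng.normal(size=n))       # diagonal "unperturbed" part
A = rng.normal(size=(n, n))
H1 = (A + A.T) / 2.0                   # any Hermitian perturbation works
z = 0.7 + 0.3j                         # a point in the upper half-plane

G0 = np.linalg.inv(z * np.eye(n) - H0)
G = np.linalg.inv(z * np.eye(n) - (H0 + H1))
lhs, rhs = G, G0 + G0 @ H1 @ G
err = np.max(np.abs(lhs - rhs))        # zero up to rounding
```

The algebraic proof is one line: multiply (z - H0 - H1) G = 1 on the left by G0.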
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Cavity equations. &amp;lt;/em&amp;gt; We now apply this to a specific example: we consider a Bethe lattice, and choose one site 0 as the root. We then choose &amp;lt;math&amp;gt; H_1 &amp;lt;/math&amp;gt; to be the kinetic terms connecting the root to its &amp;lt;math&amp;gt; k+1 &amp;lt;/math&amp;gt; neighbors &amp;lt;math&amp;gt; a_i &amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
H_1= -\sum_{i=1}^{k+1} t_{0 a_i} \left( |a_i \rangle \langle 0|+ |0 \rangle \langle a_i|\right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
For all the &amp;lt;math&amp;gt; a_i &amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt; i=1, \cdots, k+1 &amp;lt;/math&amp;gt; we introduce the notation &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G^{\text{cav}}_{a_i} \equiv G^0_{a_i a_i}, \quad \quad \sigma^{\text{cav}}_{a_i} \equiv \sigma^0_{a_i},&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
where &amp;lt;math&amp;gt;  \sigma^0 &amp;lt;/math&amp;gt; is the self-energy associated with &amp;lt;math&amp;gt; G^0 &amp;lt;/math&amp;gt;. Show that, due to the geometry of the lattice, with this choice of &amp;lt;math&amp;gt; H_1 &amp;lt;/math&amp;gt; the Hamiltonian &amp;lt;math&amp;gt; H_0 &amp;lt;/math&amp;gt; is decoupled, and that &amp;lt;math&amp;gt; G^{\text{cav}}_{a_i} &amp;lt;/math&amp;gt; is the local Green function one would obtain by removing the root 0 from the lattice, i.e. by creating a “cavity” (hence the superscript). Moreover, iterating the Green function identity twice, show that &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G_{00}(z)= \frac{1}{z-W \, V_0 - \sum_{i=1}^{k+1} t^2_{0 a_i}G^{\text{cav}}_{a_i}(z)}  &lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Iterating this argument, show that if &amp;lt;math&amp;gt; \partial a_i &amp;lt;/math&amp;gt; denotes the collection of “descendants” of &amp;lt;math&amp;gt; a_i&amp;lt;/math&amp;gt;, i.e. the sites that are nearest neighbours of &amp;lt;math&amp;gt; a_i &amp;lt;/math&amp;gt; &amp;lt;em&amp;gt; except&amp;lt;/em&amp;gt; the root, then&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G^{\text{cav}}_{a_i}(z)=   \frac{1}{z-W \,V_{a_i} - \sum_{b \in \partial a_i}t^2_{a_i b}G^{\text{cav}}_{b}(z)}, \quad \quad \sigma^{\text{cav}}_{a_i}(z)=\sum_{b \in \partial a_i}t^2_{a_i b}G^{\text{cav}}_{b}(z)=\sum_{b \in \partial a_i} \frac{t^2_{a_i b}}{z- W \, V_b - \sigma^{\text{cav}}_{b}(z)}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Equations for the distribution. &amp;lt;/em&amp;gt; Justify why the cavity functions appearing in the denominators of the last equations above are independent and identically distributed random variables, so that the cavity equations can be interpreted as self-consistent equations for the distribution of the cavity functions.&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The “localized” solution. &amp;lt;/em&amp;gt; We set &amp;lt;math&amp;gt; z=E+ i \eta &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \sigma^{\text{cav}}_{a}(z)= R_a(z) -i \Gamma_a(z)&amp;lt;/math&amp;gt;. Show that the cavity equation for the self-energies is equivalent to the following pair of coupled equations:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Gamma_a= \sum_{b \in \partial a} t_{ab}^2 \frac{\Gamma_b + \eta}{(E- W \, V_b - R_b)^2+ (\Gamma_b +\eta)^2}, \quad \quad R_a =  \sum_{b \in \partial a} t_{ab}^2 \frac{E- W \, V_b - R_b}{(E- W \, V_b - R_b)^2+ (\Gamma_b +\eta)^2}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Justify why the solution corresponding to localization, &amp;lt;math&amp;gt; \Gamma_a=0 &amp;lt;/math&amp;gt;, is always a solution when &amp;lt;math&amp;gt; \eta \to 0 &amp;lt;/math&amp;gt;, and why in the localized phase one has &amp;lt;math&amp;gt; \Gamma_a \sim O(\eta) &amp;lt;/math&amp;gt; when &amp;lt;math&amp;gt; \eta &amp;lt;/math&amp;gt; is finite but small. How can one argue that this solution has to be discarded, i.e. that delocalisation occurs? &lt;br /&gt;
&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
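As a quick numerical sanity check of the identity G = G0 + G0 H1 G of point 1 (a sketch, not part of the problem set: the matrix size and the value of z below are arbitrary choices):

```python
# Numerical check of the resolvent identity G = G0 + G0 H1 G
# for a random Hermitian H = H0 + H1 (illustrative sizes and z).
import numpy as np

rng = np.random.default_rng(0)
N = 6
H0 = np.diag(rng.normal(size=N))   # "unperturbed" diagonal part
A = rng.normal(size=(N, N))
H1 = (A + A.T) / 2                 # Hermitian perturbation
np.fill_diagonal(H1, 0.0)

z = 0.3 + 1e-2j                    # spectral parameter, Im z positive
I = np.eye(N)
G0 = np.linalg.inv(z * I - H0)
G = np.linalg.inv(z * I - H0 - H1)

# the residual G - G0 - G0 H1 G should vanish
err = np.max(np.abs(G - G0 - G0 @ H1 @ G))
print(err)  # numerically zero, up to round-off
```

The identity is exact, so the residual is limited only by floating-point round-off.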
&amp;lt;br&amp;gt;&lt;br /&gt;
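The self-consistent equations of point 3 can be sampled numerically by population dynamics: a pool of cavity Green functions is repeatedly updated with the recursion, drawing the inputs at random from the pool itself. A minimal sketch, assuming branching number k = 2, hopping t = 1 and on-site energies W V with V uniform in [-1/2, 1/2] (illustrative choices, not prescribed by the problem):

```python
# Population-dynamics sketch of the cavity recursion for the Anderson
# model on the Bethe lattice (k = 2, t = 1, box disorder of width W).
import numpy as np

rng = np.random.default_rng(1)
k, W, E, eta = 2, 5.0, 0.0, 1e-3
z = E + 1j * eta
M = 10_000                          # population size

# initialise the pool of cavity Green functions with a simple guess
pool = np.full(M, 1.0 / z, dtype=complex)

for sweep in range(200):
    # each new member uses k members drawn at random, as in the recursion
    idx = rng.integers(0, M, size=(M, k))
    V = rng.uniform(-0.5, 0.5, size=M)
    pool = 1.0 / (z - W * V - pool[idx].sum(axis=1))

# the average local density of states follows from the imaginary part
rho = pool.imag.mean() / (-np.pi)
print(rho)
```

For this moderate disorder one expects a finite average density of states at E = 0; the same loop, run at different E and W, samples the full distribution of the cavity functions.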
&lt;br /&gt;
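A rough way to probe the question of point 4 numerically is to track how the typical imaginary part scales with eta: it stays proportional to eta in the localized phase, while it remains finite as eta shrinks in the delocalized phase. A sketch using the same kind of population dynamics (k = 2, t = 1, box disorder; W = 30 is an assumed value deep in the localized phase):

```python
# Scaling of the typical imaginary part of the cavity Green function
# with eta, as a numerical probe of the localized solution.
import numpy as np

def typical_gamma(W, eta, k=2, E=0.0, M=10_000, sweeps=200, seed=2):
    """Geometric mean of -Im G_cav from population dynamics at Im z = eta."""
    rng = np.random.default_rng(seed)
    z = E + 1j * eta
    pool = np.full(M, 1.0 / z, dtype=complex)
    for _ in range(sweeps):
        idx = rng.integers(0, M, size=(M, k))
        V = rng.uniform(-0.5, 0.5, size=M)
        pool = 1.0 / (z - W * V - pool[idx].sum(axis=1))
    # -Im G_cav is strictly positive for eta positive, so the log is safe
    return np.exp(np.mean(np.log(-pool.imag)))

# in the localized phase the typical value tracks eta: reducing eta
# by a factor 10 should reduce the typical value by roughly 10 as well
ratio = typical_gamma(30.0, 1e-3) / typical_gamma(30.0, 1e-4)
print(ratio)
```

Repeating this at weak disorder, the ratio stays close to 1 instead, signalling a finite Gamma in the limit and hence delocalisation.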
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Green functions, self-energies, return probability amplitude, decay rates, trees and cavity method, the criterion for localization.&lt;br /&gt;
&lt;br /&gt;
== To know more ==&lt;br /&gt;
* The Anderson model was formulated by P. W. Anderson in 1958, in the paper &amp;lt;em&amp;gt;Absence of diffusion in certain random lattices&amp;lt;/em&amp;gt;, Phys. Rev. 109, 1492 (1958).&lt;br /&gt;
* The Bethe lattice version is discussed in the later paper by R. Abou-Chacra, P. W. Anderson and D. J. Thouless, &amp;lt;em&amp;gt;A selfconsistent theory of localization&amp;lt;/em&amp;gt;, J. Phys. C: Solid State Phys. 6, 1734 (1973).&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=File:2025_P7_solutions_.pdf&amp;diff=4280</id>
		<title>File:2025 P7 solutions .pdf</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=File:2025_P7_solutions_.pdf&amp;diff=4280"/>
		<updated>2026-03-15T16:40:23Z</updated>

		<summary type="html">&lt;p&gt;Ros: Ros uploaded a new version of File:2025 P7 solutions .pdf&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=File:2025_P666_solutions_.pdf&amp;diff=4279</id>
		<title>File:2025 P666 solutions .pdf</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=File:2025_P666_solutions_.pdf&amp;diff=4279"/>
		<updated>2026-03-15T16:39:26Z</updated>

		<summary type="html">&lt;p&gt;Ros: Ros uploaded a new version of File:2025 P666 solutions .pdf&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4273</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4273"/>
		<updated>2026-03-15T16:08:54Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Exercises */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the WIKI page of the M2 ICFP course on the Statistical Physics of Disordered Systems, second semester 2026.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Where and When =&lt;br /&gt;
&lt;br /&gt;
* Each Monday, 2 pm to 6 pm, from January 19th to March 23rd. No lecture on 23/02/26.&lt;br /&gt;
* Room 202 in Jussieu campus, Tours 54-55 until 16th February&lt;br /&gt;
* Room 209 in Jussieu campus, Tours 56-66 from 2nd March  &#039;&#039;&#039;Attention: ROOM CHANGE!&#039;&#039;&#039;&lt;br /&gt;
* Each session is a mixture of lectures and exercises.&lt;br /&gt;
&lt;br /&gt;
= The Team =&lt;br /&gt;
&lt;br /&gt;
*  [https://vale1925.wixsite.com/vros  Valentina Ros] - vale1925@gmail.com&lt;br /&gt;
* [http://lptms.u-psud.fr/alberto_rosso/ Alberto Rosso] - alberto.rosso74@gmail.com&lt;br /&gt;
&lt;br /&gt;
= Course description =&lt;br /&gt;
&lt;br /&gt;
This course deals with systems in which the presence of impurities or amorphous structures (in other words, of disorder) radically influences the physics, generating novel phenomena. These phenomena involve the properties of the system at equilibrium (freezing and glass transitions), as well as its dynamical evolution out of equilibrium (pinning, avalanches), giving rise to ergodicity breaking both in the absence and in the presence of quantum fluctuations (classical metastability, quantum localization). We discuss the main statistical physics models that capture the phenomenology of these systems, as well as the powerful theoretical tools (replica theory, large deviations, random matrix theory, scaling arguments, strong-disorder expansions) that have been developed to characterize their physics quantitatively. These theoretical tools nowadays have a huge impact in a variety of fields that go well beyond statistical physics (computer science, probability, condensed matter, theoretical biology). Below is a list of topics discussed during the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Finite-dimensional disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  Introduction to disordered systems and to the spin glass transition.&lt;br /&gt;
*  Interface growth. Directed polymers in random media. Scenarios for the glass transition: KPZ in d&amp;gt;2. &lt;br /&gt;
*  Depinning and avalanches. Bienaymé-Galton-Watson processes. &lt;br /&gt;
*  Anderson localization: introduction. Localization in 1D: transfer matrix and Lyapunov exponent. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mean-field disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  The simplest spin-glass: solution of the Random Energy Model.&lt;br /&gt;
*  The replica method: the solution of the spherical p-spin model. Sketch of the solution of the Sherrington-Kirkpatrick model (full RSB).&lt;br /&gt;
*  Towards glassy dynamics: rugged landscapes. Slow dynamics and aging: the trap model.&lt;br /&gt;
*  The Anderson model on the Bethe lattice: the mobility edge.&lt;br /&gt;
&lt;br /&gt;
=Lectures and tutorials=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the layout of the formulas is bad, it might be because you are using Safari. Try opening the wiki with Firefox or Chrome.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  14h00-15h45&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  16h00-17h45&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 1 (19/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L1_ICTS| Spin Glass Transition (Alberto)]]&lt;br /&gt;
&amp;lt;!--[[H_1|Exercises on Extreme Values Statistics]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-I|  A dictionary. The REM: energy landscape (Valentina)]]&amp;amp;nbsp; [[Media:2025 P1 solutions.pdf| Sol Prob.1 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 2 (26/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L2_ICFP| Stochastic Interfaces and growth (Alberto)]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-2|The REM: freezing, condensation, glassiness (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P2 solutions.pdf| Sol Prob.2 ]]  &lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 3 (02/02)&lt;br /&gt;
|&lt;br /&gt;
* [[L-3|Directed polymer in random media (Alberto)]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-4| KPZ and glassiness in finite dimension (Alberto)]]  [[https://colab.research.google.com/drive/1PTya42ZS2kU87A-BxQFFIUDTs_k47men?usp=sharing| notebook]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 4 (9/02) and Week 5 (16/02)&lt;br /&gt;
|&lt;br /&gt;
* [[T-3| Spin glasses, equilibrium: replicas, the steps (Valentina)]]&amp;amp;nbsp;  [[Media:2025 P3 solutions.pdf| Sol Prob.3 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-4| Spin glasses, equilibrium: replicas, the interpretation (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P4 solutions.pdf| Sol Probs. 4 ]]&amp;amp;nbsp; &lt;br /&gt;
[[Media:2025_Parisi_scheme.pdf| Notes: Probing states with replicas]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 6 (02/03)&lt;br /&gt;
|&lt;br /&gt;
* [[LBan-IV| Driven Disordered Materials  (Alberto)]] [[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[LBan-V| Avalanches in Disordered Materials (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 7 (9/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-7| Anderson localization: introduction (Alberto)]]&lt;br /&gt;
| &lt;br /&gt;
* [[T-5| Rugged landscapes: counting metastable states (Valentina)]] &amp;amp;nbsp;[[Media:2025 P5 solutions.pdf| Sol Prob.5 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 8 (16/03)&lt;br /&gt;
|&lt;br /&gt;
* [[T-6| Rugged landscapes: stability of metastable states (Valentina)]] &amp;lt;!--[[Media:2025 P666 solutions .pdf| Sol Prob.6 ]]--&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
* [[T-7| Trap model and aging dynamics (Valentina)]] &amp;lt;!--[[Media:2025 P7 solutions .pdf| Sol Probs.7 ]]--&amp;gt;&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 9 (23/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-8| Localization in 1D, transfer matrix and Lyapunov exponent (Alberto)]]  [[https://colab.research.google.com/drive/1ZJ0yvMrtflWNNmPfaRQ8KTteoWfm0bqk?usp=sharing| notebook]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-9|Multifractality, tails (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Extra&lt;br /&gt;
|&lt;br /&gt;
* [[T-8| Localization on Bethe lattice: cavity &amp;amp; recursion (Valentina)]] &amp;lt;!--[[Media:2025 P8 solutions.pdf| Sol Prob.8 ]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-9| Localization on Bethe lattice: stability &amp;amp; mobility edge (Valentina)]] &amp;lt;!--[[Media:2025 P9 solutions.pdf| Sol Prob.9 ]]&amp;amp;nbsp;&lt;br /&gt;
[[Media:2025_localization_notes.pdf| Notes: Localization: no dissipation, no self-bath]]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--== Homework ==&lt;br /&gt;
There are two homeworks: Homework 1 on Random Matrices, and Homework 2 on the topics of the lecture. &lt;br /&gt;
[[Media:RMT_introduction.pdf| Homework 1 on Random Matrices ]]&lt;br /&gt;
[[Media:2025_HW2.pdf| Homework 2 on topics of lectures ]]&lt;br /&gt;
Homework 1 is worth 5 points, Homework 2 is worth 15 points.&lt;br /&gt;
&#039;&#039;&#039;In the final grade calculation, the total score from both assignments will have a weight of 0.25, while the exam will account for 0.75.&#039;&#039;&#039;&lt;br /&gt;
Homework 1 due by Monday, February 17th.&lt;br /&gt;
Homework 2 due by Monday, March 24th.&lt;br /&gt;
&#039;&#039;&#039;Extra&#039;&#039;&#039;&lt;br /&gt;
Here is a notebook on random matrices (made by M. Biroli) with two coding exercises. You can download the notebook from the link below, and use the online platform: [https://jupyter.org/try-jupyter/lab/  Jupyter] to modify the notebook and add the solutions to the two exercises.&lt;br /&gt;
[https://colab.research.google.com/drive/13z_RnRlCq5p3ihDQulOPftqb05nsrTqQ?usp=sharing| Homework 1: notebook]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 1: [[Media:Exercises_1-3.pdf| Exercises 1-3 on extreme value statistics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 2: &lt;br /&gt;
[[Media:Tutorial_and_Exercise_4.pdf| Tutorial and Exercise 4 on random matrices]] &amp;lt;br&amp;gt; &lt;br /&gt;
[[Media:Exercises_5-6.pdf| Exercises 5-6 on the random energy model]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 3: &lt;br /&gt;
[[Media:Exercises 7&amp;amp;8.pdf| Exercises 7-8 on interfaces]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 4: &lt;br /&gt;
[[Media:Exercises 9-10.pdf| Exercises 9-10 on glassiness]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 5 and 6: &lt;br /&gt;
[[Media:11-12_Exercises.pdf| Exercises 11-12 on dynamics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 7: &lt;br /&gt;
[[Media:Exercises_13_15.pdf| Exercises 13-15 on branching and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 8: &lt;br /&gt;
[[Media:Exercise_16-17.pdf| Exercises 16-17 on trap model and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;&lt;br /&gt;
[[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
&amp;lt;/li&amp;gt; --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Evaluation and exam =&lt;br /&gt;
&lt;br /&gt;
The exam will be on &#039;&#039;&#039;Monday, March 30th 2026&#039;&#039;&#039;. It will be a written exam, 3 hours long, consisting of two parts:&lt;br /&gt;
&lt;br /&gt;
Part 1: theory questions, see [[Sample questions|HERE]] for some examples. &lt;br /&gt;
&lt;br /&gt;
Part 2: you will be asked to solve pieces of the 17 exercises given to you in advance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You are not allowed to bring any material (printed notes, handwritten notes) nor to use any device during the exam.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
All relevant formulas will be provided in the text of the exam. One printed version of the WIKI pages will be available for you to consult. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The students have two possibilities:&lt;br /&gt;
&lt;br /&gt;
(1) A final written exam which counts for the total grade.&lt;br /&gt;
&lt;br /&gt;
(2) An homework assignement  + a written exam. The final grade is given by a weighted average of the two grades (the homework counts 1/4 and the written exam 3/4).&lt;br /&gt;
&#039;&#039;&#039;The written exam will be on Monday, March 31st 2025 in the Jussieu campus, Room 101, Tours 14 - 24, from 2pm to 5pm.&#039;&#039;&#039;--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4272</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4272"/>
		<updated>2026-03-15T16:08:45Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Exercises */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the WIKI page of the M2 ICFP course on the Statistical Physics of Disordered Systems, second semester 2026.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Where and When =&lt;br /&gt;
&lt;br /&gt;
* Each Monday, 2 pm to 6 pm, from January 19th to March 23rd. No lecture on 23/02/26.&lt;br /&gt;
* Room 202 in Jussieu campus, Tours 54-55 until 16th February&lt;br /&gt;
* Room 209 in Jussieu campus, Tours 56-66 from 2nd March  &#039;&#039;&#039;Attention: ROOM CHANGE!&#039;&#039;&#039;&lt;br /&gt;
* Each session is a mixture of lectures and exercises.&lt;br /&gt;
&lt;br /&gt;
= The Team =&lt;br /&gt;
&lt;br /&gt;
*  [https://vale1925.wixsite.com/vros  Valentina Ros] - vale1925@gmail.com&lt;br /&gt;
* [http://lptms.u-psud.fr/alberto_rosso/ Alberto Rosso] - alberto.rosso74@gmail.com&lt;br /&gt;
&lt;br /&gt;
= Course description =&lt;br /&gt;
&lt;br /&gt;
This course deals with systems in which the presence of impurities or amorphous structures (in other words, of disorder) radically influences the physics, generating novel phenomena. These phenomena involve the properties of the system at equilibrium (freezing and glass transitions), as well as its dynamical evolution out of equilibrium (pinning, avalanches), giving rise to ergodicity breaking both in the absence and in the presence of quantum fluctuations (classical metastability, quantum localization). We discuss the main statistical physics models that capture the phenomenology of these systems, as well as the powerful theoretical tools (replica theory, large deviations, random matrix theory, scaling arguments, strong-disorder expansions) that have been developed to characterize their physics quantitatively. These theoretical tools nowadays have a huge impact in a variety of fields that go well beyond statistical physics (computer science, probability, condensed matter, theoretical biology). Below is a list of topics discussed during the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Finite-dimensional disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  Introduction to disordered systems and to the spin glass transition.&lt;br /&gt;
*  Interface growth. Directed polymers in random media. Scenarios for the glass transition: KPZ in d&amp;gt;2. &lt;br /&gt;
*  Depinning and avalanches. Bienaymé-Galton-Watson processes. &lt;br /&gt;
*  Anderson localization: introduction. Localization in 1D: transfer matrix and Lyapunov exponent. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mean-field disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  The simplest spin-glass: solution of the Random Energy Model.&lt;br /&gt;
*  The replica method: the solution of the spherical p-spin model. Sketch of the solution of the Sherrington-Kirkpatrick model (full RSB).&lt;br /&gt;
*  Towards glassy dynamics: rugged landscapes. Slow dynamics and aging: the trap model.&lt;br /&gt;
*  The Anderson model on the Bethe lattice: the mobility edge.&lt;br /&gt;
&lt;br /&gt;
=Lectures and tutorials=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the layout of the formulas is bad, it might be because you are using Safari. Try opening the wiki with Firefox or Chrome.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  14h00-15h45&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  16h00-17h45&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 1 (19/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L1_ICTS| Spin Glass Transition (Alberto)]]&lt;br /&gt;
&amp;lt;!--[[H_1|Exercises on Extreme Values Statistics]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-I|  A dictionary. The REM: energy landscape (Valentina)]]&amp;amp;nbsp; [[Media:2025 P1 solutions.pdf| Sol Prob.1 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 2 (26/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L2_ICFP| Stochastic Interfaces and growth (Alberto)]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-2|The REM: freezing, condensation, glassiness (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P2 solutions.pdf| Sol Prob.2 ]]  &lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 3 (02/02)&lt;br /&gt;
|&lt;br /&gt;
* [[L-3|Directed polymer in random media (Alberto)]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-4| KPZ and glassiness in finite dimension (Alberto)]]  [[https://colab.research.google.com/drive/1PTya42ZS2kU87A-BxQFFIUDTs_k47men?usp=sharing| notebook]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 4 (9/02) and Week 5 (16/02)&lt;br /&gt;
|&lt;br /&gt;
* [[T-3| Spin glasses, equilibrium: replicas, the steps (Valentina)]]&amp;amp;nbsp;  [[Media:2025 P3 solutions.pdf| Sol Prob.3 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-4| Spin glasses, equilibrium: replicas, the interpretation (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P4 solutions.pdf| Sol Probs. 4 ]]&amp;amp;nbsp; &lt;br /&gt;
[[Media:2025_Parisi_scheme.pdf| Notes: Probing states with replicas]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 6 (02/03)&lt;br /&gt;
|&lt;br /&gt;
* [[LBan-IV| Driven Disordered Materials  (Alberto)]] [[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[LBan-V| Avalanches in Disordered Materials (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 7 (9/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-7| Anderson localization: introduction (Alberto)]]&lt;br /&gt;
| &lt;br /&gt;
* [[T-5| Rugged landscapes: counting metastable states (Valentina)]] &amp;amp;nbsp;[[Media:2025 P5 solutions.pdf| Sol Prob.5 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 8 (16/03)&lt;br /&gt;
|&lt;br /&gt;
* [[T-6| Rugged landscapes: stability of metastable states (Valentina)]] &amp;lt;!--[[Media:2025 P666 solutions .pdf| Sol Prob.6 ]]--&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
* [[T-7| Trap model and aging dynamics (Valentina)]] &amp;lt;!--[[Media:2025 P7 solutions .pdf| Sol Probs.7 ]]--&amp;gt;&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 9 (23/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-8| Localization in 1D, transfer matrix and Lyapunov exponent (Alberto)]]  [[https://colab.research.google.com/drive/1ZJ0yvMrtflWNNmPfaRQ8KTteoWfm0bqk?usp=sharing| notebook]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-9|Multifractality, tails (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Extra&lt;br /&gt;
|&lt;br /&gt;
* [[T-8| Localization on Bethe lattice: cavity &amp;amp; recursion (Valentina)]] &amp;lt;!--[[Media:2025 P8 solutions.pdf| Sol Prob.8 ]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-9| Localization on Bethe lattice: stability &amp;amp; mobility edge (Valentina)]] &amp;lt;!--[[Media:2025 P9 solutions.pdf| Sol Prob.9 ]]&amp;amp;nbsp;&lt;br /&gt;
[[Media:2025_localization_notes.pdf| Notes: Localization: no dissipation, no self-bath]]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--== Homework ==&lt;br /&gt;
There are two homeworks: Homework 1 on Random Matrices, and Homework 2 on the topics of the lecture. &lt;br /&gt;
[[Media:RMT_introduction.pdf| Homework 1 on Random Matrices ]]&lt;br /&gt;
[[Media:2025_HW2.pdf| Homework 2 on topics of lectures ]]&lt;br /&gt;
Homework 1 is worth 5 points, Homework 2 is worth 15 points.&lt;br /&gt;
&#039;&#039;&#039;In the final grade calculation, the total score from both assignments will have a weight of 0.25, while the exam will account for 0.75.&#039;&#039;&#039;&lt;br /&gt;
Homework 1 due by Monday, February 17th.&lt;br /&gt;
Homework 2 due by Monday, March 24th.&lt;br /&gt;
&#039;&#039;&#039;Extra&#039;&#039;&#039;&lt;br /&gt;
Here is a notebook on random matrices (made by M. Biroli) with two coding exercises. You can download the notebook from the link below, and use the online platform: [https://jupyter.org/try-jupyter/lab/  Jupyter] to modify the notebook and add the solutions to the two exercises.&lt;br /&gt;
[https://colab.research.google.com/drive/13z_RnRlCq5p3ihDQulOPftqb05nsrTqQ?usp=sharing| Homework 1: notebook]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 1: [[Media:Exercises_1-3.pdf| Exercises 1-3 on extreme value statistics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 2: &lt;br /&gt;
[[Media:Tutorial_and_Exercise_4.pdf| Tutorial and Exercise 4 on random matrices]] &amp;lt;br&amp;gt; &lt;br /&gt;
[[Media:Exercises_5-6.pdf| Exercises 5-6 on the random energy model]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 3: &lt;br /&gt;
[[Media:Exercises 7&amp;amp;8.pdf| Exercises 7-8 on interfaces]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 4: &lt;br /&gt;
[[Media:Exercises 9-10.pdf| Exercises 9-10 on glassiness]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 5 and 6: &lt;br /&gt;
[[Media:11-12_Exercises.pdf| Exercises 11-12 on dynamics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 7: &lt;br /&gt;
[[Media:Exercises_13_15.pdf| Exercises 13-15 on branching and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 8: &lt;br /&gt;
[[Media:Exercise_16-17.pdf| Exercises 16-17 on trap model and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;&lt;br /&gt;
[[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
&amp;lt;/li&amp;gt; --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Evaluation and exam =&lt;br /&gt;
&lt;br /&gt;
The exam will be on &#039;&#039;&#039;Monday, March 30th 2026&#039;&#039;&#039;. It will be a written exam, 3 hours long, consisting of two parts:&lt;br /&gt;
&lt;br /&gt;
Part 1: theory questions, see [[Sample questions|HERE]] for some examples. &lt;br /&gt;
&lt;br /&gt;
Part 2: you will be asked to solve pieces of the 17 exercises given to you in advance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You are not allowed to bring any material (printed notes, handwritten notes) nor to use any device during the exam.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
All relevant formulas will be provided in the text of the exam. One printed version of the WIKI pages will be available for you to consult. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The students have two possibilities:&lt;br /&gt;
&lt;br /&gt;
(1) A final written exam which counts for the total grade.&lt;br /&gt;
&lt;br /&gt;
(2) An homework assignement  + a written exam. The final grade is given by a weighted average of the two grades (the homework counts 1/4 and the written exam 3/4).&lt;br /&gt;
&#039;&#039;&#039;The written exam will be on Monday, March 31st 2025 in the Jussieu campus, Room 101, Tours 14 - 24, from 2pm to 5pm.&#039;&#039;&#039;--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4270</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4270"/>
		<updated>2026-03-15T16:07:22Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Exercises */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the WIKI page of the M2 ICFP course on the Statistical Physics of Disordered Systems, second semester 2026.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Where and When =&lt;br /&gt;
&lt;br /&gt;
* Each Monday, 2 pm to 6 pm, from January 19th to March 23rd. No lecture on 23/02/26.&lt;br /&gt;
* Room 202 in Jussieu campus, Tours 54-55 until 16th February&lt;br /&gt;
* Room 209 in Jussieu campus, Tours 56-66 from 2nd March  &#039;&#039;&#039;Attention: ROOM CHANGE!&#039;&#039;&#039;&lt;br /&gt;
* Each session is a mixture of lectures and exercises.&lt;br /&gt;
&lt;br /&gt;
= The Team =&lt;br /&gt;
&lt;br /&gt;
*  [https://vale1925.wixsite.com/vros  Valentina Ros] - vale1925@gmail.com&lt;br /&gt;
* [http://lptms.u-psud.fr/alberto_rosso/ Alberto Rosso] - alberto.rosso74@gmail.com&lt;br /&gt;
&lt;br /&gt;
= Course description =&lt;br /&gt;
&lt;br /&gt;
This course deals with systems in which the presence of impurities or amorphous structures (in other words, of disorder) radically influences the physics, generating novel phenomena. These phenomena involve the properties of the system at equilibrium (freezing and glass transitions), as well as its dynamical evolution out of equilibrium (pinning, avalanches), giving rise to ergodicity breaking both in the absence and in the presence of quantum fluctuations (classical metastability, quantum localization). We discuss the main statistical physics models that capture the phenomenology of these systems, as well as the powerful theoretical tools (replica theory, large deviations, random matrix theory, scaling arguments, strong-disorder expansions) that have been developed to characterize their physics quantitatively. These theoretical tools nowadays have a huge impact in a variety of fields that go well beyond statistical physics (computer science, probability, condensed matter, theoretical biology). Below is a list of topics discussed during the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Finite-dimensional disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  Introduction to disordered systems and to the spin glass transition.&lt;br /&gt;
*  Interface growth. Directed polymers in random media. Scenarios for the glass transition: KPZ in d&amp;gt;2. &lt;br /&gt;
*  Depinning and avalanches. Bienaymé-Galton-Watson processes. &lt;br /&gt;
*  Anderson localization: introduction. Localization in 1D: transfer matrix and Lyapunov exponent. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mean-field disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  The simplest spin-glass: solution of the Random Energy Model.&lt;br /&gt;
*  The replica method: the solution of the spherical p-spin model. Sketch of the solution of the Sherrington-Kirkpatrick model (full RSB).&lt;br /&gt;
*  Towards glassy dynamics: rugged landscapes. Slow dynamics and aging: the trap model.&lt;br /&gt;
*  The Anderson model on the Bethe lattice: the mobility edge.&lt;br /&gt;
&lt;br /&gt;
=Lectures and tutorials=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the layout of the formulas is bad, it might be because you are using Safari. Try opening the wiki with Firefox or Chrome.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  14h00-15h45&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  16h00-17h45&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 1 (19/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L1_ICTS| Spin Glass Transition (Alberto)]]&lt;br /&gt;
&amp;lt;!--[[H_1|Exercises on Extreme Values Statistics]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-I|  A dictionary. The REM: energy landscape (Valentina)]]&amp;amp;nbsp; [[Media:2025 P1 solutions.pdf| Sol Prob.1 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 2 (26/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L2_ICFP| Stochastic Interfaces and growth (Alberto)]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-2|The REM: freezing, condensation, glassiness (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P2 solutions.pdf| Sol Prob.2 ]]  &lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 3 (02/02)&lt;br /&gt;
|&lt;br /&gt;
* [[L-3|Directed polymer in random media (Alberto)]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-4| KPZ and glassiness in finite dimension (Alberto)]]  [https://colab.research.google.com/drive/1PTya42ZS2kU87A-BxQFFIUDTs_k47men?usp=sharing notebook]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 4 (9/02) and Week 5 (16/02)&lt;br /&gt;
|&lt;br /&gt;
* [[T-3| Spin glasses, equilibrium: replicas, the steps (Valentina)]]&amp;amp;nbsp;  [[Media:2025 P3 solutions.pdf| Sol Prob.3 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-4| Spin glasses, equilibrium: replicas, the interpretation (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P4 solutions.pdf| Sol Probs. 4 ]]&amp;amp;nbsp; &lt;br /&gt;
[[Media:2025_Parisi_scheme.pdf| Notes: Probing states with replicas]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 6 (02/03)&lt;br /&gt;
|&lt;br /&gt;
* [[LBan-IV| Driven Disordered Materials  (Alberto)]] [[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[LBan-V| Avalanches in Disordered Materials (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 7 (9/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-7| Anderson localization: introduction (Alberto)]]&lt;br /&gt;
| &lt;br /&gt;
* [[T-5| Rugged landscapes: counting metastable states (Valentina)]] &amp;amp;nbsp;[[Media:2025 P5 solutions.pdf| Sol Prob.5 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 8 (16/03)&lt;br /&gt;
|&lt;br /&gt;
* [[T-6| Rugged landscapes: stability of metastable states (Valentina)]] &amp;lt;!--[[Media:2025 P666 solutions .pdf| Sol Prob.6 ]]--&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
* [[T-7| Trap model and aging dynamics (Valentina)]] &amp;lt;!--[[Media:2025 P7 solutions .pdf| Sol Probs.7 ]]--&amp;gt;&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 9 (23/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-8| Localization in 1D, transfer matrix and Lyapunov exponent (Alberto)]]  [https://colab.research.google.com/drive/1ZJ0yvMrtflWNNmPfaRQ8KTteoWfm0bqk?usp=sharing notebook]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-9|Multifractality, tails (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Extra&lt;br /&gt;
|&lt;br /&gt;
* [[T-8| Localization on Bethe lattice: cavity &amp;amp; recursion (Valentina)]] &amp;lt;!--[[Media:2025 P8 solutions.pdf| Sol Prob.8 ]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-9| Localization on Bethe lattice: stability &amp;amp; mobility edge (Valentina)]] &amp;lt;!--[[Media:2025 P9 solutions.pdf| Sol Prob.9 ]]&amp;amp;nbsp;&lt;br /&gt;
[[Media:2025_localization_notes.pdf| Notes: Localization: no dissipation, no self-bath]]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--== Homework ==&lt;br /&gt;
There are two homeworks: Homework 1 on Random Matrices, and Homework 2 on the topics of the lecture. &lt;br /&gt;
[[Media:RMT_introduction.pdf| Homework 1 on Random Matrices ]]&lt;br /&gt;
[[Media:2025_HW2.pdf| Homework 2 on topics of lectures ]]&lt;br /&gt;
Homework 1 is worth 5 points, Homework 2 is worth 15 points.&lt;br /&gt;
&#039;&#039;&#039;In the final grade calculation, the total score from both assignments will have a weight of 0.25, while the exam will account for 0.75.&#039;&#039;&#039;&lt;br /&gt;
Homework 1 due by Monday, February 17th.&lt;br /&gt;
Homework 2 due by Monday, March 24th.&lt;br /&gt;
&#039;&#039;&#039;Extra&#039;&#039;&#039;&lt;br /&gt;
Here is a notebook on random matrices (made by M. Biroli) with two coding exercises. You can download the notebook from the link below, and use the online platform: [https://jupyter.org/try-jupyter/lab/  Jupyter] to modify the notebook and add the solutions to the two exercises.&lt;br /&gt;
[https://colab.research.google.com/drive/13z_RnRlCq5p3ihDQulOPftqb05nsrTqQ?usp=sharing| Homework 1: notebook]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 1: [[Media:Exercises_1-3.pdf| Exercises 1-3 on extreme value statistics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 2: &lt;br /&gt;
[[Media:Tutorial_and_Exercise_4.pdf| Tutorial and Exercise 4 on random matrices]] &amp;lt;br&amp;gt; &lt;br /&gt;
[[Media:Exercises_5-6.pdf| Exercises 5-6 on the random energy model]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 3: &lt;br /&gt;
[[Media:Exercises 7&amp;amp;8.pdf| Exercises 7-8 on interfaces]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 4: &lt;br /&gt;
[[Media:Exercises 9-10.pdf| Exercises 9-10 on glassiness]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 5 and 6: &lt;br /&gt;
[[Media:11-12_Exercises.pdf| Exercises 11-12 on dynamics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 7: &lt;br /&gt;
[[Media:Exercises_13_15.pdf| Exercises 13-15 on branching and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 8: &lt;br /&gt;
[[Media:Exercise_16-17.pdf| Exercises 16-17 on trap model and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
[[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Evaluation and exam =&lt;br /&gt;
&lt;br /&gt;
The exam will be on &#039;&#039;&#039;Monday, March 30th 2026&#039;&#039;&#039;. It will be a written exam, 3 hours long. It consists of two parts:&lt;br /&gt;
&lt;br /&gt;
Part 1: theory questions, see [[Sample questions|HERE]] for some examples. &lt;br /&gt;
&lt;br /&gt;
Part 2: you will be asked to solve pieces of the 17 exercises given to you in advance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You are not allowed to bring any material (printed notes, handwritten notes) nor to use any device during the exam.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
All relevant formulas will be provided in the text of the exam. One printed copy of the WIKI pages will be available for you to consult. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The students have two possibilities:&lt;br /&gt;
&lt;br /&gt;
(1) A final written exam which counts for the total grade.&lt;br /&gt;
&lt;br /&gt;
(2) An homework assignement  + a written exam. The final grade is given by a weighted average of the two grades (the homework counts 1/4 and the written exam 3/4).&lt;br /&gt;
&#039;&#039;&#039;The written exam will be on Monday, March 31st 2025 in the Jussieu campus, Room 101, Tours 14 - 24, from 2pm to 5pm.&#039;&#039;&#039;--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=File:Exercise_16-17.pdf&amp;diff=4269</id>
		<title>File:Exercise 16-17.pdf</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=File:Exercise_16-17.pdf&amp;diff=4269"/>
		<updated>2026-03-15T16:06:49Z</updated>

		<summary type="html">&lt;p&gt;Ros: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Sample_questions&amp;diff=4268</id>
		<title>Sample questions</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Sample_questions&amp;diff=4268"/>
		<updated>2026-03-15T16:03:47Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Theory Questions: Example 1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==Theory Questions: Example 1==&lt;br /&gt;
&lt;br /&gt;
In the fully connected depinning model discussed in the lectures, avalanches occur when an unstable site jumps to the next pinning well and redistributes stress to all other sites.&lt;br /&gt;
&lt;br /&gt;
Assume that the distribution of distances to instability is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P(x)=2 e^{-2x}, \qquad x&amp;gt;0 ,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
and that the jump sizes are distributed as&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
g(\Delta)=e^{-\Delta}, \qquad \Delta&amp;gt;0 .&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Which of the following statements is correct?&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;A.&#039;&#039;&#039; Avalanches are subcritical because the average kick is smaller than the average gap.&lt;br /&gt;
* &#039;&#039;&#039;B.&#039;&#039;&#039; Avalanches are critical because the average kick equals the average gap.&lt;br /&gt;
* &#039;&#039;&#039;C.&#039;&#039;&#039; Avalanches are supercritical because the average kick is larger than the average gap.&lt;br /&gt;
* &#039;&#039;&#039;D.&#039;&#039;&#039; The information provided is insufficient to determine the regime.&lt;br /&gt;
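A quick numerical aside, not part of the original question: the two averages entering the criticality criterion can be checked directly from the given densities (the helper name and quadrature grid below are illustrative choices):

```python
import math

def mean_exp(rate, upper=50.0, n=200_000):
    # Midpoint-rule integral of x * rate * exp(-rate*x) over [0, upper];
    # the tail beyond `upper` is negligible for these rates.
    h = upper / n
    return sum((i + 0.5) * h * rate * math.exp(-rate * (i + 0.5) * h) * h
               for i in range(n))

avg_gap = mean_exp(2.0)   # mean of P(x) = 2 e^{-2x}, analytically 1/2
avg_kick = mean_exp(1.0)  # mean of g(Delta) = e^{-Delta}, analytically 1
```

Comparing the two numbers against the branching criterion discussed in the lectures then singles out the correct statement.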
&lt;br /&gt;
&lt;br /&gt;
==Theory Questions: Example 2==&lt;br /&gt;
Consider the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model in the &amp;lt;math&amp;gt;T&amp;lt;T_c&amp;lt;/math&amp;gt; spin-glass phase, where &amp;lt;math&amp;gt;T_c&amp;lt;/math&amp;gt; is the temperature at which Replica Symmetry Breaking occurs. Consider the overlap &amp;lt;math&amp;gt;q(\sigma,\sigma&#039;)&amp;lt;/math&amp;gt; between two configurations sampled from the Boltzmann measure: is this a self-averaging quantity? Why?&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Sample_questions&amp;diff=4267</id>
		<title>Sample questions</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Sample_questions&amp;diff=4267"/>
		<updated>2026-03-15T16:02:11Z</updated>

		<summary type="html">&lt;p&gt;Ros: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==Theory Questions: Example 1==&lt;br /&gt;
Consider the spherical p-spin model in the $T&amp;lt;T_c$ spin glass phase, where $T_c$ is the temperature where Replica Symmetry Breaking occurs. Consider the overlap $q(\sigma, \sigma&#039;)$ between two configurations extracted with the Boltzmann measure: is this a self-averaging quantity? Why?&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Sample_questions&amp;diff=4266</id>
		<title>Sample questions</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Sample_questions&amp;diff=4266"/>
		<updated>2026-03-15T16:01:28Z</updated>

		<summary type="html">&lt;p&gt;Ros: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
Consider the spherical $p$-spin model in the $T&amp;lt;T_c$ spin glass phase, where $T_c$ is the temperature where Replica Symmetry Breaking occurs. Consider the overlap $q(\sigma, \sigma&#039;)$ between two configurations extracted with the Boltzmann measure: is this a self-averaging quantity? Why?&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Sample_questions&amp;diff=4265</id>
		<title>Sample questions</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Sample_questions&amp;diff=4265"/>
		<updated>2026-03-15T16:00:36Z</updated>

		<summary type="html">&lt;p&gt;Ros: Created page with &amp;quot;XZXZXZ&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;XZXZXZ&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4264</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4264"/>
		<updated>2026-03-15T16:00:22Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Evaluation and exam */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the WIKI page of the M2 ICFP course on the Statistical Physics of Disordered Systems, second semester 2026.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Where and When =&lt;br /&gt;
&lt;br /&gt;
* Each Monday at 2pm - 6 pm, from January 19th to March 23rd. No lecture on 23/02/26.&lt;br /&gt;
* Room 202 in Jussieu campus, Tours 54-55 until 16th February&lt;br /&gt;
* Room 209 in Jussieu campus, Tours 56-66 from 2nd March  &#039;&#039;&#039;Attention: ROOM CHANGE!&#039;&#039;&#039;&lt;br /&gt;
* Each session is a mixture of lectures and exercises.&lt;br /&gt;
&lt;br /&gt;
= The Team =&lt;br /&gt;
&lt;br /&gt;
*  [https://vale1925.wixsite.com/vros  Valentina Ros] - vale1925@gmail.com&lt;br /&gt;
* [http://lptms.u-psud.fr/alberto_rosso/ Alberto Rosso] - alberto.rosso74@gmail.com&lt;br /&gt;
&lt;br /&gt;
= Course description =&lt;br /&gt;
&lt;br /&gt;
This course deals with systems in which the presence of impurities or amorphous structures (in other words, of disorder) radically influences the physics, generating novel phenomena. These phenomena involve the properties of the system at equilibrium (freezing and glass transitions), as well as its dynamical evolution out of equilibrium (pinning, avalanches), giving rise to ergodicity breaking both in the absence and in the presence of quantum fluctuations (classical metastability, quantum localization). We discuss the main statistical-physics models that capture the phenomenology of these systems, as well as the powerful theoretical tools (replica theory, large deviations, random matrix theory, scaling arguments, strong-disorder expansions) developed to characterize their physics quantitatively. These tools nowadays have a huge impact in a variety of fields well beyond statistical physics (computer science, probability, condensed matter, theoretical biology). Below is a list of topics discussed during the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Finite-dimensional disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  Introduction to disordered systems and to the spin glass transition.&lt;br /&gt;
*  Interface growth. Directed polymers in random media. Scenarios for the glass transition: the KPZ glass transition in d&amp;gt;2. &lt;br /&gt;
*  Depinning and avalanches. Bienaymé-Galton-Watson processes. &lt;br /&gt;
*  Anderson localization: introduction. Localization in 1D: transfer matrix and the Lyapunov exponent. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mean-field disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  The simplest spin-glass: solution of the Random Energy Model.&lt;br /&gt;
*  The replica method: the solution of the spherical p-spin model. Sketch of the solution of the Sherrington–Kirkpatrick model (full RSB).&lt;br /&gt;
*  Towards glassy dynamics: rugged landscapes. Slow dynamics and aging: the trap model.&lt;br /&gt;
*  The Anderson model on the Bethe lattice: the mobility edge.&lt;br /&gt;
&lt;br /&gt;
=Lectures and tutorials=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the layout of the formulas is bad, it might be because you are using Safari. Try opening the wiki with Firefox or Chrome.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  14h00-15h45&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  16h00-17h45&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 1 (19/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L1_ICTS| Spin Glass Transition (Alberto)]]&lt;br /&gt;
&amp;lt;!--[[H_1|Exercises on Extreme Values Statistics]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-I|  A dictionary. The REM: energy landscape (Valentina)]]&amp;amp;nbsp; [[Media:2025 P1 solutions.pdf| Sol Prob.1 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 2 (26/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L2_ICFP| Stochastic Interfaces and growth (Alberto)]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-2|The REM: freezing, condensation, glassiness (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P2 solutions.pdf| Sol Prob.2 ]]  &lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-valign=“top&amp;quot;&lt;br /&gt;
| Week 3 (02/02)&lt;br /&gt;
|&lt;br /&gt;
* [[L-3|Directed polymer in random media (Alberto)]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-4| KPZ and glassiness in finite dimension (Alberto)]]  [https://colab.research.google.com/drive/1PTya42ZS2kU87A-BxQFFIUDTs_k47men?usp=sharing notebook]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 4 (9/02) and Week 5 (16/02)&lt;br /&gt;
|&lt;br /&gt;
* [[T-3| Spin glasses, equilibrium: replicas, the steps (Valentina)]]&amp;amp;nbsp;  [[Media:2025 P3 solutions.pdf| Sol Prob.3 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-4| Spin glasses, equilibrium: replicas, the interpretation (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P4 solutions.pdf| Sol Probs. 4 ]]&amp;amp;nbsp; &lt;br /&gt;
[[Media:2025_Parisi_scheme.pdf| Notes: Probing states with replicas]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 6 (02/03)&lt;br /&gt;
|&lt;br /&gt;
* [[LBan-IV| Driven Disordered Materials  (Alberto)]] [[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[LBan-V| Avalanches in Disordered Materials (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 7 (9/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-7| Anderson localization: introduction (Alberto)]]&lt;br /&gt;
| &lt;br /&gt;
* [[T-5| Rugged landscapes: counting metastable states (Valentina)]] &amp;amp;nbsp;[[Media:2025 P5 solutions.pdf| Sol Prob.5 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 8 (16/03)&lt;br /&gt;
|&lt;br /&gt;
* [[T-6| Rugged landscapes: stability of metastable states (Valentina)]] &amp;lt;!--[[Media:2025 P666 solutions .pdf| Sol Prob.6 ]]--&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
* [[T-7| Trap model and aging dynamics (Valentina)]] &amp;lt;!--[[Media:2025 P7 solutions .pdf| Sol Probs.7 ]]--&amp;gt;&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 9 (23/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-8| Localization in 1D, transfer matrix and Lyapunov exponent (Alberto)]]  [https://colab.research.google.com/drive/1ZJ0yvMrtflWNNmPfaRQ8KTteoWfm0bqk?usp=sharing notebook]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-9|Multifractality, tails (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Extra&lt;br /&gt;
|&lt;br /&gt;
* [[T-8| Localization on Bethe lattice: cavity &amp;amp; recursion (Valentina)]] &amp;lt;!--[[Media:2025 P8 solutions.pdf| Sol Prob.8 ]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-9| Localization on Bethe lattice: stability &amp;amp; mobility edge (Valentina)]] &amp;lt;!--[[Media:2025 P9 solutions.pdf| Sol Prob.9 ]]&amp;amp;nbsp;&lt;br /&gt;
[[Media:2025_localization_notes.pdf| Notes: Localization: no dissipation, no self-bath]]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--== Homework ==&lt;br /&gt;
There are two homeworks: Homework 1 on Random Matrices, and Homework 2 on the topics of the lecture. &lt;br /&gt;
[[Media:RMT_introduction.pdf| Homework 1 on Random Matrices ]]&lt;br /&gt;
[[Media:2025_HW2.pdf| Homework 2 on topics of lectures ]]&lt;br /&gt;
Homework 1 is worth 5 points, Homework 2 is worth 15 points.&lt;br /&gt;
&#039;&#039;&#039;In the final grade calculation, the total score from both assignments will have a weight of 0.25, while the exam will account for 0.75.&#039;&#039;&#039;&lt;br /&gt;
Homework 1 due by Monday, February 17th.&lt;br /&gt;
Homework 2 due by Monday, March 24th.&lt;br /&gt;
&#039;&#039;&#039;Extra&#039;&#039;&#039;&lt;br /&gt;
Here is a notebook on random matrices (made by M. Biroli) with two coding exercises. You can download the notebook from the link below, and use the online platform: [https://jupyter.org/try-jupyter/lab/  Jupyter] to modify the notebook and add the solutions to the two exercises.&lt;br /&gt;
[https://colab.research.google.com/drive/13z_RnRlCq5p3ihDQulOPftqb05nsrTqQ?usp=sharing| Homework 1: notebook]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 1: [[Media:Exercises_1-3.pdf| Exercises 1-3 on extreme value statistics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 2: &lt;br /&gt;
[[Media:Tutorial_and_Exercise_4.pdf| Tutorial and Exercise 4 on random matrices]] &amp;lt;br&amp;gt; &lt;br /&gt;
[[Media:Exercises_5-6.pdf| Exercises 5-6 on the random energy model]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 3: &lt;br /&gt;
[[Media:Exercises 7&amp;amp;8.pdf| Exercises 7-8 on interfaces]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 4: &lt;br /&gt;
[[Media:Exercises 9-10.pdf| Exercises 9-10 on glassiness]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 5 and 6: &lt;br /&gt;
[[Media:11-12_Exercises.pdf| Exercises 11-12 on dynamics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 7: &lt;br /&gt;
[[Media:Exercises_13_15.pdf| Exercises 13-15 on branching and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 8: &lt;br /&gt;
[[XX| Exercises 16-17 on trap model and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
[[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Evaluation and exam =&lt;br /&gt;
&lt;br /&gt;
The exam will be on &#039;&#039;&#039;Monday, March 30th 2026&#039;&#039;&#039;. It will be a written exam, 3 hours long. It consists of two parts:&lt;br /&gt;
&lt;br /&gt;
Part 1: theory questions, see [[Sample questions|HERE]] for some examples. &lt;br /&gt;
&lt;br /&gt;
Part 2: you will be asked to solve pieces of the 17 exercises given to you in advance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You are not allowed to bring any material (printed notes, handwritten notes) nor to use any device during the exam.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
All relevant formulas will be provided in the text of the exam. One printed copy of the WIKI pages will be available for you to consult. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The students have two possibilities:&lt;br /&gt;
&lt;br /&gt;
(1) A final written exam which counts for the total grade.&lt;br /&gt;
&lt;br /&gt;
(2) An homework assignement  + a written exam. The final grade is given by a weighted average of the two grades (the homework counts 1/4 and the written exam 3/4).&lt;br /&gt;
&#039;&#039;&#039;The written exam will be on Monday, March 31st 2025 in the Jussieu campus, Room 101, Tours 14 - 24, from 2pm to 5pm.&#039;&#039;&#039;--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4263</id>
		<title>T-7</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4263"/>
		<updated>2026-03-15T15:35:23Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Problem 7: a simple model for aging */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; The goal of these problems is to understand some features of glassy dynamics (power laws, aging) within a simplified description, the so-called trap model.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; extreme value statistics, asymptotic analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A dynamical dictionary:  out-of-equilibrium, aging ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Correlation Function.png|thumb|right|x160px|Fig. 7 - Behaviour of the correlation function in a system displaying aging.]]&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibrating dynamics.&#039;&#039;&#039; A system evolving with thermal dynamics (e.g. Langevin dynamics) &amp;lt;ins&amp;gt; equilibrates dynamically &amp;lt;/ins&amp;gt; if there is a timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; beyond which the dynamical trajectories sample the configurations of the system &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with the frequency prescribed by the Gibbs–Boltzmann measure, &amp;lt;math&amp;gt; \sim e^{-\beta E(\vec{\sigma})} &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; is the inverse temperature associated with the noise. At equilibrium, one-point functions in time, like the energy of the system, reach a stationary value (the equilibrium value predicted by thermodynamics at that temperature), while two-point functions like the correlation function &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w+ t, t_w)= \frac{1}{N} \sum_{i=1}^N \sigma_i(t_w) \sigma_i(t_w+t)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
are &amp;lt;ins&amp;gt; time-translation invariant&amp;lt;/ins&amp;gt;, meaning that &amp;lt;math&amp;gt; C(t_w+ t, t_w) \sim c(t) &amp;lt;/math&amp;gt; is only a function of the difference between the two times, and does not depend on &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;.&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Out-of-equilibrium and aging.&#039;&#039;&#039; In some systems the equilibration timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; is extremely large, or diverges with some parameter of the model (like &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;), and for very large timescales the dynamics is &amp;lt;ins&amp;gt;out-of-equilibrium&amp;lt;/ins&amp;gt;. In glassy systems, out-of-equilibrium dynamics is often characterized by &amp;lt;ins&amp;gt;aging&amp;lt;/ins&amp;gt;: the relaxation timescale of the system (how slowly it evolves) depends on its age (how long it has evolved so far). Aging can be seen in the behaviour of the correlation function, see Fig. 7: the timescale the system needs to leave the plateau increases with the age &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;, meaning that the system becomes slower as it gets older.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 7: a simple model for aging ===&lt;br /&gt;
&lt;br /&gt;
[[File:Trap.png|thumb|right|x160px|Fig 6.3 - Traps in the trap model.]]&lt;br /&gt;
The trap model is an abstract model for the dynamics in complex landscapes. The configuration space is a collection of &amp;lt;math&amp;gt; M \gg 1 &amp;lt;/math&amp;gt; traps labeled by &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt; (see sketch). The model is random: each trap is associated with a random number &amp;lt;math&amp;gt; \tau_\alpha &amp;lt;/math&amp;gt;, called its mean trapping time. The dynamics is stochastic, consisting of a sequence of jumps between traps: the system spends in trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt; an amount of time with mean value &amp;lt;math&amp;gt; \tau_\alpha&amp;lt;/math&amp;gt;, meaning that the probability of jumping out of the trap in the time interval &amp;lt;math&amp;gt; [t, t+dt]&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt; dt/\tau_\alpha &amp;lt;/math&amp;gt;. When the system exits a trap, it jumps into another one chosen uniformly at random among the &amp;lt;math&amp;gt; M&amp;lt;/math&amp;gt;. The mean trapping times are distributed as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
P_\mu(\tau)= \frac{\mu \tau_0^\mu}{\tau^{1+\mu}} \quad \quad \tau \geq \tau_0 &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; is a parameter. In this problem, we aim to understand the main features of this dynamics. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
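A minimal simulation sketch of this dynamics (an illustration, not part of the problem text; the sampler and variable names are our own choices) draws trapping times by inverse-CDF sampling from the power-law distribution above:

```python
import random

def sample_trapping_time(mu, tau0=1.0):
    # Inverse-CDF sampling from P_mu(tau) = mu * tau0^mu / tau^(1+mu), tau >= tau0:
    # if U is uniform on (0,1), then tau0 * U**(-1/mu) has exactly this law.
    return tau0 * random.random() ** (-1.0 / mu)

random.seed(0)
M = 100_000

# mu above 1: the average trapping time is finite, equal to mu*tau0/(mu - 1).
taus = [sample_trapping_time(3.0) for _ in range(M)]
mean_ergodic = sum(taus) / M  # analytic value: 3/2

# mu below 1: the average diverges, and the deepest trap captures a finite
# fraction of the total time spent, so the empirical mean never stabilizes.
taus = [sample_trapping_time(0.5) for _ in range(M)]
frac_in_deepest = max(taus) / sum(taus)
```

The two regimes of this sampler correspond to the ergodic and non-ergodic phases discussed in the problem.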
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Ergodicity breaking.&amp;lt;/em&amp;gt; Compute the average trapping time (averaging over the traps) and show that there is a critical value of &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; below which it diverges, signalling a non-ergodic phase: the system needs an infinite time to explore the whole configuration space.&lt;br /&gt;
&amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Condensation.&amp;lt;/em&amp;gt; Consider a dynamics running from time &amp;lt;math&amp;gt;t_w&amp;lt;/math&amp;gt; to some later time &amp;lt;math&amp;gt; t_w+ t&amp;lt;/math&amp;gt;: compute the typical value of the maximal trapping time &amp;lt;math&amp;gt; \tau_{\text{max}}(t) &amp;lt;/math&amp;gt; encountered in this time interval, assuming that the system has spent exactly a time &amp;lt;math&amp;gt; \tau_\alpha &amp;lt;/math&amp;gt; in each visited trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt;. Show that in the non-ergodic phase &amp;lt;math&amp;gt; \tau_{\text{max}}(t) \sim t &amp;lt;/math&amp;gt;. Why is this interpretable as a condensation phenomenon? &lt;br /&gt;
&amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt;Aging and weak ergodicity breaking.&amp;lt;/em&amp;gt; Justify why the correlation function for this model can be written as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w + t, t_w)=\Pi(t, t_w), \quad \quad   \Pi(t, t_w)= \text{probability that the system has not jumped in }[t_w, t_w+t].&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
In the non-ergodic regime, one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Pi(t, t_w)= \frac{\sin (\pi \mu)}{\pi}\int_{\frac{t}{t+ t_w}}^1 du (1-u)^{\mu-1}u^{-\mu}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Why is this an indication of aging? Show that &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} C(t_w + t, t_w)=0 \quad \text{ for finite }t_w, \quad \quad \lim_{t_w \to \infty} C(t_w + t, t_w)=1 \quad \text{ for finite }t&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This behaviour is called &amp;quot;weak ergodicity breaking&amp;quot;. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &lt;br /&gt;
 &amp;lt;em&amp;gt;Power laws.&amp;lt;/em&amp;gt; Study the asymptotic behavior of the correlation function for &amp;lt;math&amp;gt; t \ll t_w &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \gg t_w &amp;lt;/math&amp;gt; and show that the dynamics is slow, characterized by power laws (algebraic behaviour). &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
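The divergence of the average trapping time for small &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; and the condensation &amp;lt;math&amp;gt; \tau_{\text{max}}(t) \sim t &amp;lt;/math&amp;gt; can be checked with a short simulation. The sketch below is only illustrative (the helper name simulate_trap_times is ours): it draws the mean trapping times from P_mu(tau) by inversion of the cumulative distribution and, as in the problem, assumes the system spends exactly tau_alpha in each visited trap.

```python
import random

def simulate_trap_times(mu, tau0, total_time, rng):
    # Accumulate trapping times tau_alpha until the elapsed time exceeds
    # total_time, tracking the largest time encountered. Each tau_alpha is
    # drawn from P_mu(tau) = mu * tau0**mu / tau**(1 + mu), tau above tau0,
    # by inverting its cumulative distribution: tau = tau0 * (1 - U)**(-1/mu).
    elapsed, tau_max, n_traps = 0.0, 0.0, 0
    while total_time > elapsed:
        tau = tau0 * (1.0 - rng.random()) ** (-1.0 / mu)
        elapsed += tau
        tau_max = max(tau_max, tau)
        n_traps += 1
    return n_traps, tau_max

rng = random.Random(0)
for t in (1e3, 1e5, 1e7):
    n_traps, tau_max = simulate_trap_times(mu=0.5, tau0=1.0, total_time=t, rng=rng)
    print(t, n_traps, tau_max / t)
```

For mu = 0.5 the printed ratio tau_max / t stays of order one as t grows, so a single trap captures a finite fraction of the total time; repeating the run with mu above 1 the ratio decays instead, consistent with the ergodic phase.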
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!----&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--=== Problem 7.2: Motivating the model: from landscapes to traps ===&lt;br /&gt;
&amp;lt;!--and spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model. While for the &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we think about Langevin dynamics, for the REM we consider Monte Carlo dynamics: at each time step the system in a given configuration &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with energy &amp;lt;math&amp;gt; E_1 &amp;lt;/math&amp;gt; tries to transition to another configuration that differs with respect to the previous one by a single spin flip; let the energy of this second configuration be &amp;lt;math&amp;gt; E_2 &amp;lt;/math&amp;gt;. The transition occurs with probability one if &amp;lt;math&amp;gt; E_2 &amp;lt;E_1 &amp;lt;/math&amp;gt;, and with probability &amp;lt;math&amp;gt; e^{-\beta (E_2- E_1)}&amp;lt;/math&amp;gt;  otherwise.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: distribution of depths of traps.&amp;lt;/em&amp;gt; In the REM, the energy levels are independent Gaussian variables. In Lecture 1, we have shown that the Ground State &amp;lt;math&amp;gt; E_{\min} &amp;lt;/math&amp;gt; has the statistics of &amp;lt;math&amp;gt; E_{\min }=E_{\min }^{\rm typ}+ \frac{1}{\sqrt{2 \log 2}}z &amp;lt;/math&amp;gt;, with &amp;lt;math&amp;gt; z &amp;lt;/math&amp;gt; a Gumbel random variable. The distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt; of the smallest energy values &amp;lt;math&amp;gt; E_\alpha &amp;lt;/math&amp;gt; among the &amp;lt;math&amp;gt; M=2^N &amp;lt;/math&amp;gt; configurations can be assumed to be the same. Show that:&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_N^{\text{extrm}}(E) \approx C_N \text{exp}\left[ \sqrt{2\log 2} E  \right], \quad \quad E&amp;lt;0, \quad \quad C_N \text{ normalization}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
(Hint: approximate the Gumbel distribution for small argument).  &amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: trapping times.&amp;lt;/em&amp;gt; The Arrhenius law states that the time needed for the system to escape from a trap of energy density &amp;lt;math&amp;gt; \epsilon&amp;lt;0 &amp;lt;/math&amp;gt; and reach a configuration of zero energy density is &amp;lt;math&amp;gt; \tau \sim e^{-\beta N \epsilon} &amp;lt;/math&amp;gt;. This is a trapping time. Given the energy distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt;, determine the distribution of trapping times &amp;lt;math&amp;gt; P_\mu(\tau) &amp;lt;/math&amp;gt;: what plays the role of &amp;lt;math&amp;gt; \mu&amp;lt;/math&amp;gt;? Is the non-ergodic transition in the TRAP model consistent with what we know about the REM? &amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Extra: p-spin and the “trap” picture.&amp;lt;/em&amp;gt; In Problems 6, we have seen that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin is characterized by the threshold energy, below which plenty of minima appear. Explain why the trap model corresponds to the following picture for the dynamics: the system is trapped into minima below the threshold for exponentially large times, and then jumps from minimum to minimum passing through the threshold energy. &lt;br /&gt;
 &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts and exercises ==&lt;br /&gt;
&lt;br /&gt;
Key concepts: aging, activation, time-translation invariance, out-of-equilibrium dynamics, power laws, decorrelation, condensation, extreme values.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In &amp;lt;code&amp;gt;Exercise 13&amp;lt;/code&amp;gt;, you will see in which sense the trap model is a good effective model to describe a dynamics exploring a complicated energy landscape with many metastable states, focusing on the REM landscape as an example.&lt;br /&gt;
&lt;br /&gt;
== To know more ==&lt;br /&gt;
* Bouchaud. Weak ergodicity breaking and aging in disordered systems [https://hal.science/jpa-00246652/document]&lt;br /&gt;
* Biroli. A crash course on aging [https://arxiv.org/abs/cond-mat/0504681]&lt;br /&gt;
* Kurchan. Six out-of-equilibrium lectures [https://arxiv.org/abs/0901.1271]&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4262</id>
		<title>T-7</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4262"/>
		<updated>2026-03-15T15:34:14Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Problem 7: a simple model for aging */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; The goal of these problems is to understand some features of glassy dynamics (power laws, aging) in a simplified description, the so-called trap model.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; extreme value statistics, asymptotic analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A dynamical dictionary:  out-of-equilibrium, aging ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Correlation Function.png|thumb|right|x160px|Fig. 7 - Behaviour of the correlation function in a system displaying aging.]]&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibrating dynamics.&#039;&#039;&#039; A system evolving with thermal dynamics (e.g. Langevin dynamics) &amp;lt;ins&amp;gt; equilibrates dynamically &amp;lt;/ins&amp;gt; if there is a timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; beyond which the dynamical trajectories sample the configurations of the system &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with the frequency prescribed by the Gibbs-Boltzmann measure, &amp;lt;math&amp;gt; \sim e^{-\beta E(\vec{\sigma})} &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; is the inverse temperature associated with the noise. At equilibrium, one-point functions in time, like the energy of the system, reach a stationary value (the equilibrium value predicted by thermodynamics at that temperature), while two-point functions like the correlation function &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w+ t, t_w)= \frac{1}{N} \sum_{i=1}^N \sigma_i(t_w) \sigma_i(t_w+t)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
are &amp;lt;ins&amp;gt; time-translation invariant&amp;lt;/ins&amp;gt;, meaning that &amp;lt;math&amp;gt; C(t_w+ t, t_w) \sim c(t) &amp;lt;/math&amp;gt; is only a function of the difference between the two times, and does not depend on &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;.&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Out-of-equilibrium and aging.&#039;&#039;&#039; In some systems the equilibration timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; is extremely large, or diverges with some parameter of the model (like &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;), and on very large timescales the dynamics is &amp;lt;ins&amp;gt;out-of-equilibrium&amp;lt;/ins&amp;gt;. In glassy systems, out-of-equilibrium dynamics is often characterized by &amp;lt;ins&amp;gt;aging&amp;lt;/ins&amp;gt;: the relaxation timescale of the system (how slowly it evolves) depends on the age of the system itself (on how long it has evolved so far). Aging can be seen in the behaviour of the correlation function, see Fig 7: the timescale that the system needs to leave the plateau increases with the age of the system &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;, meaning that the system becomes slower and slower as it gets older.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 7: a simple model for aging ===&lt;br /&gt;
&lt;br /&gt;
[[File:Trap.png|thumb|right|x160px|Fig 6.3 - Traps in the trap model.]]&lt;br /&gt;
The trap model is an abstract model for the dynamics in complex landscapes studied in &amp;lt;sup&amp;gt;[[#Notes|[1] ]]&amp;lt;/sup&amp;gt;. The configuration space is a collection of &amp;lt;math&amp;gt; M \gg 1 &amp;lt;/math&amp;gt; traps labeled by &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt; (see sketch). The model is random, since each trap is associated with a random number &amp;lt;math&amp;gt; \tau_\alpha &amp;lt;/math&amp;gt; that is called the mean trapping time. The dynamics is stochastic: it is a sequence of jumps between the traps, where the system spends a random amount of time, with mean value &amp;lt;math&amp;gt; \tau_\alpha&amp;lt;/math&amp;gt;, in a trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt;. This means that the probability to jump out of the trap in the time interval &amp;lt;math&amp;gt; [t, t+dt]&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt; dt/\tau_\alpha &amp;lt;/math&amp;gt;. When the system exits the trap, it jumps into another one randomly chosen among the &amp;lt;math&amp;gt; M&amp;lt;/math&amp;gt;. The mean trapping times are distributed as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
P_\mu(\tau)= \frac{\mu \tau_0^\mu}{\tau^{1+\mu}} \quad \quad \tau \geq \tau_0 &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; is a parameter.  In this Problem, we aim at understanding the main features of this dynamics. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Ergodicity breaking.&amp;lt;/em&amp;gt; Compute the average trapping time (averaging over the traps) and show that there is a critical value of &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; below which it diverges, signalling a non-ergodic phase: the system needs infinite time to explore the whole configuration space.&lt;br /&gt;
&amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Condensation.&amp;lt;/em&amp;gt; Consider a dynamics running from time &amp;lt;math&amp;gt;t_w&amp;lt;/math&amp;gt; to some later time &amp;lt;math&amp;gt; t_w+ t&amp;lt;/math&amp;gt;: compute the typical value of the maximal trapping time &amp;lt;math&amp;gt; \tau_{\text{max}}(t) &amp;lt;/math&amp;gt; encountered in this time interval, assuming that the system has spent exactly a time &amp;lt;math&amp;gt; \tau_\alpha &amp;lt;/math&amp;gt; in each visited trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt;. Show that in the non-ergodic phase &amp;lt;math&amp;gt; \tau_{\text{max}}(t) \sim t &amp;lt;/math&amp;gt;. Why is this interpretable as a condensation phenomenon? &lt;br /&gt;
&amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt;Aging and weak ergodicity breaking.&amp;lt;/em&amp;gt; Justify why the correlation function for this model can be written as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w + t, t_w)=\Pi(t, t_w), \quad \quad   \Pi(t, t_w)= \text{probability that the system has not jumped in }[t_w, t_w+t].&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
In the non-ergodic regime, one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Pi(t, t_w)= \frac{\sin (\pi \mu)}{\pi}\int_{\frac{t}{t+ t_w}}^1 du (1-u)^{\mu-1}u^{-\mu}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Why is this an indication of aging? Show that &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} C(t_w + t, t_w)=0 \quad \text{ for finite }t_w, \quad \quad \lim_{t_w \to \infty} C(t_w + t, t_w)=1 \quad \text{ for finite }t&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This behaviour is called &amp;quot;weak ergodicity breaking&amp;quot;. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &lt;br /&gt;
 &amp;lt;em&amp;gt;Power laws.&amp;lt;/em&amp;gt; Study the asymptotic behavior of the correlation function for &amp;lt;math&amp;gt; t \ll t_w &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \gg t_w &amp;lt;/math&amp;gt; and show that the dynamics is slow, characterized by power laws (algebraic behaviour). &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
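A quick way to build intuition for the formula for &amp;lt;math&amp;gt; \Pi(t, t_w) &amp;lt;/math&amp;gt; is to evaluate it in a case where the integral closes: for mu = 1/2 one has sin(pi mu)/pi = 1/pi, the integrand is 1/sqrt(u(1-u)), and an antiderivative is 2 arcsin(sqrt(u)), so Pi = 1 - (2/pi) arcsin(sqrt(t/(t+t_w))). A minimal numerical sketch (the helper name Pi_half is ours):

```python
import math

def Pi_half(t, tw):
    # Pi(t, t_w) for mu = 1/2, using the closed form of the integral:
    # Pi = 1 - (2/pi) * arcsin( sqrt( t / (t + t_w) ) )
    x = t / (t + tw)
    return 1.0 - (2.0 / math.pi) * math.asin(math.sqrt(x))

# Aging: Pi depends on its two arguments only through the ratio t/t_w,
# so an older system needs a proportionally longer time to decorrelate.
print(Pi_half(1, 10), Pi_half(100, 1000))   # equal values

# Weak ergodicity breaking: the two limits of C(t_w + t, t_w) = Pi(t, t_w)
print(Pi_half(1e8, 1.0))   # t large at fixed t_w: close to 0
print(Pi_half(1.0, 1e8))   # t_w large at fixed t: close to 1
```

This makes both limits of the problem visible: at fixed age the system always decorrelates eventually, while an arbitrarily old system has, with probability close to one, not yet jumped after any fixed t.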
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!----&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--=== Problem 7.2: Motivating the model: from landscapes to traps ===&lt;br /&gt;
&amp;lt;!--and spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model. While for the &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we think about Langevin dynamics, for the REM we consider Monte Carlo dynamics: at each time step the system in a given configuration &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with energy &amp;lt;math&amp;gt; E_1 &amp;lt;/math&amp;gt; tries to transition to another configuration that differs with respect to the previous one by a single spin flip; let the energy of this second configuration be &amp;lt;math&amp;gt; E_2 &amp;lt;/math&amp;gt;. The transition occurs with probability one if &amp;lt;math&amp;gt; E_2 &amp;lt;E_1 &amp;lt;/math&amp;gt;, and with probability &amp;lt;math&amp;gt; e^{-\beta (E_2- E_1)}&amp;lt;/math&amp;gt;  otherwise.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: distribution of depths of traps.&amp;lt;/em&amp;gt; In the REM, the energy levels are independent Gaussian variables. In Lecture 1, we have shown that the Ground State &amp;lt;math&amp;gt; E_{\min} &amp;lt;/math&amp;gt; has the statistics of &amp;lt;math&amp;gt; E_{\min }=E_{\min }^{\rm typ}+ \frac{1}{\sqrt{2 \log 2}}z &amp;lt;/math&amp;gt;, with &amp;lt;math&amp;gt; z &amp;lt;/math&amp;gt; a Gumbel random variable. The distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt; of the smallest energy values &amp;lt;math&amp;gt; E_\alpha &amp;lt;/math&amp;gt; among the &amp;lt;math&amp;gt; M=2^N &amp;lt;/math&amp;gt; configurations can be assumed to be the same. Show that:&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_N^{\text{extrm}}(E) \approx C_N \text{exp}\left[ \sqrt{2\log 2} E  \right], \quad \quad E&amp;lt;0, \quad \quad C_N \text{ normalization}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
(Hint: approximate the Gumbel distribution for small argument).  &amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: trapping times.&amp;lt;/em&amp;gt; The Arrhenius law states that the time needed for the system to escape from a trap of energy density &amp;lt;math&amp;gt; \epsilon&amp;lt;0 &amp;lt;/math&amp;gt; and reach a configuration of zero energy density is &amp;lt;math&amp;gt; \tau \sim e^{-\beta N \epsilon} &amp;lt;/math&amp;gt;. This is a trapping time. Given the energy distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt;, determine the distribution of trapping times &amp;lt;math&amp;gt; P_\mu(\tau) &amp;lt;/math&amp;gt;: what plays the role of &amp;lt;math&amp;gt; \mu&amp;lt;/math&amp;gt;? Is the non-ergodic transition in the TRAP model consistent with what we know about the REM? &amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Extra: p-spin and the “trap” picture.&amp;lt;/em&amp;gt; In Problems 6, we have seen that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin is characterized by the threshold energy, below which plenty of minima appear. Explain why the trap model corresponds to the following picture for the dynamics: the system is trapped into minima below the threshold for exponentially large times, and then jumps from minimum to minimum passing through the threshold energy. &lt;br /&gt;
 &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts and exercises ==&lt;br /&gt;
&lt;br /&gt;
Key concepts: aging, activation, time-translation invariance, out-of-equilibrium dynamics, power laws, decorrelation, condensation, extreme values.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In &amp;lt;code&amp;gt;Exercise 13&amp;lt;/code&amp;gt;, you will see in which sense the trap model is a good effective model to describe a dynamics exploring a complicated energy landscape with many metastable states, focusing on the REM landscape as an example.&lt;br /&gt;
&lt;br /&gt;
== To know more ==&lt;br /&gt;
* Bouchaud. Weak ergodicity breaking and aging in disordered systems [https://hal.science/jpa-00246652/document]&lt;br /&gt;
* Biroli. A crash course on aging [https://arxiv.org/abs/cond-mat/0504681]&lt;br /&gt;
* Kurchan. Six out-of-equilibrium lectures [https://arxiv.org/abs/0901.1271]&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4261</id>
		<title>T-7</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4261"/>
		<updated>2026-03-15T15:29:01Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* A dynamical dictionary:  out-of-equilibrium, aging */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; The goal of these problems is to understand some features of glassy dynamics (power laws, aging) in a simplified description, the so-called trap model.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; extreme value statistics, asymptotic analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A dynamical dictionary:  out-of-equilibrium, aging ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Correlation Function.png|thumb|right|x160px|Fig. 7 - Behaviour of the correlation function in a system displaying aging.]]&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibrating dynamics.&#039;&#039;&#039; A system evolving with thermal dynamics (e.g. Langevin dynamics) &amp;lt;ins&amp;gt; equilibrates dynamically &amp;lt;/ins&amp;gt; if there is a timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; beyond which the dynamical trajectories sample the configurations of the system &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with the frequency prescribed by the Gibbs-Boltzmann measure, &amp;lt;math&amp;gt; \sim e^{-\beta E(\vec{\sigma})} &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; is the inverse temperature associated with the noise. At equilibrium, one-point functions in time, like the energy of the system, reach a stationary value (the equilibrium value predicted by thermodynamics at that temperature), while two-point functions like the correlation function &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w+ t, t_w)= \frac{1}{N} \sum_{i=1}^N \sigma_i(t_w) \sigma_i(t_w+t)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
are &amp;lt;ins&amp;gt; time-translation invariant&amp;lt;/ins&amp;gt;, meaning that &amp;lt;math&amp;gt; C(t_w+ t, t_w) \sim c(t) &amp;lt;/math&amp;gt; is only a function of the difference between the two times, and does not depend on &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;.&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Out-of-equilibrium and aging.&#039;&#039;&#039; In some systems the equilibration timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; is extremely large, or diverges with some parameter of the model (like &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;), and on very large timescales the dynamics is &amp;lt;ins&amp;gt;out-of-equilibrium&amp;lt;/ins&amp;gt;. In glassy systems, out-of-equilibrium dynamics is often characterized by &amp;lt;ins&amp;gt;aging&amp;lt;/ins&amp;gt;: the relaxation timescale of the system (how slowly it evolves) depends on the age of the system itself (on how long it has evolved so far). Aging can be seen in the behaviour of the correlation function, see Fig 7: the timescale that the system needs to leave the plateau increases with the age of the system &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;, meaning that the system becomes slower and slower as it gets older.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 7: a simple model for aging ===&lt;br /&gt;
&lt;br /&gt;
[[File:Trap.png|thumb|right|x160px|Fig 6.3 - Traps in the trap model.]]&lt;br /&gt;
The trap model is an abstract model for the dynamics in complex landscapes studied in &amp;lt;sup&amp;gt;[[#Notes|[1] ]]&amp;lt;/sup&amp;gt;. The configuration space is a collection of &amp;lt;math&amp;gt; M \gg 1 &amp;lt;/math&amp;gt; traps labeled by &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt; having random depths/energies (see sketch). The dynamics is a sequence of jumps between the traps: the system spends in a trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt; an exponentially distributed time with average &amp;lt;math&amp;gt; \tau_\alpha&amp;lt;/math&amp;gt; (the probability to jump out of the trap in the time interval &amp;lt;math&amp;gt; [t, t+dt]&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt; dt/\tau_\alpha &amp;lt;/math&amp;gt;). When the system exits the trap, it jumps into another one randomly chosen among the &amp;lt;math&amp;gt; M&amp;lt;/math&amp;gt;. The average times are distributed as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
P_\mu(\tau)= \frac{\mu \tau_0^\mu}{\tau^{1+\mu}} \quad \quad \tau \geq \tau_0 &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; is a parameter.  In this exercise, we aim at understanding the main features of this dynamics. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Ergodicity breaking and condensation.&amp;lt;/em&amp;gt; Compute the average trapping time (averaging over the traps) and show that there is a critical value of &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; below which it diverges, signalling a non-ergodic phase (the system needs infinite time to explore the whole configuration space). Consider a dynamics running from time &amp;lt;math&amp;gt;t_w&amp;lt;/math&amp;gt; to some later time &amp;lt;math&amp;gt; t_w+ t&amp;lt;/math&amp;gt;: compute the typical value of the maximal trapping time &amp;lt;math&amp;gt; \tau_{\text{max}}(t) &amp;lt;/math&amp;gt; encountered in this time interval, assuming that the system has spent exactly a time &amp;lt;math&amp;gt; \tau_\alpha &amp;lt;/math&amp;gt; in each visited trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt;. Show that in the non-ergodic phase &amp;lt;math&amp;gt; \tau_{\text{max}}(t) \sim t &amp;lt;/math&amp;gt;. Why is this interpretable as a condensation phenomenon? &lt;br /&gt;
&amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt;Aging and weak ergodicity breaking.&amp;lt;/em&amp;gt; Assume now that each trap represents a collection of microscopic configurations having self-overlap &amp;lt;math&amp;gt;q_{EA}&amp;lt;/math&amp;gt;, and that the overlap between configurations of different traps is &amp;lt;math&amp;gt; q_0 &amp;lt;/math&amp;gt;. Justify why the correlation function can be written as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w + t, t_w)= q_{EA} \Pi(t, t_w)+ q_0 \left(1-\Pi(t, t_w)\right), \quad \quad   \Pi(t, t_w)= \text{probability that the system has not jumped in }[t_w, t_w+t].&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
In the non-ergodic regime, one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Pi(t, t_w)= \frac{\sin (\pi \mu)}{\pi}\int_{\frac{t}{t+ t_w}}^1 du (1-u)^{\mu-1}u^{-\mu}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Why is this an indication of aging? Show that &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} C(t_w + t, t_w)=q_0 \quad \text{ for finite }t_w, \quad \quad \lim_{t_w \to \infty} C(t_w + t, t_w)=q_{EA} \quad \text{ for finite }t&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When &amp;lt;math&amp;gt; q_0=0&amp;lt;/math&amp;gt;, this behaviour is called &amp;quot;weak ergodicity breaking&amp;quot;. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &lt;br /&gt;
&amp;lt;code&amp;gt;Extra. &amp;lt;/code&amp;gt; &amp;lt;em&amp;gt;Power laws.&amp;lt;/em&amp;gt; Study the asymptotic behavior of the correlation function for &amp;lt;math&amp;gt; t \ll t_w &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \gg t_w &amp;lt;/math&amp;gt; and show that the dynamics is slow, characterized by power laws. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!----&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--=== Problem 7.2: Motivating the model: from landscapes to traps ===&lt;br /&gt;
&amp;lt;!--and spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model. While for the &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we think about Langevin dynamics, for the REM we consider Monte Carlo dynamics: at each time step the system in a given configuration &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with energy &amp;lt;math&amp;gt; E_1 &amp;lt;/math&amp;gt; tries to transition to another configuration that differs with respect to the previous one by a single spin flip; let the energy of this second configuration be &amp;lt;math&amp;gt; E_2 &amp;lt;/math&amp;gt;. The transition occurs with probability one if &amp;lt;math&amp;gt; E_2 &amp;lt;E_1 &amp;lt;/math&amp;gt;, and with probability &amp;lt;math&amp;gt; e^{-\beta (E_2- E_1)}&amp;lt;/math&amp;gt;  otherwise.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: distribution of depths of traps.&amp;lt;/em&amp;gt; In the REM, the energy levels are independent Gaussian variables. In Lecture 1, we have shown that the Ground State &amp;lt;math&amp;gt; E_{\min} &amp;lt;/math&amp;gt; has the statistics of &amp;lt;math&amp;gt; E_{\min }=E_{\min }^{\rm typ}+ \frac{1}{\sqrt{2 \log 2}}z &amp;lt;/math&amp;gt;, with &amp;lt;math&amp;gt; z &amp;lt;/math&amp;gt; a Gumbel random variable. The distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt; of the smallest energy values &amp;lt;math&amp;gt; E_\alpha &amp;lt;/math&amp;gt; among the &amp;lt;math&amp;gt; M=2^N &amp;lt;/math&amp;gt; configurations can be assumed to be the same. Show that:&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_N^{\text{extrm}}(E) \approx C_N \text{exp}\left[ \sqrt{2\log 2} E  \right], \quad \quad E&amp;lt;0, \quad \quad C_N \text{ normalization}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
(Hint: approximate the Gumbel distribution for small argument).  &amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: trapping times.&amp;lt;/em&amp;gt; The Arrhenius law states that the time needed for the system to escape from a trap of energy density &amp;lt;math&amp;gt; \epsilon&amp;lt;0 &amp;lt;/math&amp;gt; and reach a configuration of zero energy density is &amp;lt;math&amp;gt; \tau \sim e^{-\beta N \epsilon} &amp;lt;/math&amp;gt;. This is a trapping time. Given the energy distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt;, determine the distribution of trapping times &amp;lt;math&amp;gt; P_\mu(\tau) &amp;lt;/math&amp;gt;: what plays the role of &amp;lt;math&amp;gt; \mu&amp;lt;/math&amp;gt;? Is the non-ergodic transition in the TRAP model consistent with what we know about the REM? &amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Extra: p-spin and the “trap” picture.&amp;lt;/em&amp;gt; In Problems 6, we have seen that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin is characterized by the threshold energy, below which plenty of minima appear. Explain why the trap model corresponds to the following picture for the dynamics: the system is trapped into minima below the threshold for exponentially large times, and then jumps from minimum to minimum passing through the threshold energy. &lt;br /&gt;
 &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts and exercises ==&lt;br /&gt;
&lt;br /&gt;
Key concepts: aging, activation, time-translation invariance, out-of-equilibrium dynamics, power laws, decorrelation, condensation, extreme values.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In &amp;lt;code&amp;gt;Exercise 13&amp;lt;/code&amp;gt;, you will see in which sense the trap model is a good effective model to describe a dynamics exploring a complicated energy landscape with many metastable states, focusing on the REM landscape as an example.&lt;br /&gt;
&lt;br /&gt;
== To know more ==&lt;br /&gt;
* Bouchaud. Weak ergodicity breaking and aging in disordered systems [https://hal.science/jpa-00246652/document]&lt;br /&gt;
* Biroli. A crash course on aging [https://arxiv.org/abs/cond-mat/0504681]&lt;br /&gt;
* Kurchan. Six out-of-equilibrium lectures [https://arxiv.org/abs/0901.1271]&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4260</id>
		<title>T-7</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4260"/>
		<updated>2026-03-15T15:28:25Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* A dynamical dictionary:  energy barriers, out-of-equilibrium, aging */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; The goal of these problems is to understand some features of glassy dynamics (power laws, aging) in a simplified description, the so-called trap model.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; extreme value statistics, asymptotic analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A dynamical dictionary:  out-of-equilibrium, aging ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Correlation Function.png|thumb|right|x160px|Fig 7.2 - Behaviour of the correlation function in a system displaying aging.]]&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibrating dynamics.&#039;&#039;&#039; A system evolving with thermal dynamics (e.g. Langevin dynamics) &amp;lt;ins&amp;gt; equilibrates dynamically &amp;lt;/ins&amp;gt; if there is a timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; beyond which the dynamical trajectories sample the configurations of the system &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with the frequency prescribed by the Gibbs-Boltzmann measure, &amp;lt;math&amp;gt; \sim e^{-\beta E(\vec{\sigma})} &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; is the inverse temperature associated with the noise. At equilibrium, one-point functions in time, like the energy of the system, reach a stationary value (the equilibrium value predicted by thermodynamics at that temperature), while two-point functions like the correlation function &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w+ t, t_w)= \frac{1}{N} \sum_{i=1}^N \sigma_i(t_w) \sigma_i(t_w+t)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
are &amp;lt;ins&amp;gt; time-translation invariant&amp;lt;/ins&amp;gt;, meaning that &amp;lt;math&amp;gt; C(t_w+ t, t_w) \sim c(t) &amp;lt;/math&amp;gt; is only a function of the difference between the two times, and does not depend on &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;.&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Out-of-equilibrium and aging.&#039;&#039;&#039; In some systems the equilibration timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; is extremely large, or diverges with some parameter of the model (like &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;), and for very large time-scales the dynamics is &amp;lt;ins&amp;gt;out-of-equilibrium&amp;lt;/ins&amp;gt;. In glassy systems, out-of-equilibrium dynamics is often characterized by &amp;lt;ins&amp;gt;aging&amp;lt;/ins&amp;gt;: the relaxation timescale of a system (how slowly the system evolves) depends on the age of the system itself (on how long the system has evolved so far). Aging can be seen in the behaviour of the correlation function, see Fig 7.2: the timescale that the system needs to leave the plateau increases with the age of the system &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;, meaning that the system becomes slower and slower as it gets older.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 7: a simple model for aging ===&lt;br /&gt;
&lt;br /&gt;
[[File:Trap.png|thumb|right|x160px|Fig 6.3 - Traps in the trap model.]]&lt;br /&gt;
The trap model is an abstract model for the dynamics in complex landscapes, studied in &amp;lt;sup&amp;gt;[[#Notes|[1] ]]&amp;lt;/sup&amp;gt;. The configuration space is a collection of &amp;lt;math&amp;gt; M \gg 1 &amp;lt;/math&amp;gt; traps labeled by &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt;, having random depths/energies (see sketch). The dynamics is a sequence of jumps between the traps: the system spends in a trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt; an exponentially distributed time with average &amp;lt;math&amp;gt; \tau_\alpha&amp;lt;/math&amp;gt; (the probability to jump out of the trap in the interval &amp;lt;math&amp;gt; [t, t+dt]&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt; dt/\tau_\alpha &amp;lt;/math&amp;gt;). When the system exits a trap, it jumps into another one chosen at random among the &amp;lt;math&amp;gt; M&amp;lt;/math&amp;gt;. The average times are distributed as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
P_\mu(\tau)= \frac{\mu \tau_0^\mu}{\tau^{1+\mu}} \quad \quad \tau \geq \tau_0 &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; is a parameter. In this exercise, we aim to understand the main features of this dynamics. &amp;lt;br&amp;gt;&lt;br /&gt;
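As a quick numerical sanity check of the heavy-tailed trapping times (not part of the original problem set), the following Python sketch samples &amp;lt;math&amp;gt;\tau&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;P_\mu(\tau)&amp;lt;/math&amp;gt; by inverse-CDF sampling and compares the empirical mean with the analytic value &amp;lt;math&amp;gt;\mu \tau_0/(\mu-1)&amp;lt;/math&amp;gt;, which is finite only for &amp;lt;math&amp;gt;\mu &amp;gt; 1&amp;lt;/math&amp;gt;; the sample sizes and seed are illustrative choices.

```python
import random

def sample_tau(mu, tau0=1.0, rng=random):
    # Inverse-CDF sampling: F(tau) = 1 - (tau0/tau)^mu for tau >= tau0,
    # hence tau = tau0 * (1 - u)^(-1/mu) with u uniform in [0, 1).
    return tau0 * (1.0 - rng.random()) ** (-1.0 / mu)

def mean_trapping_time(mu, n=200_000, tau0=1.0, seed=0):
    rng = random.Random(seed)
    return sum(sample_tau(mu, tau0, rng) for _ in range(n)) / n

# mu > 1: the empirical mean converges to mu*tau0/(mu - 1).
print(mean_trapping_time(3.0))   # close to 1.5
# mu < 1: the mean diverges; the empirical average is dominated by the
# largest sampled time and keeps growing with n.
print(mean_trapping_time(0.5))
```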
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Ergodicity breaking and condensation.&amp;lt;/em&amp;gt; Compute the average trapping time (averaged over the traps) and show that there is a critical value of &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; below which it diverges, signalling a non-ergodic phase (the system needs an infinite time to explore the whole configuration space). Consider a dynamics running from time &amp;lt;math&amp;gt;t_w&amp;lt;/math&amp;gt; to some later time &amp;lt;math&amp;gt; t_w+ t&amp;lt;/math&amp;gt;: compute the typical value of the maximal trapping time &amp;lt;math&amp;gt; \tau_{\text{max}}(t) &amp;lt;/math&amp;gt; encountered in this time interval, assuming that the system has spent exactly a time &amp;lt;math&amp;gt; \tau_\alpha &amp;lt;/math&amp;gt; in each visited trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt;. Show that in the non-ergodic phase &amp;lt;math&amp;gt; \tau_{\text{max}}(t) \sim t &amp;lt;/math&amp;gt;. Why can this be interpreted as a condensation phenomenon? &lt;br /&gt;
&amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt;Aging and weak ergodicity breaking.&amp;lt;/em&amp;gt; Assume now that each trap represents a collection of microscopic configurations having self-overlap &amp;lt;math&amp;gt;q_{EA}&amp;lt;/math&amp;gt;. Assume that the overlap between configurations of different traps is &amp;lt;math&amp;gt; q_0 &amp;lt;/math&amp;gt;. Justify why the correlation function can be written as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w + t, t_w)= q_{EA} \Pi(t, t_w)+ q_0 \left(1-\Pi(t, t_w)\right), \quad \quad   \Pi(t, t_w)= \text{probability that the system has not jumped in }[t_w, t_w+t].&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
In the non-ergodic regime, one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Pi(t, t_w)= \frac{\sin (\pi \mu)}{\pi}\int_{\frac{t}{t+ t_w}}^1 du (1-u)^{\mu-1}u^{-\mu}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Why is this an indication of aging? Show that &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} C(t_w + t, t_w)=q_0 \quad \text{ for finite }t_w, \quad \quad \lim_{t_w \to \infty} C(t_w + t, t_w)=q_{EA} \quad \text{ for finite }t&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When &amp;lt;math&amp;gt; q_0=0&amp;lt;/math&amp;gt;, this behaviour is called &amp;quot;weak ergodicity breaking&amp;quot;. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &lt;br /&gt;
&amp;lt;code&amp;gt;Extra. &amp;lt;/code&amp;gt; &amp;lt;em&amp;gt;Power laws.&amp;lt;/em&amp;gt; Study the asymptotic behavior of the correlation function for &amp;lt;math&amp;gt; t \ll t_w &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \gg t_w &amp;lt;/math&amp;gt; and show that the dynamics is slow, characterized by power laws. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
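The aging of &amp;lt;math&amp;gt;\Pi(t, t_w)&amp;lt;/math&amp;gt; discussed in point 2 can be checked with a direct Monte Carlo simulation of the trap model. The sketch below (an illustration with hypothetical parameter choices, not part of the problem set) draws the mean trapping times from &amp;lt;math&amp;gt;P_\mu&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mu &amp;lt; 1&amp;lt;/math&amp;gt;, draws exponential sojourn times, and shows that at fixed &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; an older system is less likely to have jumped.

```python
import random

def pi_no_jump(t, t_w, mu=0.5, tau0=1.0, n_runs=4000, seed=1):
    """Monte Carlo estimate of Pi(t, t_w): the probability that the
    trap model makes no jump during [t_w, t_w + t]."""
    rng = random.Random(seed)
    no_jump = 0
    for _ in range(n_runs):
        clock = 0.0
        while clock <= t_w:
            # mean trapping time of the newly entered trap, drawn from P_mu
            tau_mean = tau0 * (1.0 - rng.random()) ** (-1.0 / mu)
            # actual sojourn time: exponential with mean tau_mean
            clock += rng.expovariate(1.0 / tau_mean)
        # `clock` is now the time of the first jump after t_w
        if clock > t_w + t:
            no_jump += 1
    return no_jump / n_runs

# Aging: at fixed t, the older the system, the more likely it is still trapped.
print(pi_no_jump(t=10.0, t_w=10.0))
print(pi_no_jump(t=10.0, t_w=1000.0))
```

In the non-ergodic phase the second number is clearly larger than the first, in qualitative agreement with the arcsine-type formula for &amp;lt;math&amp;gt;\Pi(t, t_w)&amp;lt;/math&amp;gt; quoted above.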
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!----&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--=== Problem 7.2: Motivating the model: from landscapes to traps ===&lt;br /&gt;
&amp;lt;!--and spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model. While for the &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we think about Langevin dynamics, for the REM we consider Monte Carlo dynamics: at each time step the system in a given configuration &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with energy &amp;lt;math&amp;gt; E_1 &amp;lt;/math&amp;gt; tries to transition to another configuration that differs with respect to the previous one by a single spin flip; let the energy of this second configuration be &amp;lt;math&amp;gt; E_2 &amp;lt;/math&amp;gt;. The transition occurs with probability one if &amp;lt;math&amp;gt; E_2 &amp;lt;E_1 &amp;lt;/math&amp;gt;, and with probability &amp;lt;math&amp;gt; e^{-\beta (E_2- E_1)}&amp;lt;/math&amp;gt;  otherwise.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: distribution of depths of traps.&amp;lt;/em&amp;gt; In the REM, the energy levels are independent Gaussian variables. In Lecture 1, we have shown that the Ground State &amp;lt;math&amp;gt; E_{\min} &amp;lt;/math&amp;gt; has the statistics of &amp;lt;math&amp;gt; E_{\min }=E_{\min }^{\rm typ}+ \frac{1}{\sqrt{2 \log 2}}z &amp;lt;/math&amp;gt;, with  &amp;lt;math&amp;gt; z &amp;lt;/math&amp;gt; Gumbel. The distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt; of the smallest energies values &amp;lt;math&amp;gt; E_\alpha &amp;lt;/math&amp;gt; among the &amp;lt;math&amp;gt; M=2^N &amp;lt;/math&amp;gt; can be assumed to be the same. Show that:&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_N^{\text{extrm}}(E) \approx C_N \text{exp}\left[ \sqrt{2\log 2} E  \right], \quad \quad E&amp;lt;0, \quad \quad C_N \text{ normalization}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
(Hint: approximate the Gumbel distribution for small argument).  &amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: trapping times.&amp;lt;/em&amp;gt; The Arrhenius law states that the time needed for the system to escape from a trap of energy density &amp;lt;math&amp;gt; \epsilon&amp;lt;0 &amp;lt;/math&amp;gt; and reach a configuration of zero energy density is &amp;lt;math&amp;gt; \tau \sim e^{-\beta N \epsilon} &amp;lt;/math&amp;gt;. This is a trapping time. Given the energy distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt;, determine the distribution of trapping times &amp;lt;math&amp;gt; P_\mu(\tau) &amp;lt;/math&amp;gt;: what plays the role of &amp;lt;math&amp;gt; \mu&amp;lt;/math&amp;gt;? Is the non-ergodic transition in the TRAP model consistent with what we know about the REM? &amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Extra: p-spin and the “trap” picture.&amp;lt;/em&amp;gt; In Problems 6, we have seen that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin is characterized by the threshold energy, below which plenty of minima appear. Explain why the trap model corresponds to the following picture for the dynamics: the system is trapped into minima below the threshold for exponentially large times, and then jumps from minimum to minimum passing through the threshold energy. &lt;br /&gt;
 &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts and exercises ==&lt;br /&gt;
&lt;br /&gt;
Key concepts: aging, activation, time-translation invariance, out-of equilibrium dynamics, power laws, decorrelation, condensation, extreme values.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In &amp;lt;code&amp;gt;Exercise 13 &amp;lt;/code&amp;gt;, you will see in which sense the trap model is a good effective model for a dynamics exploring a complicated energy landscape with many metastable states, focusing on the REM landscape as an example.&lt;br /&gt;
&lt;br /&gt;
== To know more ==&lt;br /&gt;
* Bouchaud. Weak ergodicity breaking and aging in disordered systems [https://hal.science/jpa-00246652/document]&lt;br /&gt;
* Biroli. A crash course on aging [https://arxiv.org/abs/cond-mat/0504681]&lt;br /&gt;
* Kurchan. Six out-of-equilibrium lectures [https://arxiv.org/abs/0901.1271]&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4259</id>
		<title>T-7</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4259"/>
		<updated>2026-03-15T15:27:57Z</updated>

		<summary type="html">&lt;p&gt;Ros: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; The goal of these problems is to understand some features of glassy dynamics (power laws, aging) within a simplified description, the so-called trap model.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; extreme value statistics, asymptotic analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A dynamical dictionary:  energy barriers, out-of-equilibrium, aging ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Noise and Langevin dynamics.&#039;&#039;&#039; In problems 5 and 6 we have characterized the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, and shown that it contains plenty of stationary points where gradient descent can get stuck. In the presence of noise, &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- \nabla_\perp E(\vec{\sigma})+ \vec{\eta}(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
the random term kicks the system in random directions in configuration space, allowing it to escape from stationary points.&lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;\vec{\eta}(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated from the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and a constant variance proportional to the temperature.  &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Activation and Arrhenius law.&#039;&#039;&#039; When the noise in the Langevin dynamics is weak (the temperature is small), the dynamics does not get stuck in local minima forever, but only for a very long time. This time depends crucially on the &amp;lt;ins&amp;gt;energy barrier&amp;lt;/ins&amp;gt; that separates the minimum from the other configurations (see Fig 6.1). The &amp;lt;ins&amp;gt;Arrhenius law&amp;lt;/ins&amp;gt; states that the typical timescale &amp;lt;math&amp;gt; \tau&amp;lt;/math&amp;gt; required to escape from a local minimum through a barrier of height &amp;lt;math&amp;gt; \Delta E &amp;lt;/math&amp;gt; under thermal dynamics at inverse temperature &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; scales as &amp;lt;math&amp;gt;\tau \sim \tau_0 e^{\beta \, \Delta E} &amp;lt;/math&amp;gt;. A dynamics made of jumps from minimum to minimum through the crossing of energy barriers is called &amp;lt;ins&amp;gt; activated &amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Correlation Function.png|thumb|right|x160px|Fig 7.2 - Behaviour of the correlation function in a system displaying aging.]]&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibrating dynamics.&#039;&#039;&#039; A system evolving with thermal dynamics (e.g. Langevin dynamics) &amp;lt;ins&amp;gt; equilibrates dynamically &amp;lt;/ins&amp;gt; if there is a timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; beyond which the dynamical trajectories sample the configurations of the system &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with the frequency prescribed by the Gibbs-Boltzmann measure, &amp;lt;math&amp;gt; \sim e^{-\beta E(\vec{\sigma})} &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; is the inverse temperature associated with the noise. At equilibrium, one-point functions in time, like the energy of the system, reach a stationary value (the equilibrium value predicted by thermodynamics at that temperature), while two-point functions like the correlation function &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w+ t, t_w)= \frac{1}{N} \sum_{i=1}^N \sigma_i(t_w) \sigma_i(t_w+t)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
are &amp;lt;ins&amp;gt; time-translation invariant&amp;lt;/ins&amp;gt;, meaning that &amp;lt;math&amp;gt; C(t_w+ t, t_w) \sim c(t) &amp;lt;/math&amp;gt; is only a function of the difference between the two times, and does not depend on &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;.&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Out-of-equilibrium and aging.&#039;&#039;&#039; In some systems the equilibration timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; is extremely large, or diverges with some parameter of the model (like &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;), and for very large time-scales the dynamics is &amp;lt;ins&amp;gt;out-of-equilibrium&amp;lt;/ins&amp;gt;. In glassy systems, out-of-equilibrium dynamics is often characterized by &amp;lt;ins&amp;gt;aging&amp;lt;/ins&amp;gt;: the relaxation timescale of a system (how slowly the system evolves) depends on the age of the system itself (on how long the system has evolved so far). Aging can be seen in the behaviour of the correlation function, see Fig 7.2: the timescale that the system needs to leave the plateau increases with the age of the system &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;, meaning that the system becomes slower and slower as it gets older.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
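The Arrhenius scaling above can be illustrated numerically in a toy setting. The sketch below (a one-dimensional stand-in for the high-dimensional landscape, with illustrative parameters, not part of the course material) integrates the overdamped Langevin equation in the double well &amp;lt;math&amp;gt;V(x)=(x^2-1)^2&amp;lt;/math&amp;gt; with a simple Euler scheme, and measures the mean first-passage time from the minimum at &amp;lt;math&amp;gt;x=-1&amp;lt;/math&amp;gt; to the barrier top at &amp;lt;math&amp;gt;x=0&amp;lt;/math&amp;gt;: lowering the temperature makes the escape exponentially slower.

```python
import math
import random

def escape_time(T, dt=0.005, seed=0, n_runs=30, max_steps=2_000_000):
    """Mean first-passage time from the minimum x = -1 of V(x) = (x^2-1)^2
    to the barrier top x = 0, under the overdamped Langevin equation
        dx = -V'(x) dt + sqrt(2 T) dW,
    illustrating the activated scaling tau ~ exp(+Delta E / T), Delta E = 1."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * T * dt)   # std of the noise increment per step
    total = 0.0
    for _ in range(n_runs):
        x, steps = -1.0, 0
        while x < 0.0 and steps < max_steps:
            # Euler step: drift -V'(x) = -4 x (x^2 - 1), plus thermal noise
            x += -4.0 * x * (x * x - 1.0) * dt + rng.gauss(0.0, sigma)
            steps += 1
        total += steps * dt
    return total / n_runs

print(escape_time(T=0.5))    # fast escape at high temperature
print(escape_time(T=0.25))   # much slower escape at low temperature
```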
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 7: a simple model for aging ===&lt;br /&gt;
&lt;br /&gt;
[[File:Trap.png|thumb|right|x160px|Fig 6.3 - Traps in the trap model.]]&lt;br /&gt;
The trap model is an abstract model for the dynamics in complex landscapes, studied in &amp;lt;sup&amp;gt;[[#Notes|[1] ]]&amp;lt;/sup&amp;gt;. The configuration space is a collection of &amp;lt;math&amp;gt; M \gg 1 &amp;lt;/math&amp;gt; traps labeled by &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt;, having random depths/energies (see sketch). The dynamics is a sequence of jumps between the traps: the system spends in a trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt; an exponentially distributed time with average &amp;lt;math&amp;gt; \tau_\alpha&amp;lt;/math&amp;gt; (the probability to jump out of the trap in the interval &amp;lt;math&amp;gt; [t, t+dt]&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt; dt/\tau_\alpha &amp;lt;/math&amp;gt;). When the system exits a trap, it jumps into another one chosen at random among the &amp;lt;math&amp;gt; M&amp;lt;/math&amp;gt;. The average times are distributed as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
P_\mu(\tau)= \frac{\mu \tau_0^\mu}{\tau^{1+\mu}} \quad \quad \tau \geq \tau_0 &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; is a parameter. In this exercise, we aim to understand the main features of this dynamics. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Ergodicity breaking and condensation.&amp;lt;/em&amp;gt; Compute the average trapping time (averaged over the traps) and show that there is a critical value of &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; below which it diverges, signalling a non-ergodic phase (the system needs an infinite time to explore the whole configuration space). Consider a dynamics running from time &amp;lt;math&amp;gt;t_w&amp;lt;/math&amp;gt; to some later time &amp;lt;math&amp;gt; t_w+ t&amp;lt;/math&amp;gt;: compute the typical value of the maximal trapping time &amp;lt;math&amp;gt; \tau_{\text{max}}(t) &amp;lt;/math&amp;gt; encountered in this time interval, assuming that the system has spent exactly a time &amp;lt;math&amp;gt; \tau_\alpha &amp;lt;/math&amp;gt; in each visited trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt;. Show that in the non-ergodic phase &amp;lt;math&amp;gt; \tau_{\text{max}}(t) \sim t &amp;lt;/math&amp;gt;. Why can this be interpreted as a condensation phenomenon? &lt;br /&gt;
&amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt;Aging and weak ergodicity breaking.&amp;lt;/em&amp;gt; Assume now that each trap represents a collection of microscopic configurations having self-overlap &amp;lt;math&amp;gt;q_{EA}&amp;lt;/math&amp;gt;. Assume that the overlap between configurations of different traps is &amp;lt;math&amp;gt; q_0 &amp;lt;/math&amp;gt;. Justify why the correlation function can be written as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w + t, t_w)= q_{EA} \Pi(t, t_w)+ q_0 \left(1-\Pi(t, t_w)\right), \quad \quad   \Pi(t, t_w)= \text{probability that the system has not jumped in }[t_w, t_w+t].&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
In the non-ergodic regime, one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Pi(t, t_w)= \frac{\sin (\pi \mu)}{\pi}\int_{\frac{t}{t+ t_w}}^1 du (1-u)^{\mu-1}u^{-\mu}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Why is this an indication of aging? Show that &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} C(t_w + t, t_w)=q_0 \quad \text{ for finite }t_w, \quad \quad \lim_{t_w \to \infty} C(t_w + t, t_w)=q_{EA} \quad \text{ for finite }t&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When &amp;lt;math&amp;gt; q_0=0&amp;lt;/math&amp;gt;, this behaviour is called &amp;quot;weak ergodicity breaking&amp;quot;. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &lt;br /&gt;
&amp;lt;code&amp;gt;Extra. &amp;lt;/code&amp;gt; &amp;lt;em&amp;gt;Power laws.&amp;lt;/em&amp;gt; Study the asymptotic behavior of the correlation function for &amp;lt;math&amp;gt; t \ll t_w &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \gg t_w &amp;lt;/math&amp;gt; and show that the dynamics is slow, characterized by power laws. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!----&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--=== Problem 7.2: Motivating the model: from landscapes to traps ===&lt;br /&gt;
&amp;lt;!--and spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model. While for the &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we think about Langevin dynamics, for the REM we consider Monte Carlo dynamics: at each time step the system in a given configuration &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with energy &amp;lt;math&amp;gt; E_1 &amp;lt;/math&amp;gt; tries to transition to another configuration that differs with respect to the previous one by a single spin flip; let the energy of this second configuration be &amp;lt;math&amp;gt; E_2 &amp;lt;/math&amp;gt;. The transition occurs with probability one if &amp;lt;math&amp;gt; E_2 &amp;lt;E_1 &amp;lt;/math&amp;gt;, and with probability &amp;lt;math&amp;gt; e^{-\beta (E_2- E_1)}&amp;lt;/math&amp;gt;  otherwise.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: distribution of depths of traps.&amp;lt;/em&amp;gt; In the REM, the energy levels are independent Gaussian variables. In Lecture 1, we have shown that the Ground State &amp;lt;math&amp;gt; E_{\min} &amp;lt;/math&amp;gt; has the statistics of &amp;lt;math&amp;gt; E_{\min }=E_{\min }^{\rm typ}+ \frac{1}{\sqrt{2 \log 2}}z &amp;lt;/math&amp;gt;, with  &amp;lt;math&amp;gt; z &amp;lt;/math&amp;gt; Gumbel. The distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt; of the smallest energies values &amp;lt;math&amp;gt; E_\alpha &amp;lt;/math&amp;gt; among the &amp;lt;math&amp;gt; M=2^N &amp;lt;/math&amp;gt; can be assumed to be the same. Show that:&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_N^{\text{extrm}}(E) \approx C_N \text{exp}\left[ \sqrt{2\log 2} E  \right], \quad \quad E&amp;lt;0, \quad \quad C_N \text{ normalization}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
(Hint: approximate the Gumbel distribution for small argument).  &amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: trapping times.&amp;lt;/em&amp;gt; The Arrhenius law states that the time needed for the system to escape from a trap of energy density &amp;lt;math&amp;gt; \epsilon&amp;lt;0 &amp;lt;/math&amp;gt; and reach a configuration of zero energy density is &amp;lt;math&amp;gt; \tau \sim e^{-\beta N \epsilon} &amp;lt;/math&amp;gt;. This is a trapping time. Given the energy distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt;, determine the distribution of trapping times &amp;lt;math&amp;gt; P_\mu(\tau) &amp;lt;/math&amp;gt;: what plays the role of &amp;lt;math&amp;gt; \mu&amp;lt;/math&amp;gt;? Is the non-ergodic transition in the TRAP model consistent with what we know about the REM? &amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Extra: p-spin and the “trap” picture.&amp;lt;/em&amp;gt; In Problems 6, we have seen that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin is characterized by the threshold energy, below which plenty of minima appear. Explain why the trap model corresponds to the following picture for the dynamics: the system is trapped into minima below the threshold for exponentially large times, and then jumps from minimum to minimum passing through the threshold energy. &lt;br /&gt;
 &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts and exercises ==&lt;br /&gt;
&lt;br /&gt;
Key concepts: aging, activation, time-translation invariance, out-of equilibrium dynamics, power laws, decorrelation, condensation, extreme values.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In &amp;lt;code&amp;gt;Exercise 13 &amp;lt;/code&amp;gt;, you will see in which sense the trap model is a good effective model for a dynamics exploring a complicated energy landscape with many metastable states, focusing on the REM landscape as an example.&lt;br /&gt;
&lt;br /&gt;
== To know more ==&lt;br /&gt;
* Bouchaud. Weak ergodicity breaking and aging in disordered systems [https://hal.science/jpa-00246652/document]&lt;br /&gt;
* Biroli. A crash course on aging [https://arxiv.org/abs/cond-mat/0504681]&lt;br /&gt;
* Kurchan. Six out-of-equilibrium lectures [https://arxiv.org/abs/0901.1271]&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4258</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4258"/>
		<updated>2026-03-15T15:27:18Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Back to dynamics: quenches, and dynamical transitions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this exploiting results from random matrix theory discussed in the &amp;lt;code&amp;gt;Tutorial and Exercise 4 &amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the distribution of the empirical eigenvalue density has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and equals &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
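Not part of the problem set: the closed form of &amp;lt;math&amp;gt;I_p(\epsilon)&amp;lt;/math&amp;gt; can be cross-checked against a direct numerical evaluation of the integral, and the zero of &amp;lt;math&amp;gt;\Sigma_{\text{a}}(\epsilon)&amp;lt;/math&amp;gt; located by bisection. Below is a minimal numpy sketch for &amp;lt;math&amp;gt;p=3&amp;lt;/math&amp;gt; (all helper names are ours; the integral is evaluated by a midpoint rule after substituting &amp;lt;math&amp;gt;x = \epsilon/\epsilon_{\text{th}} + \cos\varphi&amp;lt;/math&amp;gt;, which absorbs the square root; plotting is omitted).&lt;br /&gt;

```python
import numpy as np

p = 3
eps_th = -2.0 * np.sqrt((p - 1) / p)   # threshold energy density, about -1.633 for p = 3

def I_num(eps, n=400_000):
    # midpoint rule for I_p(eps); x = eps/eps_th + cos(phi) maps the
    # semicircle support onto phi in (0, pi)
    u = eps / eps_th
    phi = (np.arange(n) + 0.5) * np.pi / n
    return 2.0 * np.mean(np.sin(phi) ** 2 * np.log(np.abs(u + np.cos(phi))))

def I_exact(eps):
    # the closed form quoted above
    u = eps / eps_th
    if u < 1.0:                        # i.e. eps > eps_th (both are negative)
        return u ** 2 - 0.5 - np.log(2.0)
    s = np.sqrt(u ** 2 - 1.0)
    return u ** 2 - 0.5 - u * s + np.log(u + s) - np.log(2.0)

def Sigma_a(eps):
    return 0.5 * np.log(4 * np.e * (p - 1)) - eps ** 2 / 2 + I_exact(eps)

# Sigma_a is positive at eps_th and negative at -1.75: bisect for its zero
a, b = -1.75, eps_th
for _ in range(60):
    m = 0.5 * (a + b)
    a, b = (m, b) if Sigma_a(m) < 0 else (a, m)
print("I_p(-1.0): numeric", I_num(-1.0), "vs exact", I_exact(-1.0))
print("Sigma_a vanishes near eps =", 0.5 * (a + b))
```

The sketch finds the zero slightly below &amp;lt;math&amp;gt;\epsilon_{\text{th}}&amp;lt;/math&amp;gt;, as expected for the lower bound discussed above.&lt;br /&gt;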
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt; marginally stable &amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
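A numerical illustration of this transition (our own sketch; we take &amp;lt;math&amp;gt;\sigma^2 = p(p-1)&amp;lt;/math&amp;gt;, the value consistent with &amp;lt;math&amp;gt;2\sigma = p|\epsilon_{\text{th}}|&amp;lt;/math&amp;gt;; compare with your answer to question 1): sample a GOE matrix with the distribution of question 1 and look at the smallest eigenvalue of the shifted matrix &amp;lt;math&amp;gt;M - p \epsilon \mathbb{I}&amp;lt;/math&amp;gt;.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
p, N = 3, 400
sigma2 = p * (p - 1)          # assumed GOE variance, so that 2*sigma = p*|eps_th|
eps_th = -2.0 * np.sqrt((p - 1) / p)

# symmetric matrix with P(M) proportional to exp(-N Tr M^2 / (4 sigma^2)):
# off-diagonal variance sigma^2/N, diagonal variance 2 sigma^2/N
A = rng.normal(0.0, np.sqrt(sigma2 / (2 * N)), size=(N, N))
M = A + A.T

for eps in (-1.2, eps_th, -2.0):
    lam_min = np.linalg.eigvalsh(M - p * eps * np.eye(N)).min()
    print(f"eps = {eps:+.3f}: smallest eigenvalue of M - p*eps*I is {lam_min:+.3f}")
```

For &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\text{th}}&amp;lt;/math&amp;gt; the spectrum extends to negative values (saddles); for &amp;lt;math&amp;gt;\epsilon&amp;lt;\epsilon_{\text{th}}&amp;lt;/math&amp;gt; it is strictly positive (minima); at &amp;lt;math&amp;gt;\epsilon_{\text{th}}&amp;lt;/math&amp;gt; the left edge touches zero, up to &amp;lt;math&amp;gt;O(N^{-2/3})&amp;lt;/math&amp;gt; edge fluctuations.&lt;br /&gt;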
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, while for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if gradient descent gets stuck in a local minimum and we apply a small perturbation to the configuration, gradient descent brings us back to the local minimum. These configurations are &amp;lt;em&amp;gt;trapping&amp;lt;/em&amp;gt;. If we try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, we expect not to reach it easily, as we will be trapped by local minima. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that, starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise, to be sent to zero with a protocol),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
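This claim can be explored at finite size (our own sketch, with a standard normalization: couplings of variance &amp;lt;math&amp;gt;p!/(2N^{p-1})&amp;lt;/math&amp;gt;, for which &amp;lt;math&amp;gt;\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}&amp;lt;/math&amp;gt;; at small &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; one only sees a qualitative plateau of the energy density, not the sharp value &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;).&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, dt, steps = 64, 3, 0.01, 1500

# fully symmetric Gaussian couplings J_{ijk}; the scale sqrt(18)/N gives each
# distinct-index triple variance p!/(2 N^{p-1}) = 3/N^2
G = rng.normal(size=(N, N, N))
J = (G + G.transpose(0, 2, 1) + G.transpose(1, 0, 2) + G.transpose(1, 2, 0)
       + G.transpose(2, 0, 1) + G.transpose(2, 1, 0)) / 6.0
J *= np.sqrt(18.0) / N
Jmat = J.reshape(N, N * N)               # for fast gradient evaluation

def energy_density(s):
    # E(s)/N with E(s) = -(1/6) sum_{ijk} J_{ijk} s_i s_j s_k
    return -np.einsum('ijk,i,j,k', J, s, s, s) / (6.0 * N)

def gradient(s):
    return -(Jmat @ np.outer(s, s).ravel()) / 2.0

s = rng.normal(size=N)
s *= np.sqrt(N) / np.linalg.norm(s)      # start on the sphere |s|^2 = N
e0 = energy_density(s)
for _ in range(steps):
    g = gradient(s)
    g -= (g @ s / N) * s                 # project on the tangent plane
    s = s - dt * g
    s *= np.sqrt(N) / np.linalg.norm(s)  # stay on the sphere
print("energy density: initial", e0, "final", energy_density(s))
```

The energy density decreases and levels off well below zero, in the region of the deepest saddles/minima; the plateau sharpens toward &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; only as &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; grows.&lt;br /&gt;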
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches in temperature and equilibration.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we extract randomly the initial condition of the dynamics, and then we evolve the configuration with Langevin dynamics (gradient descent + noise):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- {\nabla}_\perp E(\vec{\sigma})+ {\vec{\eta}}_\perp(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;{\vec{\eta}}_\perp(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated with the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and constant variance proportional to the temperature. It represents the action of a thermal bath on the system. &lt;br /&gt;
This dynamical protocol is called a  &amp;lt;ins&amp;gt;quench &amp;lt;/ins&amp;gt;. The question we can ask is: does the system equilibrate with the bath under this dynamics? If yes, we should see that&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} = \epsilon_{\rm eq}(T),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where  &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;/math&amp;gt; is the equilibrium energy density at the temperature &amp;lt;math&amp;gt; T &amp;lt;/math&amp;gt;, the same one controlling the strength of the noise. Equilibrating with the bath would indeed imply that at large time the system visits uniformly the equilibrium energy shell. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
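As a sketch of how such a quench is simulated in practice (the Euler–Maruyama discretization below, and the toy quadratic energy used to exercise it, are ours and not part of the course material):&lt;br /&gt;

```python
import numpy as np

def langevin_step(s, grad_E, T, dt, rng):
    # one Euler-Maruyama step of ds/dt = -grad_perp E + eta_perp, with
    # <eta_i(t) eta_j(t')> = 2 T delta_ij delta(t-t'), projected on the
    # tangent plane of the sphere |s|^2 = N
    N = s.size
    def proj(v):
        return v - (v @ s / N) * s       # remove the component along s
    s_new = s - dt * proj(grad_E(s)) + proj(np.sqrt(2 * T * dt) * rng.normal(size=N))
    return s_new * np.sqrt(N) / np.linalg.norm(s_new)   # enforce the constraint

# exercise the step on a toy quadratic energy E(s) = -(s.M s)/2
rng = np.random.default_rng(2)
N, T, dt = 50, 0.5, 1e-3
A = rng.normal(size=(N, N))
M = (A + A.T) / (2 * np.sqrt(N))
s = rng.normal(size=N)
s *= np.sqrt(N) / np.linalg.norm(s)
for _ in range(1000):
    s = langevin_step(s, lambda v: -M @ v, T, dt, rng)
print("|s|^2 / N =", s @ s / N)   # the spherical constraint is preserved
```

The same step function applies to the &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin energy once its gradient is supplied; at &amp;lt;math&amp;gt;T=0&amp;lt;/math&amp;gt; it reduces to projected gradient descent.&lt;br /&gt;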
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Dynamical transition.&#039;&#039;&#039; Now, in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we know that if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, the energy shell has many stationary points, but they are all unstable saddles and do not trap the dynamics. We expect that this energy shell is relatively easy to explore dynamically, and that equilibration takes place. On the other hand, if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, in the equilibrium energy shell and at higher energy, there are exponentially many local minima that trap the dynamics, and we expect that reaching equilibrium configurations will be difficult. This tells us that there exists a critical &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt;, defined by &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\epsilon_{\rm eq}(T_d)=\epsilon_{\rm th},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
such that for &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} \neq \epsilon_{\rm eq}(T).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
The statement above for gradient descent corresponds to the special case &amp;lt;math&amp;gt;T=0&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt; is called the &amp;lt;ins&amp;gt;dynamical transition temperature&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Activated Jump.png|thumb|right|x160px|Fig. 6 - Activated jump across an energy barrier.]]&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibration timescales.&#039;&#039;&#039; Does this mean that when &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;, the system &amp;lt;em&amp;gt;never&amp;lt;/em&amp;gt; equilibrates? This is true only in the limit &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;N &amp;lt;/math&amp;gt; is finite, there is a timescale &amp;lt;math&amp;gt;\tau_{\rm eq}(T, N)&amp;lt;/math&amp;gt; beyond which the system equilibrates. However, this equilibration timescale &lt;br /&gt;
in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin scales as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is again due to the presence of many local minima/metastable states, which are separated by &amp;lt;ins&amp;gt;extensive&amp;lt;/ins&amp;gt; energy barriers. So, when we take &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt; before taking the large-time limit, we do not see equilibration and we find a sharp transition, which becomes a crossover at finite &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Activation and Arrhenius law.&#039;&#039;&#039; Why exponential timescales? When the noise in the Langevin dynamics is weak (the temperature is small), the dynamics gets stuck in local minima for very long times. This time depends crucially on the &amp;lt;em&amp;gt;energy barrier&amp;lt;/em&amp;gt; which separates the minimum from the other configurations (see Fig. 6). The &amp;lt;ins&amp;gt;Arrhenius law&amp;lt;/ins&amp;gt; states that the typical timescale &amp;lt;math&amp;gt; \tau&amp;lt;/math&amp;gt; required to escape from a local minimum through a barrier of height &amp;lt;math&amp;gt; \Delta E &amp;lt;/math&amp;gt; with thermal dynamics at inverse temperature &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; scales as &amp;lt;math&amp;gt;\tau \sim \tau_0 e^{\beta \, \Delta E} &amp;lt;/math&amp;gt;. Since in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we have &amp;lt;math&amp;gt; \Delta E \sim N \;  \Delta \epsilon &amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; \tau_{\rm eq}(T&amp;lt; T_d, N) \sim \tau_0 e^{\beta \, \Delta E}\sim e^{N} &amp;lt;/math&amp;gt;. A dynamics made of jumps from minimum to minimum through the crossing of energy barriers is called &amp;lt;ins&amp;gt;activated dynamics&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
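The Arrhenius scaling is easy to see in the simplest possible setting (our own toy example, not the &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin): overdamped Langevin dynamics in the double well &amp;lt;math&amp;gt;V(x)=(x^2-1)^2/4&amp;lt;/math&amp;gt;, whose barrier between the minimum at &amp;lt;math&amp;gt;x=-1&amp;lt;/math&amp;gt; and the top at &amp;lt;math&amp;gt;x=0&amp;lt;/math&amp;gt; has height &amp;lt;math&amp;gt;\Delta E = 1/4&amp;lt;/math&amp;gt;.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_escape_time(T, n_walkers=50, dt=2e-3, t_max=200.0):
    # mean first-passage time from x = -1 past the barrier top of
    # V(x) = (x^2 - 1)^2 / 4 (barrier height 1/4) at temperature T
    x = np.full(n_walkers, -1.0)
    t_esc = np.full(n_walkers, np.nan)
    for step in range(int(t_max / dt)):
        x = x + dt * (x - x ** 3) + np.sqrt(2 * T * dt) * rng.normal(size=n_walkers)
        crossed = np.isnan(t_esc) & (x > 0.5)   # well past the barrier top
        t_esc[crossed] = step * dt
        if not np.isnan(t_esc).any():
            break
    return np.nanmean(t_esc)

# Arrhenius: tau ~ tau_0 exp(Delta_E / T), so halving T from 0.30 to 0.15
# should multiply the mean escape time by roughly exp(0.25*(1/0.15 - 1/0.30)),
# i.e. a factor of about 2.3
tau_low, tau_high = mean_escape_time(T=0.15), mean_escape_time(T=0.30)
print("mean escape time at T=0.15:", tau_low)
print("mean escape time at T=0.30:", tau_high)
```

With only 50 walkers the measured ratio is noisy; only the exponential trend with &amp;lt;math&amp;gt;1/T&amp;lt;/math&amp;gt; matters here.&lt;br /&gt;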
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4257</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4257"/>
		<updated>2026-03-15T15:25:31Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Back to dynamics: quenches, and dynamical transitions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this by exploiting results from the random matrix theory discussed in &amp;lt;code&amp;gt;Tutorial and Exercise 4&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the distribution of the empirical distribution has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt; where now &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and equals &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt; marginally stable &amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, while for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if gradient descent gets stuck in a local minimum and we apply a small perturbation to the configuration, gradient descent brings us back to the local minimum. These configurations are &amp;lt;em&amp;gt;trapping&amp;lt;/em&amp;gt;. If we try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, we expect not to reach it easily, as we will be trapped by local minima. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that, starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise, to be sent to zero with a protocol),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches in temperature and equilibration.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we extract randomly the initial condition of the dynamics, and then we evolve the configuration with Langevin dynamics (gradient descent + noise):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- {\nabla}_\perp E(\vec{\sigma})+ {\vec{\eta}}_\perp(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;{\vec{\eta}}_\perp(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated with the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and constant variance proportional to the temperature. It represents the action of a thermal bath on the system. &lt;br /&gt;
This dynamical protocol is called a  &amp;lt;ins&amp;gt;quench &amp;lt;/ins&amp;gt;. The question we can ask is: does the system equilibrate with the bath under this dynamics? If yes, we should see that&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} = \epsilon_{\rm eq}(T),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where  &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;/math&amp;gt; is the equilibrium energy density at the temperature &amp;lt;math&amp;gt; T &amp;lt;/math&amp;gt;, the same one controlling the strength of the noise. Equilibrating with the bath would indeed imply that at large time the system visits uniformly the equilibrium energy shell. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Dynamical transition.&#039;&#039;&#039; Now, in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we know that if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, the energy shell has many stationary points, but they are all unstable saddles and do not trap the dynamics. We expect that this energy shell is relatively easy to explore dynamically, and that equilibration takes place. On the other hand, if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, in the equilibrium energy shell and at higher energy, there are exponentially many local minima that trap the dynamics, and we expect that reaching equilibrium configurations will be difficult. This tells us that there exists a critical &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt;, defined by &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\epsilon_{\rm eq}(T_d)=\epsilon_{\rm th},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
such that for &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} \neq \epsilon_{\rm eq}(T).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
The statement above for gradient descent corresponds to the special case &amp;lt;math&amp;gt;T=0&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt; is called the &amp;lt;ins&amp;gt;dynamical transition temperature&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Activated Jump.png|thumb|right|x160px|Fig. 6 - Activated jump across an energy barrier.]]&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibration timescales.&#039;&#039;&#039; Does this mean that when &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;, the system &amp;lt;em&amp;gt;never&amp;lt;/em&amp;gt; equilibrates? This is true only in the limit &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;N &amp;lt;/math&amp;gt; is finite, there is a timescale &amp;lt;math&amp;gt;\tau_{\rm eq}(T, N)&amp;lt;/math&amp;gt; beyond which the system equilibrates. However, this equilibration timescale &lt;br /&gt;
in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin scales as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is again due to the presence of many local minima/metastable states, which are separated by &amp;lt;ins&amp;gt;extensive&amp;lt;/ins&amp;gt; energy barriers. So, when we take &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt; before taking the large-time limit, we do not see equilibration and we find a sharp transition, which becomes a crossover at finite &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Activation and Arrhenius law.&#039;&#039;&#039; Indeed, when the noise in the Langevin dynamics is weak (the temperature is small), the dynamics does not get stuck in local minima forever, but it does for a very long time. This time depends crucially on the &amp;lt;ins&amp;gt;energy barrier&amp;lt;/ins&amp;gt; which separates the minimum from the other configurations (see Fig. 6). The &amp;lt;ins&amp;gt;Arrhenius law&amp;lt;/ins&amp;gt; states that the typical timescale &amp;lt;math&amp;gt; \tau&amp;lt;/math&amp;gt; required to escape from a local minimum through a barrier of height &amp;lt;math&amp;gt; \Delta E &amp;lt;/math&amp;gt; with thermal dynamics at inverse temperature &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; scales as &amp;lt;math&amp;gt;\tau \sim \tau_0 e^{\beta \, \Delta E} &amp;lt;/math&amp;gt;. Since in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we have &amp;lt;math&amp;gt; \Delta E \sim N \;  \Delta \epsilon &amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; \tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N} &amp;lt;/math&amp;gt;. A dynamics made of jumps from minimum to minimum through the crossing of energy barriers is called &amp;lt;ins&amp;gt;activated dynamics&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4256</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4256"/>
		<updated>2026-03-15T15:23:55Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Back to dynamics: quenches, and dynamical transitions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this by exploiting results from the random matrix theory discussed in &amp;lt;code&amp;gt;Tutorial and Exercise 4&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the distribution of the empirical distribution has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt; where now &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and equals &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
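The last step above can be checked with a short numerical sketch (plain Python, no plotting; the helper names and the choice p=3 are ours, not part of the problem): it evaluates the annealed complexity from the explicit expression of I_p(eps) and bisects for the energy density where it vanishes.

```python
import math

def sigma_annealed(eps, p):
    """Annealed complexity Sigma_a(eps) of the spherical p-spin,
    using the explicit piecewise expression of I_p(eps) given above."""
    eth = -2.0 * math.sqrt((p - 1) / p)
    x = eps / eth  # x >= 1 on the branch eps <= eth (both are negative)
    if eps <= eth:
        Ip = (x**2 - 0.5 - x * math.sqrt(x**2 - 1)
              + math.log(x + math.sqrt(x**2 - 1)) - math.log(2.0))
    else:
        Ip = x**2 - 0.5 - math.log(2.0)
    return 0.5 * math.log(4 * math.e * (p - 1)) - eps**2 / 2 + Ip

def gs_lower_bound(p, tol=1e-10):
    """Bisect for the zero of Sigma_a below eps_th: Sigma_a(eps_th) > 0
    and Sigma_a -> -infinity as eps -> -infinity, so the root is bracketed."""
    eth = -2.0 * math.sqrt((p - 1) / p)
    lo, hi = eth - 1.0, eth  # Sigma_a(lo) < 0 < Sigma_a(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sigma_annealed(mid, p) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

p = 3
eth = -2.0 * math.sqrt((p - 1) / p)
eps0 = gs_lower_bound(p)
print(f"eps_th = {eth:.4f}, Sigma_a vanishes at eps0 = {eps0:.4f}")
```

For p=3 the zero falls slightly below the threshold energy density, which is the announced lower bound for the ground state energy density.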
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt; marginally stable &amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
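The transition can be read off from the left edge of the shifted semicircle: the support of the Hessian spectrum is [-p·eps - 2σ, -p·eps + 2σ], and the stationary point is a minimum exactly when the left edge is positive. A minimal sketch, assuming σ² = p(p-1) (the value that reproduces the quoted ε_th; check it against your answer to item 1):

```python
import math

def left_edge(eps, p):
    """Smallest eigenvalue of the Hessian spectrum rho_inf(lambda + p*eps):
    the support is [-p*eps - 2*sigma, -p*eps + 2*sigma].
    Assumes sigma^2 = p*(p-1), consistent with eps_th above."""
    sigma = math.sqrt(p * (p - 1))
    return -p * eps - 2 * sigma

p = 3
eth = -2 * math.sqrt((p - 1) / p)
for eps in (eth - 0.1, eth, eth + 0.1):
    edge = left_edge(eps, p)
    kind = "minimum" if edge > 1e-12 else ("saddle" if edge < -1e-12 else "marginal")
    print(f"eps = {eps:+.3f}: smallest Hessian eigenvalue = {edge:+.3f} -> {kind}")
```

Below ε_th the whole spectrum is positive (stable minima); above it the semicircle crosses zero (saddles); exactly at ε_th the edge touches zero, which is why those points are called marginally stable.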
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if we do gradient descent and get stuck in a local minimum, then after a small perturbation of the configuration, gradient descent brings us back to the local minimum. These configurations are &amp;lt;em&amp;gt;trapping&amp;lt;/em&amp;gt;. If we try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, we expect that we will not be able to reach it easily, as we will be trapped by local minima. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise, to be sent to zero with a protocol),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches in temperature and equilibration.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we draw the initial condition of the dynamics at random, and then we evolve the configuration with Langevin dynamics (gradient descent + noise):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- {\nabla}_\perp E(\vec{\sigma})+ {\vec{\eta}}_\perp(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;{\vec{\eta}}_\perp(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated from the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and variance proportional to the temperature. It represents the action of a thermal bath on the system. &lt;br /&gt;
This dynamical protocol is called a &amp;lt;ins&amp;gt;quench&amp;lt;/ins&amp;gt;. The question we can ask is: does the system equilibrate with the bath under this dynamics? If yes, we should see that&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} = \epsilon_{\rm eq}(T),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;/math&amp;gt; is the equilibrium energy density at the temperature &amp;lt;math&amp;gt; T &amp;lt;/math&amp;gt;, the same one controlling the strength of the noise. Equilibrating with the bath would indeed imply that at large times the system visits uniformly the equilibrium energy shell. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Dynamical transition.&#039;&#039;&#039; Now, in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we know that if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, the energy shell has many stationary points, but they are all unstable saddles and do not trap the dynamics. We expect that this energy shell is relatively easy to explore dynamically, and that equilibration takes place. On the other hand, if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, in the equilibrium energy shell and at higher energy, there are exponentially many local minima that trap the dynamics, and we expect that reaching equilibrium configurations will be difficult. This tells us that there exists a critical &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt;, defined by &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\epsilon_{\rm eq}(T_d)=\epsilon_{\rm th},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
such that for &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} \neq \epsilon_{\rm eq}(T).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
The statement above for gradient descent corresponds to the special case &amp;lt;math&amp;gt;T=0&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt; is called the &amp;lt;ins&amp;gt;dynamical transition temperature&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Activated Jump.png|thumb|right|x160px|Fig. 6 - Activated jump across an energy barrier.]]&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibration timescales.&#039;&#039;&#039; Does this mean that when &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;, the system &amp;lt;em&amp;gt;never&amp;lt;/em&amp;gt; equilibrates? This is true only in the limit &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;N &amp;lt;/math&amp;gt; is finite, there is a timescale &amp;lt;math&amp;gt;\tau_{\rm eq}(T, N)&amp;lt;/math&amp;gt; beyond which the system equilibrates. So, the dynamical transition is a transition only for &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;, and a crossover for finite &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;. However, this equilibration timescale &lt;br /&gt;
in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin scales as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is again due to the presence of many local minima / metastable states, which are separated by &amp;lt;ins&amp;gt;extensive&amp;lt;/ins&amp;gt; energy barriers. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Activation and Arrhenius law.&#039;&#039;&#039; Indeed, when the noise in the Langevin dynamics is weak (the temperature is small), the dynamics does not get stuck in local minima forever, but it does for a very long time. This time depends crucially on the &amp;lt;ins&amp;gt; energy barrier &amp;lt;/ins&amp;gt; which separates the minimum from the other configurations (see Fig. 6). The &amp;lt;ins&amp;gt;Arrhenius law&amp;lt;/ins&amp;gt; states that the typical timescale &amp;lt;math&amp;gt; \tau&amp;lt;/math&amp;gt; required to escape from a local minimum through a barrier of height &amp;lt;math&amp;gt; \Delta E &amp;lt;/math&amp;gt; with thermal dynamics at inverse temperature &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; scales as &amp;lt;math&amp;gt;\tau \sim \tau_0 e^{\beta \, \Delta E} &amp;lt;/math&amp;gt;. Since in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we have &amp;lt;math&amp;gt; \Delta E \sim N \;  \Delta \epsilon &amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; \tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N} &amp;lt;/math&amp;gt;. A dynamics made of jumps from minimum to minimum through the crossing of energy barriers is called &amp;lt;ins&amp;gt; activated dynamics &amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
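The Arrhenius scaling is easy to see in the simplest possible setting: a one-dimensional overdamped Langevin particle in a double well. The sketch below (our own toy example, with the potential V(x) = (x²-1)²/4 chosen so that the barrier height is ΔE = 1/4) measures the mean first-passage time over the barrier at two temperatures; lowering T makes the escape time grow sharply, as e^{βΔE} predicts.

```python
import math, random

random.seed(1)

def V_prime(x):
    """V(x) = (x^2 - 1)^2 / 4: minima at x = +/-1, barrier Delta E = 1/4 at x = 0."""
    return x * (x * x - 1.0)

def escape_time(T, n_samples=20, dt=0.01):
    """Mean first-passage time from the minimum x = -1 over the barrier at x = 0,
    for overdamped Langevin dynamics dx = -V'(x) dt + sqrt(2 T dt) * xi."""
    times = []
    for _ in range(n_samples):
        x, t = -1.0, 0.0
        while x < 0.0:
            x += -V_prime(x) * dt + math.sqrt(2.0 * T * dt) * random.gauss(0.0, 1.0)
            t += dt
        times.append(t)
    return sum(times) / len(times)

tau_warm = escape_time(T=0.20)
tau_cold = escape_time(T=0.10)
print(f"tau(T=0.20) = {tau_warm:.1f}, tau(T=0.10) = {tau_cold:.1f}")
# Arrhenius: log(tau) ~ Delta E / T, so halving T should roughly square tau/tau_0
```

In the p-spin the same mechanism operates, except that the barriers are extensive, ΔE ∼ N Δε, which converts the Arrhenius factor into the e^N equilibration time quoted above.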
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4255</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4255"/>
		<updated>2026-03-15T15:22:50Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Back to dynamics: quenches, and dynamical transitions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this exploiting results from random matrix theory discussed in &amp;lt;code&amp;gt;Tutorial and Exercise 4&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
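The GOE ensemble defined above is easy to sample and check numerically (this echoes the numerical exercise from Tutorial 4). A minimal sketch, assuming NumPy is available and taking σ = 1: the stated weight exp(-N Tr M²/(4σ²)) corresponds to off-diagonal variance σ²/N and diagonal variance 2σ²/N, and the spectrum should fill the semicircle support [-2σ, 2σ].

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 400, 1.0

# P_N(M) ∝ exp(-N Tr M^2 / (4 sigma^2)) for symmetric M means:
# Var(M_ij) = sigma^2/N off the diagonal, Var(M_ii) = 2 sigma^2/N.
# (X + X.T)/sqrt(2N) realizes exactly these variances.
X = rng.standard_normal((N, N))
M = sigma * (X + X.T) / np.sqrt(2.0 * N)

lam = np.linalg.eigvalsh(M)
print(f"spectrum in [{lam.min():.3f}, {lam.max():.3f}] (semicircle edges at +/-{2 * sigma})")
```

A histogram of `lam` against the semicircle density completes the check; the edges converge more slowly (fluctuations of order N^(-2/3)) than the bulk.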
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the probability of the empirical distribution has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt; where now &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it is given by &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt; marginally stable &amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if we do gradient descent and get stuck in a local minimum, then after a small perturbation of the configuration, gradient descent brings us back to the local minimum. These configurations are &amp;lt;ins&amp;gt;trapping&amp;lt;/ins&amp;gt;. If we try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, we expect that we will not be able to reach it easily, as we will be trapped by local minima. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise, to be sent to zero with a protocol),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches in temperature and equilibration.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we draw the initial condition of the dynamics at random, and then we evolve the configuration with Langevin dynamics (gradient descent + noise):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- {\nabla}_\perp E(\vec{\sigma})+ {\vec{\eta}}_\perp(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;{\vec{\eta}}_\perp(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated from the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and variance proportional to the temperature. It represents the action of a thermal bath on the system. &lt;br /&gt;
This dynamical protocol is called a &amp;lt;ins&amp;gt;quench&amp;lt;/ins&amp;gt;. The question we can ask is: does the system equilibrate with the bath under this dynamics? If yes, we should see that&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} = \epsilon_{\rm eq}(T),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;/math&amp;gt; is the equilibrium energy density at the temperature &amp;lt;math&amp;gt; T &amp;lt;/math&amp;gt;, the same one controlling the strength of the noise. Equilibrating with the bath would indeed imply that at large times the system visits uniformly the equilibrium energy shell. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Dynamical transition.&#039;&#039;&#039; Now, in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we know that if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, the energy shell has many stationary points, but they are all unstable saddles and do not trap the dynamics. We expect that this energy shell is relatively easy to explore dynamically, and that equilibration takes place. On the other hand, if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, in the equilibrium energy shell and at higher energy, there are exponentially many local minima that trap the dynamics, and we expect that reaching equilibrium configurations will be difficult. This tells us that there exists a critical &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt;, defined by &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\epsilon_{\rm eq}(T_d)=\epsilon_{\rm th},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
such that for &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} \neq \epsilon_{\rm eq}(T).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
The statement above for gradient descent corresponds to the special case &amp;lt;math&amp;gt;T=0&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt; is called the &amp;lt;ins&amp;gt;dynamical transition temperature&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Activated Jump.png|thumb|right|x160px|Fig. 6 - Activated jump across an energy barrier.]]&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibration timescales.&#039;&#039;&#039; Does this mean that when &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;, the system &amp;lt;em&amp;gt;never&amp;lt;/em&amp;gt; equilibrates? This is true only in the limit &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;N &amp;lt;/math&amp;gt; is finite, there is a timescale &amp;lt;math&amp;gt;\tau_{\rm eq}(T, N)&amp;lt;/math&amp;gt; beyond which the system equilibrates. So, the dynamical transition is a transition only for &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;, and a crossover for finite &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;. However, this equilibration timescale &lt;br /&gt;
in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin scales as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is again due to the presence of many local minima / metastable states, which are separated by &amp;lt;ins&amp;gt;extensive&amp;lt;/ins&amp;gt; energy barriers. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Activation and Arrhenius law.&#039;&#039;&#039; Indeed, when the noise in the Langevin dynamics is weak (the temperature is small), the dynamics does not get stuck in local minima forever, but it does for a very long time. This time depends crucially on the &amp;lt;ins&amp;gt; energy barrier &amp;lt;/ins&amp;gt; which separates the minimum from the other configurations (see Fig. 6). The &amp;lt;ins&amp;gt;Arrhenius law&amp;lt;/ins&amp;gt; states that the typical timescale &amp;lt;math&amp;gt; \tau&amp;lt;/math&amp;gt; required to escape from a local minimum through a barrier of height &amp;lt;math&amp;gt; \Delta E &amp;lt;/math&amp;gt; with thermal dynamics at inverse temperature &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; scales as &amp;lt;math&amp;gt;\tau \sim \tau_0 e^{\beta \, \Delta E} &amp;lt;/math&amp;gt;. Since in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we have &amp;lt;math&amp;gt; \Delta E \sim N \;  \Delta \epsilon &amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; \tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N} &amp;lt;/math&amp;gt;. A dynamics made of jumps from minimum to minimum through the crossing of energy barriers is called &amp;lt;ins&amp;gt; activated dynamics &amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4254</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4254"/>
		<updated>2026-03-15T15:21:28Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Back to dynamics: quenches, and dynamical transitions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this exploiting results from random matrix theory discussed in &amp;lt;code&amp;gt;Tutorial and Exercise 4&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the probability of the empirical distribution has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt; where now &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it is given by &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
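The last step of item 3 can be checked numerically: the closed-form branches of &amp;lt;math&amp;gt;I_p(\epsilon)&amp;lt;/math&amp;gt; above can be coded directly, and the zero of &amp;lt;math&amp;gt;\Sigma_{\text{a}}&amp;lt;/math&amp;gt; located by bisection. A minimal sketch in Python, assuming &amp;lt;math&amp;gt;p=3&amp;lt;/math&amp;gt; for illustration (the function names `I_p` and `sigma_a` are ours):

```python
import numpy as np

def I_p(eps, p):
    """Closed-form I_p(eps), transcribing the two-branch formula above."""
    eth = -2.0 * np.sqrt((p - 1.0) / p)
    u = eps / eth  # u >= 1 on the branch eps <= eth (both are negative)
    if eps <= eth:
        return (u**2 - 0.5 - u * np.sqrt(u**2 - 1)
                + np.log(u + np.sqrt(u**2 - 1)) - np.log(2.0))
    return u**2 - 0.5 - np.log(2.0)

def sigma_a(eps, p):
    """Annealed complexity Sigma_a(eps) from the formula above."""
    return 0.5 * np.log(4.0 * np.e * (p - 1.0)) - eps**2 / 2.0 + I_p(eps, p)

# Bisection for the zero of Sigma_a below eps_th (p = 3 for illustration)
p = 3
eth = -2.0 * np.sqrt((p - 1.0) / p)
lo, hi = -2.0, eth            # Sigma_a(lo) < 0 < Sigma_a(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if sigma_a(mid, p) < 0 else (lo, mid)
eps0 = 0.5 * (lo + hi)
print(eps0)  # the zero of the annealed complexity: a lower bound on eps_gs
```

The root lies slightly below &amp;lt;math&amp;gt;\epsilon_{\text{th}}\approx -1.633&amp;lt;/math&amp;gt;, which is the lower bound on the ground state energy density asked for above.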
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt; marginally stable &amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
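The stability transition of item 4 can also be seen directly on sampled matrices: shift a GOE spectrum by &amp;lt;math&amp;gt;-p\epsilon&amp;lt;/math&amp;gt; and look at the sign of the smallest eigenvalue. A sketch assuming &amp;lt;math&amp;gt;\sigma^2 = p(p-1)&amp;lt;/math&amp;gt; (our assumption: it is the choice that makes the left edge of &amp;lt;math&amp;gt;\rho_\infty(\lambda + p\epsilon)&amp;lt;/math&amp;gt; vanish exactly at &amp;lt;math&amp;gt;\epsilon_{\text{th}}&amp;lt;/math&amp;gt;):

```python
import numpy as np

rng = np.random.default_rng(0)

def goe(N, sigma):
    """Symmetric matrix with density P_N(M) ~ exp(-N Tr M^2 / (4 sigma^2))."""
    G = rng.normal(size=(N, N))
    return sigma * (G + G.T) / np.sqrt(2 * N)

p, N = 3, 400
sigma = np.sqrt(p * (p - 1))        # assumed value (see lead-in)
eps_th = -2 * np.sqrt((p - 1) / p)  # eps_th ~ -1.633 for p = 3

# The Hessian spectrum is the semicircle shifted by -p*eps: sample it and
# record the smallest eigenvalue on either side of the threshold
lam_min = {}
for eps in (-1.2, -1.9):
    H = goe(N - 1, sigma) - p * eps * np.eye(N - 1)
    lam_min[eps] = np.linalg.eigvalsh(H).min()
print(lam_min)  # negative (saddles) above eps_th, positive (minima) below
```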
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, while for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if we perform gradient descent we get stuck in a local minimum, and if we apply a small perturbation to the configuration, gradient descent brings us back to the local minimum. These configurations are &amp;lt;ins&amp;gt;trapping&amp;lt;/ins&amp;gt;. If we try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, we expect that we will not be able to reach it easily, as we will be trapped by local minima. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that, starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise, to be sent to zero with a protocol),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches in temperature and equilibration.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we extract randomly the initial condition of the dynamics, and then we evolve the configuration with Langevin dynamics (gradient descent + noise):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- {\nabla}_\perp E(\vec{\sigma})+ {\vec{\eta}}_\perp(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;{\vec{\eta}}_\perp(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated with the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and variance proportional to the temperature. It represents the action of a thermal bath on the system. &lt;br /&gt;
This dynamical protocol is called a  &amp;lt;ins&amp;gt;quench &amp;lt;/ins&amp;gt;. The question we can ask is: does the system equilibrate with the bath under this dynamics? If yes, we should see that&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} = \epsilon_{\rm eq}(T),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where  &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;/math&amp;gt; is the equilibrium energy density at the temperature &amp;lt;math&amp;gt; T &amp;lt;/math&amp;gt;, the same one controlling the strength of the noise. Equilibrating with the bath would indeed imply that at large time the system visits uniformly the equilibrium energy shell. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
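The quench protocol above can be sketched with a discretized (Euler) version of the projected Langevin equation. Everything below is a toy implementation for &amp;lt;math&amp;gt;p=3&amp;lt;/math&amp;gt;: the coupling normalization (variance &amp;lt;math&amp;gt;p!/2N^{p-1}&amp;lt;/math&amp;gt; for the independent couplings) is a standard convention we assume, not something fixed in the text.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
N, p, T, dt, steps = 32, 3, 0.2, 0.005, 2000

# Symmetric Gaussian couplings (assumed convention, see lead-in)
G = rng.normal(size=(N, N, N))
J = sum(np.transpose(G, perm) for perm in
        [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6.0
J *= math.sqrt(6.0 * math.factorial(p) / (2.0 * N**(p - 1)))

def energy(s):
    return -np.einsum('ijk,i,j,k->', J, s, s, s) / 6.0

def grad(s):
    return -0.5 * np.einsum('ijk,j,k->i', J, s, s)

def project(v, s):
    """Component of v tangent to the sphere |s| = sqrt(N)."""
    return v - (v @ s) * s / N

# Random initial condition on the sphere, then quench at temperature T
s = rng.normal(size=N)
s *= math.sqrt(N) / np.linalg.norm(s)
e0 = energy(s) / N
for _ in range(steps):
    noise = math.sqrt(2.0 * T * dt) * rng.normal(size=N)
    s = s + dt * project(-grad(s), s) + project(noise, s)
    s *= math.sqrt(N) / np.linalg.norm(s)  # enforce the spherical constraint
e1 = energy(s) / N
print(e0, e1)  # the energy density drops well below its random-start value ~ 0
```

At this small &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; one only sees the qualitative trend (the energy density decreasing from &amp;lt;math&amp;gt;\approx 0&amp;lt;/math&amp;gt; towards the threshold region), not the sharp &amp;lt;math&amp;gt;N\to\infty&amp;lt;/math&amp;gt; limits discussed above.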
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Dynamical transition.&#039;&#039;&#039; Now, in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we know that if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, the energy shell has many stationary points, but they are all unstable saddles and do not trap the dynamics. We expect that this energy shell is relatively easy to explore dynamically, and that equilibration takes place. On the other hand, if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, in the equilibrium energy shell and at higher energy, there are exponentially many local minima that trap the dynamics, and we expect that reaching equilibrium configurations will be difficult. This tells us that there exists a critical &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt;, defined by &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\epsilon_{\rm eq}(T_d)=\epsilon_{\rm th},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
such that for &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} \neq \epsilon_{\rm eq}(T).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is called the &amp;lt;ins&amp;gt;dynamical transition temperature&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Activated Jump.png|thumb|right|x160px|Fig. 6 - Activated jump across an energy barrier.]]&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibration timescales.&#039;&#039;&#039; Does this mean that when &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;, the system &amp;lt;em&amp;gt;never&amp;lt;/em&amp;gt; equilibrates? This is true only in the limit &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;N &amp;lt;/math&amp;gt; is finite, there is a timescale &amp;lt;math&amp;gt;\tau_{\rm eq}(T, N)&amp;lt;/math&amp;gt; beyond which the system equilibrates. So, the dynamical transition is a genuine transition only for &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;, and a crossover for finite &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;. However, this equilibration timescale &lt;br /&gt;
in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin scales as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is again due to the presence of many local minima/metastable states, which are separated by &amp;lt;ins&amp;gt;extensive&amp;lt;/ins&amp;gt; energy barriers. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Activation and Arrhenius law.&#039;&#039;&#039; Indeed, when the noise in the Langevin dynamics is weak (the temperature is small), the dynamics does not get stuck in local minima forever, but it does remain trapped for very long times. This time depends crucially on the &amp;lt;ins&amp;gt;energy barrier&amp;lt;/ins&amp;gt; that separates the minimum from the other configurations (see Fig. 6). The &amp;lt;ins&amp;gt;Arrhenius law&amp;lt;/ins&amp;gt; states that the typical timescale &amp;lt;math&amp;gt; \tau&amp;lt;/math&amp;gt; required to escape from a local minimum through a barrier of height &amp;lt;math&amp;gt; \Delta E &amp;lt;/math&amp;gt; with thermal dynamics at inverse temperature &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; scales as &amp;lt;math&amp;gt;\tau \sim \tau_0 e^{\beta \, \Delta E} &amp;lt;/math&amp;gt;. Since in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we have &amp;lt;math&amp;gt; \Delta E \sim N \; \Delta \epsilon &amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; \tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N} &amp;lt;/math&amp;gt;. A dynamics made of jumps from minimum to minimum through the crossing of energy barriers is called &amp;lt;ins&amp;gt;activated dynamics&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
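The Arrhenius scaling is easiest to see on the simplest possible landscape: a one-dimensional double well (our toy choice, not the &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin), where overdamped Langevin dynamics must cross a barrier of height &amp;lt;math&amp;gt;\Delta E = 1&amp;lt;/math&amp;gt; and the mean escape time grows with &amp;lt;math&amp;gt;\beta&amp;lt;/math&amp;gt;:

```python
import numpy as np

rng = np.random.default_rng(2)

def escape_time(beta, n_samples=20, dt=0.01):
    """Mean time for overdamped Langevin dynamics in V(x) = (x^2 - 1)^2 to
    escape from the minimum at x = -1 over the barrier Delta E = V(0) - V(-1) = 1,
    i.e. to first reach the barrier top at x = 0."""
    times = []
    for _ in range(n_samples):
        x, t = -1.0, 0.0
        while x < 0.0:
            drift = -4.0 * x * (x * x - 1.0)  # -V'(x)
            x += drift * dt + np.sqrt(2.0 * dt / beta) * rng.normal()
            t += dt
        times.append(t)
    return float(np.mean(times))

tau_hot, tau_cold = escape_time(beta=2.5), escape_time(beta=4.0)
print(tau_hot, tau_cold)  # the colder system takes much longer to escape
```

Comparing a few values of &amp;lt;math&amp;gt;\beta&amp;lt;/math&amp;gt; shows &amp;lt;math&amp;gt;\log \tau&amp;lt;/math&amp;gt; growing roughly linearly in &amp;lt;math&amp;gt;\beta \Delta E&amp;lt;/math&amp;gt;, as the Arrhenius law predicts.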
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4253</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4253"/>
		<updated>2026-03-15T15:20:53Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Back to dynamics: quenches, and dynamical transitions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this by exploiting results from random matrix theory discussed in the &amp;lt;code&amp;gt;Tutorial and Exercise 4 &amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
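The GOE ensemble of item 1 can be sampled numerically to check the convention on &amp;lt;math&amp;gt;\sigma&amp;lt;/math&amp;gt;. A sketch with the illustrative choice &amp;lt;math&amp;gt;\sigma = 1&amp;lt;/math&amp;gt; (the actual value of &amp;lt;math&amp;gt;\sigma^2&amp;lt;/math&amp;gt; asked in the exercise is left to the reader): with the density &amp;lt;math&amp;gt;P_N(M)&amp;lt;/math&amp;gt; above, off-diagonal entries have variance &amp;lt;math&amp;gt;\sigma^2/N&amp;lt;/math&amp;gt; and diagonal entries &amp;lt;math&amp;gt;2\sigma^2/N&amp;lt;/math&amp;gt;, and the spectrum concentrates on &amp;lt;math&amp;gt;[-2\sigma, 2\sigma]&amp;lt;/math&amp;gt;.

```python
import numpy as np

rng = np.random.default_rng(3)

def goe(N, sigma=1.0):
    """Symmetric matrix with density P_N(M) ~ exp(-N Tr M^2 / (4 sigma^2)):
    off-diagonal variance sigma^2/N, diagonal variance 2 sigma^2/N."""
    G = rng.normal(size=(N, N))
    return sigma * (G + G.T) / np.sqrt(2 * N)

N = 1000
lam = np.linalg.eigvalsh(goe(N))
print(lam.min(), lam.max())  # close to the edges -2 sigma and +2 sigma
```

A histogram of `lam` against the semicircle density gives the numerical check suggested in item 3 below.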
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the probability of the empirical eigenvalue distribution has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt; where now &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it is given by&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt; marginally stable &amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, while for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if we perform gradient descent we get stuck in a local minimum, and if we apply a small perturbation to the configuration, gradient descent brings us back to the local minimum. These configurations are &amp;lt;ins&amp;gt;trapping&amp;lt;/ins&amp;gt;. If we try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, we expect that we will not be able to reach it easily, as we will be trapped by local minima. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that, starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise, to be sent to zero with a protocol),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches in temperature and equilibration.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we extract randomly the initial condition of the dynamics, and then we evolve the configuration with Langevin dynamics (gradient descent + noise):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- {\nabla}_\perp E(\vec{\sigma})+ {\vec{\eta}}_\perp(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;{\vec{\eta}}_\perp(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated with the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and variance proportional to the temperature. It represents the action of a thermal bath on the system. &lt;br /&gt;
This dynamical protocol is called a  &amp;lt;ins&amp;gt;quench &amp;lt;/ins&amp;gt;. The question we can ask is: does the system equilibrate with the bath under this dynamics? If yes, we should see that&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} = \epsilon_{\rm eq}(T),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where  &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;/math&amp;gt; is the equilibrium energy density at the temperature &amp;lt;math&amp;gt; T &amp;lt;/math&amp;gt;, the same one controlling the strength of the noise. Equilibrating with the bath would indeed imply that at large time the system visits uniformly the equilibrium energy shell. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Dynamical transition.&#039;&#039;&#039; Now, in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we know that if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, the energy shell has many stationary points, but they are all unstable saddles and do not trap the dynamics. We expect that this energy shell is relatively easy to explore dynamically, and that equilibration takes place. On the other hand,  if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, in the equilibrium energy shell and at higher energy, there are exponentially many local minima that trap the dynamics, and we expect that reaching equilibrium configurations will be difficult. This tells us that there exists a critical &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt;, defined by &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\epsilon_{\rm eq}(T_d)=\epsilon_{\rm th},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
such that for &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} \neq \epsilon_{\rm eq}(T).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is called the &amp;lt;ins&amp;gt;dynamical transition temperature&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Activated Jump.png|thumb|right|x160px|Fig. 6 - Activated jump across an energy barrier.]]&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibration timescales.&#039;&#039;&#039; Does this mean that when &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;, the system &amp;lt;em&amp;gt;never&amp;lt;/em&amp;gt; equilibrates? This is true only in the limit &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;N &amp;lt;/math&amp;gt; is finite, there is a timescale &amp;lt;math&amp;gt;\tau_{\rm eq}(T, N)&amp;lt;/math&amp;gt; beyond which the system equilibrates. So, the dynamical transition is a genuine transition only for &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;, and a crossover for finite &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;. However, this equilibration timescale &lt;br /&gt;
in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin scales as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is again due to the presence of many local minima/metastable states, which are separated by &amp;lt;ins&amp;gt;extensive&amp;lt;/ins&amp;gt; energy barriers. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Activation and Arrhenius law.&#039;&#039;&#039; Indeed, when the noise in the Langevin dynamics is weak (the temperature is small), the dynamics does not get stuck in local minima forever, but it does remain trapped for very long times. This time depends crucially on the &amp;lt;ins&amp;gt;energy barrier&amp;lt;/ins&amp;gt; that separates the minimum from the other configurations (see Fig. 6). The &amp;lt;ins&amp;gt;Arrhenius law&amp;lt;/ins&amp;gt; states that the typical timescale &amp;lt;math&amp;gt; \tau&amp;lt;/math&amp;gt; required to escape from a local minimum through a barrier of height &amp;lt;math&amp;gt; \Delta E &amp;lt;/math&amp;gt; with thermal dynamics at inverse temperature &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; scales as &amp;lt;math&amp;gt;\tau \sim \tau_0 e^{\beta \, \Delta E} &amp;lt;/math&amp;gt;. Since in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we have &amp;lt;math&amp;gt; \Delta E \sim N \; \Delta \epsilon &amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; \tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N} &amp;lt;/math&amp;gt;. A dynamics made of jumps from minimum to minimum through the crossing of energy barriers is called &amp;lt;ins&amp;gt;activated dynamics&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4252</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4252"/>
		<updated>2026-03-15T15:18:34Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Back to dynamics: quenches, and dynamical transitions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this by exploiting results from random matrix theory discussed in the &amp;lt;code&amp;gt;Tutorial and Exercise 4 &amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the probability of the empirical eigenvalue distribution has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt; where now &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it is given by&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
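The closed form above can be plotted and its zero located numerically. A minimal sketch in Python (the function names are ours, and we take &amp;lt;math&amp;gt;p=3&amp;lt;/math&amp;gt; purely for illustration):&lt;br /&gt;

```python
import numpy as np

def I_p(eps, eps_th):
    """Closed-form I_p(eps), following the case distinction above."""
    x = eps / eps_th  # eps_th < 0, so x >= 1 exactly when eps <= eps_th
    if eps <= eps_th:
        return (x**2 - 0.5 - x * np.sqrt(x**2 - 1)
                + np.log(x + np.sqrt(x**2 - 1)) - np.log(2.0))
    return x**2 - 0.5 - np.log(2.0)

def sigma_annealed(eps, p=3):
    """Annealed complexity Sigma_a(eps) of the spherical p-spin."""
    eps_th = -2.0 * np.sqrt((p - 1) / p)
    return 0.5 * np.log(4 * np.e * (p - 1)) - eps**2 / 2 + I_p(eps, eps_th)

# bisection for the zero below eps_th, where Sigma_a changes sign
p = 3
eps_th = -2.0 * np.sqrt((p - 1) / p)
lo, hi = -1.8, eps_th          # Sigma_a(lo) < 0 < Sigma_a(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if sigma_annealed(mid, p) < 0 else (lo, mid)
eps_0 = 0.5 * (lo + hi)   # where Sigma_a vanishes: lower bound for eps_gs
```

For &amp;lt;math&amp;gt;p=3&amp;lt;/math&amp;gt; the zero lands slightly below the threshold &amp;lt;math&amp;gt;\epsilon_{\rm th}\approx -1.633&amp;lt;/math&amp;gt;, consistent with the fact that the deepest minima lie below the threshold energy.&lt;br /&gt;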
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt;marginally stable&amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
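A small consistency check of the threshold: the shifted density &amp;lt;math&amp;gt;\rho_\infty(\lambda+p\epsilon)&amp;lt;/math&amp;gt; has support &amp;lt;math&amp;gt;[-2\sigma-p\epsilon,\, 2\sigma-p\epsilon]&amp;lt;/math&amp;gt;, so its left edge crosses zero exactly at &amp;lt;math&amp;gt;\epsilon=-2\sigma/p&amp;lt;/math&amp;gt;. The sketch below assumes the Hessian variance &amp;lt;math&amp;gt;\sigma^2=p(p-1)&amp;lt;/math&amp;gt; (our normalization assumption, chosen because it reproduces the quoted &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;; check it against Problem 5):&lt;br /&gt;

```python
import math

p = 3
sigma = math.sqrt(p * (p - 1))        # ASSUMED normalization of the Hessian
eps_th = -2 * math.sqrt((p - 1) / p)  # threshold quoted in the text

def left_edge(eps):
    """Left edge of the support of rho_inf(lambda + p*eps)."""
    return -2 * sigma - p * eps

# the edge crosses zero exactly at eps = -2*sigma/p = eps_th
assert math.isclose(-2 * sigma / p, eps_th)

# below the threshold the whole Hessian spectrum is positive (stable minima);
# above it the spectrum extends to negative eigenvalues (unstable saddles)
stable, unstable = left_edge(eps_th - 0.1) > 0, left_edge(eps_th + 0.1) < 0
```

At &amp;lt;math&amp;gt;\epsilon=\epsilon_{\rm th}&amp;lt;/math&amp;gt; the edge sits exactly at zero: the softest Hessian mode becomes massless, which is why these points are called marginally stable.&lt;br /&gt;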
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if we do gradient descent we get stuck in a local minimum, and if we exert a small perturbation to the configuration, gradient descent brings us back to the local minimum. These configurations are &amp;lt;ins&amp;gt;trapping&amp;lt;/ins&amp;gt;. If we try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, we expect that we will not reach it easily, as we will be trapped by local minima. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that, starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise, to be sent to zero with a protocol),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we draw the initial condition of the dynamics at random, and then evolve the configuration with Langevin dynamics (gradient descent + noise):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- \nabla_\perp E(\vec{\sigma})+ \vec{\eta}_\perp(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;\vec{\eta}_\perp(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated from the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and a constant variance proportional to temperature. This noise effectively represents the action of a bath on the system. &lt;br /&gt;
This dynamical protocol is called a temperature quench. The question we can ask is: does the system equilibrate with the bath under this dynamics? If yes, we should see that&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} = \epsilon_{\rm eq}(T),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;/math&amp;gt; is the equilibrium energy density (which we can compute from the Boltzmann distribution). Equilibrating with the bath would indeed imply that at large times the system visits the equilibrium energy shell uniformly. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
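The quench protocol can be mimicked numerically. Below is a rough sketch, not the exact setup of Problem 5: we use an unsymmetrized random coupling tensor with an illustrative &amp;lt;math&amp;gt;1/N&amp;lt;/math&amp;gt; scaling, a simple Euler&amp;ndash;Maruyama step, and enforce the spherical constraint by brute-force renormalization; all these choices are our assumptions.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, dt, steps = 32, 0.0, 0.005, 400   # T = 0: pure gradient descent quench

# random p = 3 couplings; the 1/N scaling keeps E/N of order one (illustrative)
J = rng.normal(size=(N, N, N)) / N

def energy(s):
    return -np.einsum('ijk,i,j,k->', J, s, s, s)

def force(s):
    # minus the gradient of energy(s): sum over the three tensor slots
    return (np.einsum('ijk,j,k->i', J, s, s)
            + np.einsum('jik,j,k->i', J, s, s)
            + np.einsum('jki,j,k->i', J, s, s))

s = rng.normal(size=N)
s *= np.sqrt(N) / np.linalg.norm(s)     # random point on the sphere |s|^2 = N

e0 = energy(s) / N
for _ in range(steps):
    f = force(s)
    f -= (f @ s / N) * s                              # project tangentially
    s += dt * f + np.sqrt(2 * T * dt) * rng.normal(size=N)
    s *= np.sqrt(N) / np.linalg.norm(s)               # back onto the sphere
e1 = energy(s) / N    # at T = 0 the energy density decreases: e1 < e0
```

Raising &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; in this sketch turns the zero-temperature descent into the quench protocol described above.&lt;br /&gt;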
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Dynamical transition.&#039;&#039;&#039; Now, in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we know that if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, the energy shell has many stationary points, but they are all unstable saddles and do not trap the dynamics. We expect that this energy shell is relatively easy to explore dynamically, and that equilibration takes place. On the other hand,  if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, in the equilibrium energy shell and at higher energy, there are exponentially many local minima that trap the dynamics, and we expect that reaching equilibrium configurations will be difficult. This tells us that there exists a critical &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt;, defined by &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\epsilon_{\rm eq}(T_d)=\epsilon_{\rm th},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
such that for &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} \neq \epsilon_{\rm eq}(T).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is called the &amp;lt;ins&amp;gt;dynamical transition temperature&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Activated Jump.png|thumb|right|x160px|Fig. 6 - Activated jump across an energy barrier.]]&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibration timescales.&#039;&#039;&#039; Does this mean that when &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;, the system &amp;lt;em&amp;gt;never&amp;lt;/em&amp;gt; equilibrates? This is true only in the limit &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;N &amp;lt;/math&amp;gt; is finite, there is a timescale &amp;lt;math&amp;gt;\tau_{\rm eq}(T, N)&amp;lt;/math&amp;gt; beyond which the system equilibrates. So, the dynamical transition is a transition only for &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;, and a crossover for finite &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;. However, this equilibration timescale &lt;br /&gt;
in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin scales as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is again due to the presence of many local minima (metastable states), which are separated by &amp;lt;ins&amp;gt;extensive&amp;lt;/ins&amp;gt; energy barriers. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Activation and Arrhenius law.&#039;&#039;&#039; Indeed, when the noise in the Langevin dynamics is weak (temperature is small), the dynamics does not get stuck in local minima forever, but it does for a very long time. This time depends crucially on the &amp;lt;ins&amp;gt;energy barrier&amp;lt;/ins&amp;gt; which separates the minimum from the other configurations (see Fig. 6). The &amp;lt;ins&amp;gt;Arrhenius law&amp;lt;/ins&amp;gt; states that the typical timescale &amp;lt;math&amp;gt; \tau&amp;lt;/math&amp;gt; required to escape from a local minimum through a barrier of height &amp;lt;math&amp;gt; \Delta E &amp;lt;/math&amp;gt; with thermal dynamics at inverse temperature &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; scales as &amp;lt;math&amp;gt;\tau \sim \tau_0 e^{\beta \, \Delta E} &amp;lt;/math&amp;gt;. Since in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we have &amp;lt;math&amp;gt; \Delta E \sim N \;  \Delta \epsilon &amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; \tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N} &amp;lt;/math&amp;gt;. A dynamics made of jumps from minimum to minimum through the crossing of energy barriers is called &amp;lt;ins&amp;gt;activated dynamics&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
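The Arrhenius scaling can already be seen in a one-dimensional toy model (not the &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin itself; the potential and all parameters below are our illustrative choices): for overdamped Langevin dynamics in a double well, the mean escape time over the barrier grows like &amp;lt;math&amp;gt;e^{\beta \Delta E}&amp;lt;/math&amp;gt;, so lowering the temperature lengthens escapes dramatically.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_escape_time(beta, n_samples=20, dt=0.01):
    """Mean time for overdamped Langevin in V(x) = (x^2 - 1)^2 to escape
    from the minimum at x = -1 over the barrier DeltaE = V(0) - V(-1) = 1."""
    times = []
    for _ in range(n_samples):
        x, t = -1.0, 0.0
        while x < 0.5:                      # escape = reaching the other well
            drift = -4 * x * (x**2 - 1)     # -V'(x)
            x += dt * drift + np.sqrt(2 * dt / beta) * rng.normal()
            t += dt
        times.append(t)
    return np.mean(times)

tau_hot, tau_cold = mean_escape_time(beta=2.0), mean_escape_time(beta=4.0)
# Arrhenius: tau_cold / tau_hot is expected of order e^{DeltaE (4-2)}
```

Halving the temperature (doubling &amp;lt;math&amp;gt;\beta&amp;lt;/math&amp;gt;) multiplies the escape time by roughly &amp;lt;math&amp;gt;e^{\Delta E \, \Delta\beta}&amp;lt;/math&amp;gt;, up to a prefactor set by the well and barrier curvatures.&lt;br /&gt;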
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4251</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4251"/>
		<updated>2026-03-15T15:16:50Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Back to dynamics: quenches, and dynamical transitions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this exploiting results from random matrix theory discussed in the &amp;lt;code&amp;gt;Tutorial and Exercise 4&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
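Reading off the exponent of &amp;lt;math&amp;gt;P_N(M)&amp;lt;/math&amp;gt;: since &amp;lt;math&amp;gt;\text{Tr}\, M^2 = \sum_i M_{ii}^2 + 2\sum_{i&lt;j} M_{ij}^2&amp;lt;/math&amp;gt;, the density factorizes into independent Gaussian entries with variance &amp;lt;math&amp;gt;\sigma^2/N&amp;lt;/math&amp;gt; off the diagonal and &amp;lt;math&amp;gt;2\sigma^2/N&amp;lt;/math&amp;gt; on it, so &amp;lt;math&amp;gt;\mathbb{E}[\text{Tr}\, M^2]=(N+1)\sigma^2&amp;lt;/math&amp;gt;. A quick sampling check (the construction from a square Gaussian matrix and the value &amp;lt;math&amp;gt;\sigma=1&amp;lt;/math&amp;gt; are our choices):&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma, draws = 8, 1.0, 20000

m12, m11, tr2 = [], [], []
for _ in range(draws):
    A = rng.normal(size=(N, N))
    M = sigma * (A + A.T) / np.sqrt(2 * N)   # GOE sample with density P_N(M)
    m12.append(M[0, 1]); m11.append(M[0, 0]); tr2.append(np.trace(M @ M))

# P_N(M) ~ exp(-N/(4 sigma^2) Tr M^2) factorizes entrywise:
# Var(M_ij) = sigma^2/N for i < j, Var(M_ii) = 2 sigma^2/N,
# hence E[Tr M^2] = (N + 1) sigma^2
var_off, var_diag, mean_tr2 = np.var(m12), np.var(m11), np.mean(tr2)
```

The factor-of-two difference between diagonal and off-diagonal variances is the fingerprint of the orthogonal ensemble.&lt;br /&gt;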
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the probability of the empirical eigenvalue distribution has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt;, where now &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it equals &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
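The numerical check of the semicircle law takes only a few lines. With &amp;lt;math&amp;gt;\sigma=1&amp;lt;/math&amp;gt; (our choice) the support is &amp;lt;math&amp;gt;[-2,2]&amp;lt;/math&amp;gt;, and the fraction of eigenvalues in &amp;lt;math&amp;gt;[-1,1]&amp;lt;/math&amp;gt; should approach &amp;lt;math&amp;gt;(\sqrt{3}+2\pi/3)/(2\pi)\approx 0.609&amp;lt;/math&amp;gt; (our arithmetic, from integrating the semicircle):&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 1000, 1.0

# GOE sample with density P_N(M) ~ exp(-N/(4 sigma^2) Tr M^2)
A = rng.normal(size=(N, N))
M = sigma * (A + A.T) / np.sqrt(2 * N)
eigs = np.linalg.eigvalsh(M)

# semicircle predictions: support [-2 sigma, 2 sigma], and a bulk fraction
# int_{-sigma}^{sigma} rho_inf(x) dx = (sqrt(3) + 2 pi/3) / (2 pi) ~ 0.609
frac_bulk = np.mean(np.abs(eigs) <= sigma)
lam_max = eigs.max()   # close to 2 sigma, up to O(N^{-2/3}) edge fluctuations
```

Repeating this for several &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; shows that convergence is slowest at the edges, where the density vanishes, as the (commented-out) exercise above suggests.&lt;br /&gt;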
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt;marginally stable&amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if we do gradient descent we get stuck in a local minimum, and if we exert a small perturbation to the configuration, gradient descent brings us back to the local minimum. In this sense, these configurations are trapping. Therefore, if we try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, we expect that we will not reach the ground state easily, as we will be trapped by these local minima. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that, starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we draw the initial condition of the dynamics at random, and then evolve the configuration with Langevin dynamics (gradient descent + noise):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- \nabla_\perp E(\vec{\sigma})+ \vec{\eta}_\perp(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;\vec{\eta}_\perp(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated from the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and a constant variance proportional to temperature. This noise effectively represents the action of a bath on the system. &lt;br /&gt;
This dynamical protocol is called a temperature quench. The question we can ask is: does the system equilibrate with the bath under this dynamics? If yes, we should see that&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} = \epsilon_{\rm eq}(T),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;/math&amp;gt; is the equilibrium energy density (which we can compute from the Boltzmann distribution). Equilibrating with the bath would indeed imply that at large times the system visits the equilibrium energy shell uniformly. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Dynamical transition.&#039;&#039;&#039; Now, in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we know that if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, the energy shell has many stationary points, but they are all unstable saddles and do not trap the dynamics. We expect that this energy shell is relatively easy to explore dynamically, and that equilibration takes place. On the other hand,  if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, in the equilibrium energy shell and at higher energy, there are exponentially many local minima that trap the dynamics, and we expect that reaching equilibrium configurations will be difficult. This tells us that there exists a critical &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt;, defined by &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\epsilon_{\rm eq}(T_d)=\epsilon_{\rm th},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
such that for &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} \neq \epsilon_{\rm eq}(T).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is called the &amp;lt;ins&amp;gt;dynamical transition temperature&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Activated Jump.png|thumb|right|x160px|Fig. 6 - Activated jump across an energy barrier.]]&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibration timescales.&#039;&#039;&#039; Does this mean that when &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;, the system &amp;lt;em&amp;gt;never&amp;lt;/em&amp;gt; equilibrates? This is true only in the limit &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;N &amp;lt;/math&amp;gt; is finite, there is a timescale &amp;lt;math&amp;gt;\tau_{\rm eq}(T, N)&amp;lt;/math&amp;gt; beyond which the system equilibrates. So, the dynamical transition is a transition only for &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;, and a crossover for finite &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;. However, this equilibration timescale &lt;br /&gt;
in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin scales as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is again due to the presence of many local minima (metastable states), which are separated by &amp;lt;ins&amp;gt;extensive&amp;lt;/ins&amp;gt; energy barriers. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Activation and Arrhenius law.&#039;&#039;&#039; Indeed, when the noise in the Langevin dynamics is weak (temperature is small), the dynamics does not get stuck in local minima forever, but it does for a very long time. This time depends crucially on the &amp;lt;ins&amp;gt;energy barrier&amp;lt;/ins&amp;gt; which separates the minimum from the other configurations (see Fig. 6). The &amp;lt;ins&amp;gt;Arrhenius law&amp;lt;/ins&amp;gt; states that the typical timescale &amp;lt;math&amp;gt; \tau&amp;lt;/math&amp;gt; required to escape from a local minimum through a barrier of height &amp;lt;math&amp;gt; \Delta E &amp;lt;/math&amp;gt; with thermal dynamics at inverse temperature &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; scales as &amp;lt;math&amp;gt;\tau \sim \tau_0 e^{\beta \, \Delta E} &amp;lt;/math&amp;gt;. Since in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we have &amp;lt;math&amp;gt; \Delta E \sim N \;  \Delta \epsilon &amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; \tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N} &amp;lt;/math&amp;gt;. A dynamics made of jumps from minimum to minimum through the crossing of energy barriers is called &amp;lt;ins&amp;gt;activated dynamics&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4250</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4250"/>
		<updated>2026-03-15T15:16:19Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Back to dynamics: quenches, and dynamical transitions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this exploiting results from random matrix theory discussed in the &amp;lt;code&amp;gt;Tutorial and Exercise 4&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the probability of the empirical eigenvalue distribution has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt;, where now &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it equals &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt;marginally stable&amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
[[File:Activated Jump.png|thumb|right|x160px|Fig 7 - Activated jump across an energy barrier.]]&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density  &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if gradient descent gets stuck in a local minimum and I apply a small perturbation to the configuration, gradient descent brings me back to that minimum. In this sense, these configurations are trapping. Therefore, if I try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, I expect not to reach the ground state easily, as I will be trapped by these local minima. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that, starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we draw the initial condition of the dynamics at random, and then we evolve the configuration with Langevin dynamics (gradient descent + noise):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- \nabla_\perp E(\vec{\sigma})+ \vec{\eta}_\perp(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;\vec{\eta}_\perp(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated from the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and constant variance proportional to the temperature. The noise represents effectively the action of a bath on the system. &lt;br /&gt;
This dynamical protocol is called a temperature quench. The question we can ask is: does the system equilibrate with the bath under this dynamics? If yes, we should see that&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} = \epsilon_{\rm eq}(T),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;/math&amp;gt; is the equilibrium energy density (which we can compute from the Boltzmann measure). Equilibrating with the bath would indeed imply that at large times the system visits the equilibrium energy shell uniformly. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
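The quench protocol above can be sketched numerically. The code below is a minimal Euler discretization of the projected Langevin equation for the spherical 3-spin, with the constraint <math>|\vec\sigma|^2 = N</math> restored at every step; the system size, time step, number of steps, and the coupling normalization (chosen so that the energy is extensive) are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, T, n_steps = 32, 0.01, 0.0, 800   # T = 0 reproduces pure gradient descent

# Random p = 3 couplings; the scale 1/(N*sqrt(2)) makes the energy extensive.
J = rng.normal(0.0, 1.0, size=(N, N, N)) / (N * np.sqrt(2.0))

def energy(s):
    return -np.einsum('ijk,i,j,k->', J, s, s, s)

def gradient(s):
    # dE/ds_i: sum over the three tensor slots of the (non-symmetrized) J
    g = (np.einsum('ijk,j,k->i', J, s, s)
         + np.einsum('jik,j,k->i', J, s, s)
         + np.einsum('jki,j,k->i', J, s, s))
    return -g

def project(v, s):
    # Projection onto the tangent plane of the sphere |s|^2 = N
    return v - (v @ s / N) * s

s = rng.normal(size=N)
s *= np.sqrt(N) / np.linalg.norm(s)      # random point on the sphere
e0 = energy(s) / N                        # initial energy density

for _ in range(n_steps):
    noise = np.sqrt(2.0 * T * dt) * rng.normal(size=N)
    s = s - dt * project(gradient(s), s) + project(noise, s)
    s *= np.sqrt(N) / np.linalg.norm(s)  # stay on the sphere

e1 = energy(s) / N                        # energy density after the quench
```

At <math>T=0</math> the energy density decreases and, for large <math>N</math>, should approach <math>\epsilon_{\rm th}</math> rather than <math>\epsilon_{\rm gs}</math>; at this small <math>N</math> one only sees the qualitative trend.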
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Dynamical transition.&#039;&#039;&#039; Now, in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we know that if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, the energy shell has many stationary points, but they are all unstable saddles and do not trap the dynamics. We expect that this energy shell is relatively easy to explore dynamically, and that equilibration takes place. On the other hand,  if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, in the equilibrium energy shell and at higher energy, there are exponentially many local minima that trap the dynamics, and we expect that reaching equilibrium configurations will be difficult. This tells us that there exists a critical &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt;, defined by &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\epsilon_{\rm eq}(T_d)=\epsilon_{\rm th},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
such that for &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} \neq \epsilon_{\rm eq}(T).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
The temperature &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt; is called the &amp;lt;ins&amp;gt;dynamical transition temperature&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibration timescales.&#039;&#039;&#039; Does it mean that when &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;, the system  &amp;lt;em&amp;gt;never&amp;lt;/em&amp;gt; equilibrates? This is true only in the limit &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;N &amp;lt;/math&amp;gt; is finite, there is a timescale &amp;lt;math&amp;gt;\tau_{\rm eq}(T, N)&amp;lt;/math&amp;gt; beyond which the system equilibrates. So, the dynamical transition is a transition only for &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;, and a crossover for finite &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;. However, this equilibration timescale &lt;br /&gt;
in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin scales as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is again due to the presence of many local minima (metastable states), which are separated by &amp;lt;ins&amp;gt;extensive&amp;lt;/ins&amp;gt; energy barriers. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Activation and Arrhenius law.&#039;&#039;&#039; Indeed, when the noise in the Langevin dynamics is weak (the temperature is small), the dynamics does not get stuck in local minima forever, but only for a very long time. This time depends crucially on the &amp;lt;ins&amp;gt; energy barrier &amp;lt;/ins&amp;gt; which separates the minimum from the other configurations (see Fig 7). The &amp;lt;ins&amp;gt;Arrhenius law&amp;lt;/ins&amp;gt; states that the typical timescale &amp;lt;math&amp;gt; \tau&amp;lt;/math&amp;gt; required to escape from a local minimum through a barrier of height &amp;lt;math&amp;gt; \Delta E &amp;lt;/math&amp;gt; with thermal dynamics at inverse temperature &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; scales as &amp;lt;math&amp;gt;\tau \sim \tau_0 e^{\beta \, \Delta E} &amp;lt;/math&amp;gt;. Since in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we have &amp;lt;math&amp;gt; \Delta E \sim N \;  \Delta \epsilon &amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; \tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N} &amp;lt;/math&amp;gt;. A dynamics made of jumps from minimum to minimum through the crossing of energy barriers is called &amp;lt;ins&amp;gt; activated dynamics &amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
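The Arrhenius scaling can be checked in the simplest possible setting: a single overdamped particle in the one-dimensional double well <math>V(x)=(x^2-1)^2</math>, which has a barrier <math>\Delta E = 1</math> between the minimum at <math>x=-1</math> and the top at <math>x=0</math>. The temperatures, time step, number of runs, and the escape criterion below are illustrative choices:

```python
import math, random

def mean_escape_time(T, n_runs=60, dt=0.005, seed=0):
    """Average time for overdamped Langevin dynamics, started at x = -1,
    to cross the barrier of V(x) = (x^2 - 1)^2 (barrier height 1 at x = 0)."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * T * dt)      # noise amplitude per Euler step
    total = 0.0
    for _ in range(n_runs):
        x, t = -1.0, 0.0
        while x < 0.5:                   # declare escape once well past the barrier top
            x += -dt * 4.0 * x * (x * x - 1.0) + sigma * rng.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / n_runs

tau_cold = mean_escape_time(T=0.25)      # ~ e^{4} times a prefactor
tau_hot = mean_escape_time(T=0.6)        # much shorter escape time
```

Lowering the temperature at fixed barrier height makes the mean escape time grow roughly like <math>e^{\beta \Delta E}</math>, which is the content of the Arrhenius law.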
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4249</id>
		<title>T-7</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-7&amp;diff=4249"/>
		<updated>2026-03-15T15:16:00Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* A dynamical dictionary:  energy barriers, out-of-equilibrium, aging */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; The goal of these problems is to understand some features of glassy dynamics (power laws, aging) in a simplified single-particle description, the so-called trap model.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; extreme value statistics, asymptotic analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A dynamical dictionary:  energy barriers, out-of-equilibrium, aging ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Noise and Langevin dynamics.&#039;&#039;&#039; In problems 5 and 6 we have characterized the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, and shown that it contains plenty of stationary points where gradient descent can get stuck. In presence of noise, &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- \nabla_\perp E(\vec{\sigma})+ \vec{\eta}(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
the random terms kick the system in random directions in configuration space, allowing it to escape from stationary points.&lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;\vec{\eta}(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated from the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and constant variance proportional to the temperature.  &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Activation and Arrhenius law.&#039;&#039;&#039; When the noise in the Langevin dynamics is weak (the temperature is small), the dynamics does not get stuck in local minima forever, but only for a very long time. This time depends crucially on the &amp;lt;ins&amp;gt; energy barrier &amp;lt;/ins&amp;gt; which separates the minimum from the other configurations (see Fig 6.1). The &amp;lt;ins&amp;gt;Arrhenius law&amp;lt;/ins&amp;gt; states that the typical timescale &amp;lt;math&amp;gt; \tau&amp;lt;/math&amp;gt; required to escape from a local minimum through a barrier of height &amp;lt;math&amp;gt; \Delta E &amp;lt;/math&amp;gt; with thermal dynamics at inverse temperature &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; scales as &amp;lt;math&amp;gt;\tau \sim \tau_0 e^{\beta \, \Delta E} &amp;lt;/math&amp;gt;. A dynamics made of jumps from minimum to minimum through the crossing of energy barriers is called &amp;lt;ins&amp;gt; activated &amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Correlation Function.png|thumb|right|x160px|Fig 7.2 - Behaviour of the correlation function in a system displaying aging.]]&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibrating dynamics.&#039;&#039;&#039; A system evolving with thermal dynamics (e.g. Langevin dynamics) &amp;lt;ins&amp;gt; equilibrates dynamically &amp;lt;/ins&amp;gt; if there is a timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; beyond which the dynamical trajectories sample the configurations of the system &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with the frequency prescribed by the Gibbs-Boltzmann measure, &amp;lt;math&amp;gt; \sim e^{-\beta E(\vec{\sigma})} &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; is the inverse temperature associated with the noise. At equilibrium, one-point functions in time, like the energy of the system, reach a stationary value (the equilibrium value predicted by thermodynamics at that temperature), while two-point functions like the correlation function &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w+ t, t_w)= \frac{1}{N} \sum_{i=1}^N \sigma_i(t_w) \sigma_i(t_w+t)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
are &amp;lt;ins&amp;gt; time-translation invariant&amp;lt;/ins&amp;gt;, meaning that &amp;lt;math&amp;gt; C(t_w+ t, t_w) \sim c(t) &amp;lt;/math&amp;gt; is only a function of the difference between the two times, and does not depend on &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt;.&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
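Time-translation invariance at equilibrium can be illustrated with the simplest thermal dynamics, an Ornstein-Uhlenbeck particle <math>\dot x = -x + \eta(t)</math> started from its stationary distribution, for which <math>C(t_w+t, t_w) = T e^{-t}</math> independently of <math>t_w</math>. The parameters and ensemble size below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt, n_traj = 1.0, 0.01, 20000

def correlation(tw, t):
    """Estimate C(tw + t, tw) over an ensemble of Ornstein-Uhlenbeck
    trajectories started from the stationary (equilibrium) distribution."""
    x = rng.normal(0.0, np.sqrt(T), size=n_traj)   # equilibrium initial condition
    n1, n2 = int(round(tw / dt)), int(round((tw + t) / dt))
    x_tw = None
    for step in range(n2):                         # Euler discretization
        if step == n1:
            x_tw = x.copy()                        # snapshot at time tw
        x = x - dt * x + np.sqrt(2.0 * T * dt) * rng.normal(size=n_traj)
    return float(np.mean(x_tw * x))

c_young = correlation(tw=2.0, t=1.0)
c_old = correlation(tw=8.0, t=1.0)   # same lag t, older system: same answer
```

Both estimates agree with <math>T e^{-1} \approx 0.37</math>: the correlation depends only on the lag. In an aging system, by contrast, the curve for the older <math>t_w</math> would decay more slowly.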
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Out-of-equilibrium and aging.&#039;&#039;&#039; In some systems the equilibration timescale &amp;lt;math&amp;gt; \tau_{\text{eq}} &amp;lt;/math&amp;gt; is extremely large or diverging with some parameter of the model (like &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;), and on very large timescales the dynamics is &amp;lt;ins&amp;gt;out-of-equilibrium&amp;lt;/ins&amp;gt;.  In glassy systems, out-of-equilibrium dynamics is often characterized by &amp;lt;ins&amp;gt;aging&amp;lt;/ins&amp;gt;: the relaxation timescale of a system (how slowly the system evolves) depends on the age of the system itself (on how long the system has evolved so far). Aging can be seen in the behaviour of the correlation function, see Fig 7.2: the timescale that the system needs to leave the plateau increases with the age &amp;lt;math&amp;gt; t_w &amp;lt;/math&amp;gt; of the system, meaning that the system becomes slower as it gets older.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 7: a simple model for aging ===&lt;br /&gt;
&lt;br /&gt;
[[File:Trap.png|thumb|right|x160px|Fig 6.3 - Traps in the trap model.]]&lt;br /&gt;
The trap model is an abstract model for the dynamics in complex landscapes studied in &amp;lt;sup&amp;gt;[[#Notes|[1] ]]&amp;lt;/sup&amp;gt;. The configuration space is a collection of &amp;lt;math&amp;gt; M \gg 1 &amp;lt;/math&amp;gt; traps labeled by &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt; having random depths/energies (see sketch). The dynamics is a sequence of jumps between the traps: the system spends in a trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt; an exponentially distributed time with average &amp;lt;math&amp;gt; \tau_\alpha&amp;lt;/math&amp;gt; (the probability to jump out of the trap in the interval &amp;lt;math&amp;gt; [t, t+dt]&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt; dt/\tau_\alpha &amp;lt;/math&amp;gt;). When the system exits the trap, it jumps into another one randomly chosen among the &amp;lt;math&amp;gt; M&amp;lt;/math&amp;gt;. The average times are distributed as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
P_\mu(\tau)= \frac{\mu \tau_0^\mu}{\tau^{1+\mu}} \quad \quad \tau \geq \tau_0 &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; is a parameter.  In this exercise, we aim to understand the main features of this dynamics. &amp;lt;br&amp;gt;&lt;br /&gt;
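Sampling <math>P_\mu(\tau)</math> is a useful warm-up: NumPy's `pareto` draws from the Lomax form, so we shift and scale to obtain the distribution above. The values of <math>\mu</math> and the sample sizes below are arbitrary illustrative choices; the sample mean only stabilizes when <math>\mu &gt; 1</math>:

```python
import numpy as np

rng = np.random.default_rng(0)
tau0 = 1.0

def sample_tau(mu, n):
    """Draw n trapping times from P_mu(tau) = mu * tau0^mu / tau^(1+mu), tau >= tau0.
    rng.pareto samples the Lomax form, hence the shift by 1 and scale by tau0."""
    return tau0 * (1.0 + rng.pareto(mu, size=n))

mean_heavy = sample_tau(0.5, 100_000).mean()   # mu < 1: sample mean grows with n
mean_light = sample_tau(1.5, 100_000).mean()   # mu > 1: converges to mu*tau0/(mu-1) = 3
```

For <math>\mu = 1/2</math> the sample mean is dominated by the single deepest trap encountered, a first hint of the condensation phenomenon of question 1.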
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Ergodicity breaking and condensation.&amp;lt;/em&amp;gt; Compute the average trapping time (averaging over the traps) and show that there is a critical value of &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; below which it diverges, signalling a non-ergodic phase (the system needs infinite time to explore the whole configuration space). Consider a dynamics running from time &amp;lt;math&amp;gt;t_w&amp;lt;/math&amp;gt; to some later time &amp;lt;math&amp;gt; t_w+ t&amp;lt;/math&amp;gt;: compute the typical value of the maximal trapping time &amp;lt;math&amp;gt; \tau_{\text{max}}(t) &amp;lt;/math&amp;gt; encountered in this time interval, assuming that the system has spent exactly a time &amp;lt;math&amp;gt; \tau_\alpha &amp;lt;/math&amp;gt; in each visited trap &amp;lt;math&amp;gt; \alpha &amp;lt;/math&amp;gt;. Show that in the non-ergodic phase &amp;lt;math&amp;gt; \tau_{\text{max}}(t) \sim t &amp;lt;/math&amp;gt;. Why is this interpretable as a condensation phenomenon? &lt;br /&gt;
&amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt;Aging and weak ergodicity breaking.&amp;lt;/em&amp;gt; Assume now that each trap represents a collection of microscopic configurations having self-overlap  &amp;lt;math&amp;gt;q_{EA}&amp;lt;/math&amp;gt;. Assume that the overlap between configurations of different traps is &amp;lt;math&amp;gt; q_0 &amp;lt;/math&amp;gt;. Justify why the correlation function can be written as &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
C(t_w + t, t_w)= q_{EA} \Pi(t, t_w)+ q_0 \left(1-\Pi(t, t_w)\right), \quad \quad   \Pi(t, t_w)= \text{probability that systems has not jumped in }[t_w, t_w+t].&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
In the non-ergodic regime, one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Pi(t, t_w)= \frac{\sin (\pi \mu)}{\pi}\int_{\frac{t}{t+ t_w}}^1 du (1-u)^{\mu-1}u^{-\mu}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Why is this an indication of aging? Show that &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} C(t_w + t, t_w)=q_0 \quad \text{ for finite }t_w, \quad \quad \lim_{t_w \to \infty} C(t_w + t, t_w)=q_{EA} \quad \text{ for finite }t&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When &amp;lt;math&amp;gt; q_0=0&amp;lt;/math&amp;gt;, this behaviour is called &amp;quot;weak ergodicity breaking&amp;quot;. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &lt;br /&gt;
&amp;lt;code&amp;gt;Extra. &amp;lt;/code&amp;gt; &amp;lt;em&amp;gt;Power laws.&amp;lt;/em&amp;gt; Study the asymptotic behavior of the correlation function for &amp;lt;math&amp;gt; t \ll t_w &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; t \gg t_w &amp;lt;/math&amp;gt; and show that the dynamics is slow, characterized by power laws. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
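The aging of <math>\Pi(t, t_w)</math> can be checked with a short Monte Carlo simulation. The sketch below makes the simplifying assumption that a fresh mean trapping time is drawn at every jump (standing in for the <math>M \to \infty</math> limit); the value <math>\mu = 1/2</math> and the sample sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, tau0 = 0.5, 1.0

def jumped_in_window(tw, t):
    """True if at least one jump falls in [tw, tw + t] for one dynamical history."""
    clock = 0.0
    while True:
        tau = tau0 * (1.0 + rng.pareto(mu))      # mean trapping time of current trap
        clock += rng.exponential(tau)            # actual residence time in this trap
        if clock >= tw:                          # first jump at or after tw
            return clock <= tw + t

def Pi(tw, t, n_samples=1500):
    """Monte Carlo estimate of the persistence probability Pi(t, tw)."""
    return 1.0 - np.mean([jumped_in_window(tw, t) for _ in range(n_samples)])

pi_young = Pi(tw=10.0, t=10.0)     # close to 1/2 for mu = 1/2 (arcsine law, tw >> tau0)
pi_old = Pi(tw=1000.0, t=10.0)     # much closer to 1: the old system is slower
```

At fixed lag <math>t</math>, the probability of not having jumped grows with the age <math>t_w</math>: the longer the system has run, the deeper the trap it typically sits in, which is aging in its simplest form.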
&lt;br /&gt;
&amp;lt;!----&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--=== Problem 7.2: Motivating the model: from landscapes to traps ===&lt;br /&gt;
&amp;lt;!--and spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model. While for the &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we think about Langevin dynamics, for the REM we consider Monte Carlo dynamics: at each time step the system in a given configuration &amp;lt;math&amp;gt; \vec{\sigma} &amp;lt;/math&amp;gt; with energy &amp;lt;math&amp;gt; E_1 &amp;lt;/math&amp;gt; tries to transition to another configuration that differs with respect to the previous one by a single spin flip; let the energy of this second configuration be &amp;lt;math&amp;gt; E_2 &amp;lt;/math&amp;gt;. The transition occurs with probability one if &amp;lt;math&amp;gt; E_2 &amp;lt;E_1 &amp;lt;/math&amp;gt;, and with probability &amp;lt;math&amp;gt; e^{-\beta (E_2- E_1)}&amp;lt;/math&amp;gt;  otherwise.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: distribution of depths of traps.&amp;lt;/em&amp;gt; In the REM, the energy levels are independent Gaussian variables. In Lecture 1, we have shown that the Ground State &amp;lt;math&amp;gt; E_{\min} &amp;lt;/math&amp;gt; has the statistics of &amp;lt;math&amp;gt; E_{\min }=E_{\min }^{\rm typ}+ \frac{1}{\sqrt{2 \log 2}}z &amp;lt;/math&amp;gt;, with  &amp;lt;math&amp;gt; z &amp;lt;/math&amp;gt; Gumbel. The distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt; of the smallest energies values &amp;lt;math&amp;gt; E_\alpha &amp;lt;/math&amp;gt; among the &amp;lt;math&amp;gt; M=2^N &amp;lt;/math&amp;gt; can be assumed to be the same. Show that:&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_N^{\text{extrm}}(E) \approx C_N \text{exp}\left[ \sqrt{2\log 2} E  \right], \quad \quad E&amp;lt;0, \quad \quad C_N \text{ normalization}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
(Hint: approximate the Gumbel distribution for small argument).  &amp;lt;/li&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; REM: trapping times.&amp;lt;/em&amp;gt; The Arrhenius law states that the time needed for the system to escape from a trap of energy density &amp;lt;math&amp;gt; \epsilon&amp;lt;0 &amp;lt;/math&amp;gt; and reach a configuration of zero energy density is &amp;lt;math&amp;gt; \tau \sim e^{-\beta N \epsilon} &amp;lt;/math&amp;gt;. This is a trapping time. Given the energy distribution &amp;lt;math&amp;gt; P_N^{\text{extrm}}(E) &amp;lt;/math&amp;gt;, determine the distribution of trapping times &amp;lt;math&amp;gt; P_\mu(\tau) &amp;lt;/math&amp;gt;: what plays the role of &amp;lt;math&amp;gt; \mu&amp;lt;/math&amp;gt;? Is the non-ergodic transition in the TRAP model consistent with what we know about the REM? &amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Extra: p-spin and the “trap” picture.&amp;lt;/em&amp;gt; In Problems 6, we have seen that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin is characterized by the threshold energy, below which plenty of minima appear. Explain why the trap model corresponds to the following picture for the dynamics: the system is trapped into minima below the threshold for exponentially large times, and then jumps from minimum to minimum passing through the threshold energy. &lt;br /&gt;
 &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts and exercises ==&lt;br /&gt;
&lt;br /&gt;
Key concepts: aging, activation, time-translation invariance, out-of equilibrium dynamics, power laws, decorrelation, condensation, extreme values.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In &amp;lt;code&amp;gt;Exercise 13 &amp;lt;/code&amp;gt;, you will see in which sense the trap model is a good effective model to describe a dynamics exploring a complicated  energy landscape with many metastable states, focusing on the REM landscape as an example.&lt;br /&gt;
&lt;br /&gt;
== To know more ==&lt;br /&gt;
* Bouchaud. Weak ergodicity breaking and aging in disordered systems [https://hal.science/jpa-00246652/document]&lt;br /&gt;
* Biroli. A crash course on aging [https://arxiv.org/abs/cond-mat/0504681]&lt;br /&gt;
* Kurchan. Six out-of-equilibrium lectures [https://arxiv.org/abs/0901.1271]&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4248</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4248"/>
		<updated>2026-03-15T15:15:25Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Back to dynamics: quenches, and dynamical transitions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this by exploiting results from random matrix theory discussed in the &amp;lt;code&amp;gt;Tutorial and Exercise 4 &amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the distribution of the empirical eigenvalue density has a large-deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it equals &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
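The semicircle law can be checked directly by diagonalizing a sample GOE matrix. The construction below (with <math>\sigma = 1</math>, so the support is <math>[-2, 2]</math>) and the matrix size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 1000, 1.0

# GOE sample: off-diagonal variance sigma^2/N, diagonal variance 2*sigma^2/N
A = rng.normal(size=(N, N))
M = sigma * (A + A.T) / np.sqrt(2.0 * N)

lam = np.linalg.eigvalsh(M)          # eigenvalues of the symmetric matrix

mean = lam.mean()                    # close to 0: the semicircle is symmetric
second_moment = (lam**2).mean()      # close to sigma^2
edge = np.abs(lam).max()             # close to 2*sigma, the support edge
```

Shifting the argument as in <math>\rho_\infty(\lambda + p\epsilon)</math> translates this support rigidly, which is what controls the stability analysis of question 4: the sign of the lower edge decides whether negative Hessian eigenvalues exist.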
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes for the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt;marginally stable&amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
[[File:Landscapes-GDD.png|thumb|right|x200px|Non-rugged vs rugged energy landscapes.]]&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density  &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if I do gradient descent, I get stuck in a local minimum; if I then apply a small perturbation to the configuration, gradient descent brings me back to the local minimum. In this sense, these configurations are trapping. Therefore, if I try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, I expect to be trapped by these local minima and not to reach the ground state easily. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that, starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we extract randomly the initial condition of the dynamics, and then we evolve the configuration with Langevin dynamics (gradient descent + noise):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- \nabla_\perp E(\vec{\sigma})+ \vec{\eta}_\perp(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;\vec{\eta}_\perp(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated from the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and a constant variance proportional to the temperature. These noise terms effectively represent the action of a bath on the system. &lt;br /&gt;
This dynamical protocol is called a temperature quench. The question we can ask is: does the system equilibrate with the bath under this dynamics? If yes, we should see that&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} = \epsilon_{\rm eq}(T),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where  &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;/math&amp;gt; is the equilibrium energy density (that we can compute using Boltzmann). Equilibrating with the bath  would indeed imply that at large time the system visits uniformly the equilibrium energy shell. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Dynamical transition.&#039;&#039;&#039; Now, in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we know that if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, the energy shell has many stationary points, but they are all unstable saddles and do not trap the dynamics. We expect that this energy shell is relatively easy to explore dynamically, and that equilibration takes place. On the other hand,  if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, in the equilibrium energy shell and at higher energy, there are exponentially many local minima that trap the dynamics, and we expect that reaching equilibrium configurations will be difficult. This tells us that there exists a critical &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt;, defined by &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\epsilon_{\rm eq}(T_d)=\epsilon_{\rm th},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
such that for &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} \neq \epsilon_{\rm eq}(T).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is called the &amp;lt;ins&amp;gt;dynamical transition temperature&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibration timescales.&#039;&#039;&#039; Does this mean that, when &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;, the system &amp;lt;em&amp;gt;never&amp;lt;/em&amp;gt; equilibrates? This is true only in the limit &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;N &amp;lt;/math&amp;gt; is finite, there is a timescale &amp;lt;math&amp;gt;\tau_{\rm eq}(T, N)&amp;lt;/math&amp;gt; beyond which the system equilibrates. So, the dynamical transition is a transition only for &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;, and a crossover for finite &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;. However, this equilibration timescale &lt;br /&gt;
in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin scales as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is again due to the presence of many local minima (metastable states), which are separated by &amp;lt;ins&amp;gt;extensive&amp;lt;/ins&amp;gt; energy barriers. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Activation and Arrhenius law.&#039;&#039;&#039; Indeed, when the noise in the Langevin dynamics is weak (the temperature is small), the dynamics does not get stuck in local minima forever, but only for a very large time. This time depends crucially on the &amp;lt;ins&amp;gt;energy barrier&amp;lt;/ins&amp;gt; separating the minimum from the other configurations (see Fig 6.1). The &amp;lt;ins&amp;gt;Arrhenius law&amp;lt;/ins&amp;gt; states that the typical timescale &amp;lt;math&amp;gt; \tau&amp;lt;/math&amp;gt; required to escape from a local minimum through a barrier of height &amp;lt;math&amp;gt; \Delta E &amp;lt;/math&amp;gt; with thermal dynamics at inverse temperature &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; scales as &amp;lt;math&amp;gt;\tau \sim \tau_0 e^{\beta \, \Delta E} &amp;lt;/math&amp;gt;. Since in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we have &amp;lt;math&amp;gt; \Delta E \sim N \;  \Delta \epsilon &amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt; \tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N} &amp;lt;/math&amp;gt;. A dynamics made of jumps from minimum to minimum through the crossing of energy barriers is called &amp;lt;ins&amp;gt;activated dynamics&amp;lt;/ins&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
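As a toy illustration of the Arrhenius law (not the p-spin itself, whose barriers are extensive), one can simulate overdamped Langevin dynamics in the one-dimensional double well V(x) = h (x^2 - 1)^2, whose barrier between the minimum at x = -1 and the top at x = 0 is Delta E = h. All parameter values below (h, dt, number of trajectories, escape criterion x &gt; 0.5) are illustrative choices, not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
h, dt, n_traj = 0.5, 0.01, 40  # barrier height Delta_E = h

def mean_escape_time(T, max_steps=400_000):
    # Euler-Maruyama integration of dx = -V'(x) dt + sqrt(2 T dt) xi,
    # with V(x) = h (x^2 - 1)^2, starting in the left well at x = -1.
    # A trajectory "escapes" when it first passes the barrier top (x > 0.5).
    x = -np.ones(n_traj)
    t_esc = np.full(n_traj, np.nan)
    for step in range(max_steps):
        alive = np.isnan(t_esc)
        if not alive.any():
            break
        drift = -4.0 * h * x * (x**2 - 1.0)  # -V'(x)
        xi = rng.normal(size=n_traj)
        x = np.where(alive, x + drift * dt + np.sqrt(2.0 * T * dt) * xi, x)
        t_esc[alive & (x > 0.5)] = (step + 1) * dt
    return np.nanmean(t_esc)

tau_hot = mean_escape_time(T=0.25)    # beta * Delta_E = 2
tau_cold = mean_escape_time(T=0.125)  # beta * Delta_E = 4
print(tau_hot, tau_cold)  # escape times grow roughly like exp(+beta Delta_E)
```

Halving the temperature doubles beta Delta_E, so by the Arrhenius law the mean escape time should grow by roughly a factor e^2; the simulation reproduces this ordering clearly despite sampling noise.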
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4247</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4247"/>
		<updated>2026-03-15T15:11:40Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Back to dynamics: quenches, and dynamical transitions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this exploiting results from random matrix theory discussed in &amp;lt;code&amp;gt;Tutorial and Exercise 4&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
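As a numerical aside (a sketch, not part of the problem set): the density P_N(M) above corresponds to independent Gaussian entries with off-diagonal variance sigma^2/N and diagonal variance 2 sigma^2/N, which can be sampled and checked against the large-N predictions quoted later (second moment sigma^2, spectrum edge at 2 sigma). The choice sigma^2 = 1 and N = 1000 here is arbitrary.

```python
import numpy as np

def sample_goe(N, sigma2=1.0, rng=None):
    # P_N(M) = Z_N^{-1} exp(-N Tr M^2 / (4 sigma^2)) corresponds to
    # independent Gaussian entries with Var(M_ij) = sigma^2/N for i != j
    # and Var(M_ii) = 2 sigma^2/N; symmetrizing iid entries achieves this.
    rng = np.random.default_rng(0) if rng is None else rng
    A = rng.normal(0.0, np.sqrt(sigma2 / N), size=(N, N))
    return (A + A.T) / np.sqrt(2.0)

M = sample_goe(1000)
evals = np.linalg.eigvalsh(M)
# Two quick checks against the N -> infinity predictions (sigma^2 = 1):
# the second moment of the eigenvalue density tends to sigma^2, and the
# spectrum edges tend to +/- 2 sigma.
print(np.mean(evals**2), evals.min(), evals.max())
```

This is also a convenient starting point for the (optional) numerical check of the semicircle law mentioned below: histogram `evals` for several N and compare with the asymptotic density.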
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the probability of observing a given empirical distribution has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] \approx e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
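The first identity in point 2 rests on the fact that, for a symmetric matrix, |det(M - p epsilon I)| = exp(sum over alpha of log |lambda_alpha - p epsilon|) holds sample by sample, before any averaging. A minimal numerical check (the matrix size and the shift value below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50        # plays the role of N - 1 (value arbitrary)
shift = 1.3   # plays the role of p * epsilon (value arbitrary)

A = rng.normal(size=(n, n))
M = (A + A.T) / np.sqrt(2.0 * n)  # a symmetric Gaussian matrix
lam = np.linalg.eigvalsh(M)

# log |det(M - shift I)| written through the eigenvalues ...
via_eigs = np.sum(np.log(np.abs(lam - shift)))
# ... and computed directly from the matrix.
sign, via_det = np.linalg.slogdet(M - shift * np.eye(n))
print(via_eigs, via_det)
```

The two numbers agree to numerical precision; the nontrivial step in the problem is therefore only the exchange of expectation and exponential via the large deviation principle.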
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it converges to &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
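Since the problem asks where the annealed complexity vanishes, here is a minimal numerical sketch assuming the piecewise formula for I_p(epsilon) quoted above; for p = 3 the zero lands slightly below epsilon_th = -2 sqrt(2/3) ≈ -1.633. The bisection routine is an illustrative implementation, not part of the notes.

```python
import numpy as np

def annealed_complexity(eps, p):
    # Sigma_a(eps) = 1/2 log[4 e (p-1)] - eps^2/2 + I_p(eps), with the
    # explicit piecewise form of I_p quoted above (u = eps/eps_th >= 1
    # for eps <= eps_th, since both are negative).
    eth = -2.0 * np.sqrt((p - 1.0) / p)
    u = eps / eth
    if eps <= eth:
        I = (u**2 - 0.5 - u * np.sqrt(u**2 - 1.0)
             + np.log(u + np.sqrt(u**2 - 1.0)) - np.log(2.0))
    else:
        I = u**2 - 0.5 - np.log(2.0)
    return 0.5 * np.log(4.0 * np.e * (p - 1.0)) - eps**2 / 2.0 + I

def complexity_zero(p, lo=-2.0):
    # Bisection for the energy density below eps_th at which Sigma_a
    # vanishes: a lower bound for the ground state energy density.
    hi = -2.0 * np.sqrt((p - 1.0) / p)  # eps_th, where Sigma_a > 0
    assert annealed_complexity(lo, p) < 0 < annealed_complexity(hi, p)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if annealed_complexity(mid, p) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = 3
eps0 = complexity_zero(p)
print(eps0)  # the zero of Sigma_a, a lower bound on eps_gs
```

Plotting `annealed_complexity` over a grid of epsilon values then reproduces the figure requested in the problem: Sigma_a is positive on a window of energy densities and crosses zero just below the threshold.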
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt;marginally stable&amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
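To make point 4 concrete: the Hessian spectrum is the semicircle shifted by -p epsilon, so the stability transition is simply the left edge of its support crossing zero. A small sketch, assuming sigma^2 = p(p-1) for the Hessian's GOE ensemble (the value consistent with the quoted epsilon_th, since the edge -p epsilon_th - 2 sigma must vanish at marginality):

```python
import numpy as np

def hessian_support(eps, p):
    # The Hessian eigenvalue density is rho_inf shifted by -p*eps; with
    # sigma = sqrt(p(p-1)) its support is [-p*eps - 2s, -p*eps + 2s].
    sigma = np.sqrt(p * (p - 1.0))
    return -p * eps - 2.0 * sigma, -p * eps + 2.0 * sigma

p = 3
eth = -2.0 * np.sqrt((p - 1.0) / p)
for eps in (1.2 * eth, eth, 0.8 * eth):
    lo, hi = hessian_support(eps, p)
    kind = "minimum-like (all eigenvalues > 0)" if lo >= 0 else "saddle-like"
    print(f"eps = {eps:+.3f}: support [{lo:+.3f}, {hi:+.3f}] -> {kind}")
```

Below the threshold the whole support is positive (stable minima), above it the left edge dips below zero (unstable saddles), and exactly at epsilon_th the edge touches zero: marginal stability.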
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
[[File:Landscapes-GDD.png|thumb|right|x200px|Non-rugged vs rugged energy landscapes.]]&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, while for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if I do gradient descent, I get stuck in a local minimum; if I then apply a small perturbation to the configuration, gradient descent brings me back to the local minimum. In this sense, these configurations are trapping. Therefore, if I try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, I expect to be trapped by these local minima and not to reach the ground state easily. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that, starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
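A small-N illustration of this trapping (a sketch, not part of the notes): projected gradient descent on a spherical p-spin sample with N = 20, p = 3. The couplings over triples i &lt; j &lt; k are normalized so that the energy has variance of order N on the sphere |sigma|^2 = N, consistent with the threshold epsilon_th = -2 sqrt((p-1)/p); all numerical parameters (N, dt, number of steps) are illustrative. At such small N the final energy density fluctuates from sample to sample, but it sits well below zero and the dynamics stalls in a local minimum.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, p, dt, steps = 20, 3, 0.05, 4000

# Couplings over ordered triples i < j < k, with Var(J) = p!/N^(p-1) = 6/N^2
# so that E[E(sigma)^2] ~ N on the sphere |sigma|^2 = N.
triples = np.array(list(itertools.combinations(range(N), p)))
i, j, k = triples.T
J = rng.normal(0.0, np.sqrt(6.0) / N, size=len(triples))

def energy(s):
    return -np.sum(J * s[i] * s[j] * s[k])

def grad(s):
    # dE/ds_l = -sum over triples containing l of J times the other spins.
    g = np.zeros(N)
    np.add.at(g, i, -J * s[j] * s[k])
    np.add.at(g, j, -J * s[i] * s[k])
    np.add.at(g, k, -J * s[i] * s[j])
    return g

s = rng.normal(size=N)
s *= np.sqrt(N) / np.linalg.norm(s)   # random point on the sphere
eps_init = energy(s) / N
for _ in range(steps):
    g = grad(s)
    g_perp = g - (g @ s / N) * s          # project gradient onto the sphere
    s = s - dt * g_perp
    s *= np.sqrt(N) / np.linalg.norm(s)   # enforce the spherical constraint
eps_final = energy(s) / N
print(eps_init, eps_final)
```

Repeating the run over many samples and initial conditions, the final energy densities cluster in a narrow band: the descent reliably finds deep configurations, but not the ground state.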
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we extract randomly the initial condition of the dynamics, and then we evolve the configuration with Langevin dynamics (gradient descent + noise):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- \nabla_\perp E(\vec{\sigma})+ \vec{\eta}_\perp(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;\vec{\eta}_\perp(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated from the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and a constant variance proportional to the temperature. These noise terms effectively represent the action of a bath on the system. &lt;br /&gt;
This dynamical protocol is called a temperature quench. The question we can ask is: does the system equilibrate with the bath under this dynamics? If yes, we should see that&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} = \epsilon_{\rm eq}(T),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where  &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;/math&amp;gt; is the equilibrium energy density (that we can compute using Boltzmann). Equilibrating with the bath  would indeed imply that at large time the system visits uniformly the equilibrium energy shell. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Dynamical transition.&#039;&#039;&#039; Now, in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin we know that if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, the energy shell has many stationary points, but they are all unstable saddles and do not trap the dynamics. We expect that this energy shell is relatively easy to explore dynamically, and that equilibration takes place. On the other hand,  if &amp;lt;math&amp;gt;\epsilon_{\rm eq}(T)&amp;lt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;, in the equilibrium energy shell and at higher energy, there are exponentially many local minima that trap the dynamics, and we expect that reaching equilibrium configurations will be difficult. This tells us that there exists a critical &amp;lt;math&amp;gt;T_d&amp;lt;/math&amp;gt;, defined by &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\epsilon_{\rm eq}(T_d)=\epsilon_{\rm th},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
such that for &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{E(\vec{\sigma}(t))}{N} \neq \epsilon_{\rm eq}(T).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is called the &amp;lt;em&amp;gt;dynamical transition temperature&amp;lt;/em&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Equilibration timescales.&#039;&#039;&#039; Does this mean that, when &amp;lt;math&amp;gt;T&amp;lt;T_d&amp;lt;/math&amp;gt;, the system &amp;lt;em&amp;gt;never&amp;lt;/em&amp;gt; equilibrates? This is true only in the limit &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;N &amp;lt;/math&amp;gt; is finite, there is a timescale &amp;lt;math&amp;gt;\tau_{\rm eq}(T, N)&amp;lt;/math&amp;gt; beyond which the system equilibrates. So, the dynamical transition is a transition only for &amp;lt;math&amp;gt;N \to \infty&amp;lt;/math&amp;gt;, and a crossover for finite &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;. However, this equilibration timescale &lt;br /&gt;
in the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin scales as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\tau_{\rm eq}(T&amp;lt; T_d, N) \sim e^{N}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This is again due to the presence of many local minima (metastable states), which are separated by &amp;lt;em&amp;gt;extensive&amp;lt;/em&amp;gt; energy barriers. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;sup&amp;gt;[[#Notes|[*] ]]&amp;lt;/sup&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:89%&amp;quot;&amp;gt;&lt;br /&gt;
: &amp;lt;small&amp;gt;[*]&amp;lt;/small&amp;gt; - This quantity looks similar to the entropy &amp;lt;math&amp;gt; S(\epsilon) &amp;lt;/math&amp;gt; we computed for the REM in Problem 1. However, while the entropy counts all configurations at a given energy density, the complexity &amp;lt;math&amp;gt; \Sigma(\epsilon) &amp;lt;/math&amp;gt; accounts only for the stationary points.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4246</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4246"/>
		<updated>2026-03-15T14:58:13Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Back to dynamics: quenches, and dynamical transitions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this exploiting results from random matrix theory discussed in &amp;lt;code&amp;gt;Tutorial and Exercise 4&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the probability of observing a given empirical distribution has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] \approx e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it converges to &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt;marginally stable&amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
[[File:Landscapes-GDD.png|thumb|right|x200px|Non-rugged vs rugged energy landscapes.]]&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, while for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if I do gradient descent, I get stuck in a local minimum; if I then apply a small perturbation to the configuration, gradient descent brings me back to the local minimum. In this sense, these configurations are trapping. Therefore, if I try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, I expect to be trapped by these local minima and not to reach the ground state easily. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that, starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we extract randomly the initial condition of the dynamics, and then we evolve the configuration with Langevin dynamics (gradient descent + noise):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- \nabla_\perp E(\vec{\sigma})+ \vec{\eta}_\perp(t), \quad \quad \langle \eta_i(t) \eta_j(t&#039;)\rangle= 2 T \delta_{ij} \delta(t-t&#039;)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
In Langevin dynamics, &amp;lt;math&amp;gt;\vec{\eta}_\perp(t)&amp;lt;/math&amp;gt; is a Gaussian vector at each time &amp;lt;math&amp;gt; t &amp;lt;/math&amp;gt;, uncorrelated from the vectors at other times &amp;lt;math&amp;gt; t&#039; \neq t &amp;lt;/math&amp;gt;, with zero average and a constant variance proportional to the temperature. It effectively represents the action of a thermal bath on the system. &lt;br /&gt;
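As a toy check of these noise statistics (our own one-variable example, with a hypothetical harmonic energy &amp;lt;math&amp;gt;E(x)=k x^2/2&amp;lt;/math&amp;gt; standing in for the landscape), an Euler&amp;ndash;Maruyama discretization gives a kick &amp;lt;math&amp;gt;\sqrt{2 T dt}\,\xi&amp;lt;/math&amp;gt; per step, and at long times the variance of &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; approaches the Boltzmann value &amp;lt;math&amp;gt;T/k&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import numpy as np

# Toy check (not the p-spin itself): Langevin dynamics for a single harmonic
# degree of freedom E(x) = k x^2 / 2, discretized with Euler-Maruyama:
#   x -> x - k x dt + sqrt(2 T dt) * xi,  with xi a standard Gaussian.
# The noise term is the discrete counterpart of eta(t) with correlator
# 2 T delta(t - t'); at long times the variance of x approaches T / k.
rng = np.random.default_rng(2)
k, T, dt = 1.0, 0.3, 1e-2
x = 0.0
samples = []
for step in range(200_000):
    x = x - k * x * dt + np.sqrt(2 * T * dt) * rng.normal()
    if step > 50_000:                 # discard the transient
        samples.append(x * x)
var_x = np.mean(samples)
print(var_x)                          # close to T / k = 0.3
```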
 &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Optimization by gradient descent.&#039;&#039;&#039; Suppose that we are interested in finding the configurations of minimal energy, starting from an arbitrary configuration &amp;lt;math&amp;gt;\vec{\sigma}_0&amp;lt;/math&amp;gt;: we can implement a dynamics in which we progressively update the configuration moving towards lower and lower values of the energy, hoping to eventually converge to the ground state(s). The simplest dynamics of this sort is &amp;lt;ins&amp;gt;gradient descent&amp;lt;/ins&amp;gt;,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- \nabla_{\perp} E(\vec{\sigma}) &lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
where &amp;lt;math&amp;gt;\nabla_{\perp} E(\vec{\sigma})&amp;lt;/math&amp;gt; is the gradient of the landscape restricted to the sphere. The dynamics stops when it reaches a  &amp;lt;ins&amp;gt; stationary point &amp;lt;/ins&amp;gt;, a configuration where &amp;lt;math&amp;gt;  \nabla_\perp E(\vec{\sigma})=0&amp;lt;/math&amp;gt;. If the landscape has a convex structure, this will be the ground state; if the energy landscape is very non-convex like in glasses, the end point of this algorithm will be a local minimum at energies much higher than the ground state (see sketch).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Stationary points and complexity.&#039;&#039;&#039; To guess where gradient descent dynamics (or &amp;lt;ins&amp;gt; Langevin dynamics &amp;lt;/ins&amp;gt;) are expected to converge, it is useful to understand the distribution of the stationary points, i.e. the number &amp;lt;math&amp;gt; \mathcal{N}(\epsilon)&amp;lt;/math&amp;gt; of such configurations having a given energy density &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;. In fully-connected models, this quantity has an exponential scaling, &amp;lt;math&amp;gt; \mathcal{N}(\epsilon) \sim \text{exp}\left(N \Sigma(\epsilon) \right)&amp;lt;/math&amp;gt;, where  &amp;lt;math&amp;gt;  \Sigma(\epsilon)&amp;lt;/math&amp;gt; is the landscape’s &amp;lt;ins&amp;gt;complexity&amp;lt;/ins&amp;gt;.&amp;lt;sup&amp;gt;[[#Notes|[*] ]]&amp;lt;/sup&amp;gt; Stationary points can be stable (local minima) or unstable (saddles or local maxima): their stability is encoded in the spectrum of the &amp;lt;ins&amp;gt; Hessian matrix &amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt;{\nabla}_\perp^2 E(\vec{\sigma})&amp;lt;/math&amp;gt;: when all the eigenvalues of the Hessian are positive, the point is a local minimum (and a saddle otherwise).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:89%&amp;quot;&amp;gt;&lt;br /&gt;
: &amp;lt;small&amp;gt;[*]&amp;lt;/small&amp;gt; - This quantity looks similar to the entropy &amp;lt;math&amp;gt; S(\epsilon) &amp;lt;/math&amp;gt; we computed for the REM in Problem 1. However, while the entropy counts all configurations at a given energy density, the complexity &amp;lt;math&amp;gt; \Sigma(\epsilon) &amp;lt;/math&amp;gt; accounts only for the stationary points.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4245</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4245"/>
		<updated>2026-03-15T14:56:18Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Back to dynamics: quenches, and dynamical transitions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this exploiting results from random matrix theory discussed in &amp;lt;code&amp;gt;Tutorial and Exercise 4&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the probability distribution of the empirical eigenvalue density has a large-deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt;, where now &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
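The concentration step can be checked numerically (our own sketch, with an illustrative shift &amp;lt;math&amp;gt;x=3&amp;lt;/math&amp;gt; outside the spectrum): for a single sampled GOE matrix, &amp;lt;math&amp;gt;\frac{1}{N}\log|\det(M - x \mathbb{I})|&amp;lt;/math&amp;gt; is already close to the integral of &amp;lt;math&amp;gt;\log|x-\lambda|&amp;lt;/math&amp;gt; against the limiting semicircle density.&lt;br /&gt;

```python
import numpy as np

# Check of the concentration step: for a GOE matrix with sigma = 1,
# (1/N) log|det(M - x I)| = mean of log|lambda_a - x| should concentrate
# onto the integral of log|x - lambda| against the semicircle density.
rng = np.random.default_rng(3)
N, sigma, x = 800, 1.0, 3.0
A = rng.normal(size=(N, N))
M = (A + A.T) / np.sqrt(2.0) * sigma / np.sqrt(N)
lam = np.linalg.eigvalsh(M)
lhs = np.mean(np.log(np.abs(lam - x)))
# numerical integral against the semicircle density on [-2 sigma, 2 sigma]
l = np.linspace(-2.0 * sigma, 2.0 * sigma, 400_001)
rho = np.sqrt(np.clip(4.0 * sigma**2 - l**2, 0.0, None)) / (2.0 * np.pi * sigma**2)
dl = l[1] - l[0]
rhs = np.sum(rho * np.log(np.abs(x - l))) * dl
print(lhs, rhs)    # the two values should be close
```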
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it converges to &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
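This can be verified numerically (our own sketch): with the normalization &amp;lt;math&amp;gt;P_N(M)\propto \exp(-\frac{N}{4\sigma^2}\text{Tr} M^2)&amp;lt;/math&amp;gt;, the off-diagonal entries have variance &amp;lt;math&amp;gt;\sigma^2/N&amp;lt;/math&amp;gt; and the diagonal ones &amp;lt;math&amp;gt;2\sigma^2/N&amp;lt;/math&amp;gt;, and the sampled spectrum reproduces the semicircle.&lt;br /&gt;

```python
import numpy as np

# Sample a GOE matrix with off-diagonal variance sigma^2/N and diagonal
# variance 2 sigma^2/N (as implied by the stated P_N(M)), and check the
# semicircle law: support [-2 sigma, 2 sigma], second moment sigma^2.
rng = np.random.default_rng(0)
N, sigma = 1000, 1.0
A = rng.normal(size=(N, N))
M = (A + A.T) / np.sqrt(2.0) * sigma / np.sqrt(N)
lam = np.linalg.eigvalsh(M)
print(lam.min(), lam.max())     # edges close to -2 sigma and +2 sigma
print(np.mean(lam**2))          # close to sigma^2
```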
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
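A minimal numpy-only sketch of this numerical exercise (our own code, for &amp;lt;math&amp;gt;p=3&amp;lt;/math&amp;gt;, using the closed form of &amp;lt;math&amp;gt;I_p&amp;lt;/math&amp;gt; quoted above and plain bisection) illustrates what one should find: &amp;lt;math&amp;gt;\Sigma_{\text{a}}&amp;lt;/math&amp;gt; is small and positive at &amp;lt;math&amp;gt;\epsilon_{\text{th}}&amp;lt;/math&amp;gt;, and its zero sits slightly below it.&lt;br /&gt;

```python
import numpy as np

# Evaluate the annealed complexity Sigma_a(eps) for p = 3 from the closed
# form of I_p(eps) quoted in the text, and locate its zero by bisection.
p = 3
eps_th = -2.0 * np.sqrt((p - 1.0) / p)

def I_p(eps):
    u = eps / eps_th
    if eps > eps_th:                      # here u is below 1
        return u**2 - 0.5 - np.log(2.0)
    r = np.sqrt(u**2 - 1.0)
    return u**2 - 0.5 - u * r + np.log(u + r) - np.log(2.0)

def Sigma_a(eps):
    return 0.5 * np.log(4.0 * np.e * (p - 1)) - eps**2 / 2.0 + I_p(eps)

# Sigma_a is positive at eps_th and negative at lower energies: bisect.
a, b = -1.7, eps_th
for _ in range(80):
    m = 0.5 * (a + b)
    if Sigma_a(m) > 0.0:
        b = m
    else:
        a = m
print(Sigma_a(eps_th))   # small and positive: many threshold states
print(0.5 * (a + b))     # zero of the annealed complexity, about -1.66
```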
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in their stability at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called  &amp;lt;em&amp;gt; marginally stable &amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
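The stability argument reduces to tracking the left edge of the shifted semicircle. In the sketch below (our own numerics), we take &amp;lt;math&amp;gt;\sigma=\sqrt{p(p-1)}&amp;lt;/math&amp;gt;, an assumption chosen to be consistent with the quoted value of &amp;lt;math&amp;gt;\epsilon_{\text{th}}&amp;lt;/math&amp;gt;: the left edge &amp;lt;math&amp;gt;-2\sigma - p\epsilon&amp;lt;/math&amp;gt; then changes sign exactly at &amp;lt;math&amp;gt;\epsilon=\epsilon_{\text{th}}&amp;lt;/math&amp;gt;.&lt;br /&gt;

```python
import numpy as np

# The Hessian spectrum at a stationary point of energy density eps is the
# semicircle shifted by -p*eps, with support [-2*sigma - p*eps, 2*sigma - p*eps].
# Assumption: sigma = sqrt(p*(p-1)), consistent with eps_th = -2 sqrt((p-1)/p).
p = 3
sigma = np.sqrt(p * (p - 1.0))
eps_th = -2.0 * np.sqrt((p - 1.0) / p)

def left_edge(eps):
    return -2.0 * sigma - p * eps

print(left_edge(eps_th))         # vanishes: marginal stability at threshold
print(left_edge(eps_th + 0.1))   # negative: unstable directions, saddles
print(left_edge(eps_th - 0.1))   # positive: all eigenvalues positive, minima
```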
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
[[File:Landscapes-GDD.png|thumb|right|x200px|Non-rugged vs rugged energy landscapes.]]&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, while for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if I get stuck in a local minimum and apply a small perturbation to the configuration, gradient descent brings me back to the local minimum. In this sense, these configurations are trapping. Therefore, if I try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, I expect that I will not reach the ground state easily, as I will be trapped by these local minima. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that, starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we draw the initial condition of the dynamics at random, and then we evolve the configuration following Langevin dynamics &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Optimization by gradient descent.&#039;&#039;&#039; Suppose that we are interested in finding the configurations of minimal energy, starting from an arbitrary configuration &amp;lt;math&amp;gt;\vec{\sigma}_0&amp;lt;/math&amp;gt;: we can implement a dynamics in which we progressively update the configuration moving towards lower and lower values of the energy, hoping to eventually converge to the ground state(s). The simplest dynamics of this sort is &amp;lt;ins&amp;gt;gradient descent&amp;lt;/ins&amp;gt;,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- \nabla_{\perp} E(\vec{\sigma}) &lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
where &amp;lt;math&amp;gt;\nabla_{\perp} E(\vec{\sigma})&amp;lt;/math&amp;gt; is the gradient of the landscape restricted to the sphere. The dynamics stops when it reaches a  &amp;lt;ins&amp;gt; stationary point &amp;lt;/ins&amp;gt;, a configuration where &amp;lt;math&amp;gt;  \nabla_\perp E(\vec{\sigma})=0&amp;lt;/math&amp;gt;. If the landscape has a convex structure, this will be the ground state; if the energy landscape is very non-convex like in glasses, the end point of this algorithm will be a local minimum at energies much higher than the ground state (see sketch).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Stationary points and complexity.&#039;&#039;&#039; To guess where gradient descent dynamics (or &amp;lt;ins&amp;gt; Langevin dynamics &amp;lt;/ins&amp;gt;) are expected to converge, it is useful to understand the distribution of the stationary points, i.e. the number &amp;lt;math&amp;gt; \mathcal{N}(\epsilon)&amp;lt;/math&amp;gt; of such configurations having a given energy density &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;. In fully-connected models, this quantity has an exponential scaling, &amp;lt;math&amp;gt; \mathcal{N}(\epsilon) \sim \text{exp}\left(N \Sigma(\epsilon) \right)&amp;lt;/math&amp;gt;, where  &amp;lt;math&amp;gt;  \Sigma(\epsilon)&amp;lt;/math&amp;gt; is the landscape’s &amp;lt;ins&amp;gt;complexity&amp;lt;/ins&amp;gt;.&amp;lt;sup&amp;gt;[[#Notes|[*] ]]&amp;lt;/sup&amp;gt; Stationary points can be stable (local minima) or unstable (saddles or local maxima): their stability is encoded in the spectrum of the &amp;lt;ins&amp;gt; Hessian matrix &amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt;{\nabla}_\perp^2 E(\vec{\sigma})&amp;lt;/math&amp;gt;: when all the eigenvalues of the Hessian are positive, the point is a local minimum (and a saddle otherwise).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:89%&amp;quot;&amp;gt;&lt;br /&gt;
: &amp;lt;small&amp;gt;[*]&amp;lt;/small&amp;gt; - This quantity looks similar to the entropy &amp;lt;math&amp;gt; S(\epsilon) &amp;lt;/math&amp;gt; we computed for the REM in Problem 1. However, while the entropy counts all configurations at a given energy density, the complexity &amp;lt;math&amp;gt; \Sigma(\epsilon) &amp;lt;/math&amp;gt; accounts only for the stationary points.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4244</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4244"/>
		<updated>2026-03-15T14:55:55Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Dynamics, optimization, trapping local minima */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this exploiting results from random matrix theory discussed in &amp;lt;code&amp;gt;Tutorial and Exercise 4&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the probability distribution of the empirical eigenvalue density has a large-deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt;, where now &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it converges to &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in their stability at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called  &amp;lt;em&amp;gt; marginally stable &amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Back to dynamics: quenches, and dynamical transitions ==&lt;br /&gt;
[[File:Landscapes-GDD.png|thumb|right|x200px|Non-rugged vs rugged energy landscapes.]]&lt;br /&gt;
&lt;br /&gt;
Through Problems 5 and 6, we have shown that the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model has exponentially many stationary points, and that there is a transition at the energy density &amp;lt;math&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt;: for &amp;lt;math&amp;gt;\epsilon&amp;gt;\epsilon_{\rm th}&amp;lt;/math&amp;gt; the stationary points are saddles, while for &amp;lt;math&amp;gt;\epsilon\leq \epsilon_{\rm th}&amp;lt;/math&amp;gt; they are local minima. Let us try to deduce something about the system&#039;s dynamics from this.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Gradient descent dynamics.&#039;&#039;&#039; The local minima are dynamically stable: if I get stuck in a local minimum and apply a small perturbation to the configuration, gradient descent brings me back to the local minimum. In this sense, these configurations are trapping. Therefore, if I try to optimize the landscape, i.e. to reach the ground state, with gradient descent dynamics, I expect that I will not reach the ground state easily, as I will be trapped by these local minima. In fact, for the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model it can be shown that, starting from random initial conditions and evolving the configuration with gradient descent (possibly with infinitesimal noise),&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{t \to \infty} \lim_{N \to \infty} \frac{ E(\vec{\sigma}(t))}{N} = \epsilon_{\rm th} \neq \epsilon_{\rm gs}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The system gets stuck at the energy density level where local minima start to appear, and does not reach the deeper local minima. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Quenches.&#039;&#039;&#039; We can generalize this protocol to higher &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;: we draw the initial condition of the dynamics at random, and then we evolve the configuration following Langevin dynamics &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Rugged landscapes.&#039;&#039;&#039; Consider the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model: &amp;lt;math&amp;gt;E(\vec{\sigma})&amp;lt;/math&amp;gt; is an &amp;lt;ins&amp;gt; energy landscape &amp;lt;/ins&amp;gt;. It is a random function on configuration space (the surface &amp;lt;math&amp;gt; \mathcal{S}_N &amp;lt;/math&amp;gt; of the sphere). This landscape has its global minimum(a) at the ground state configuration(s): the energy density of the ground state(s) can be obtained by studying the partition function &amp;lt;math&amp;gt; Z &amp;lt;/math&amp;gt; in the limit &amp;lt;math&amp;gt; \beta \to \infty &amp;lt;/math&amp;gt;. Besides the ground state(s), the energy landscape can have other local minima; fully-connected models of glasses are characterized by an abundance of such local minima: the energy landscape is &amp;lt;ins&amp;gt; rugged&amp;lt;/ins&amp;gt;, see the sketch. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Optimization by gradient descent.&#039;&#039;&#039; Suppose that we are interested in finding the configurations of minimal energy, starting from an arbitrary configuration &amp;lt;math&amp;gt;\vec{\sigma}_0&amp;lt;/math&amp;gt;: we can implement a dynamics in which we progressively update the configuration moving towards lower and lower values of the energy, hoping to eventually converge to the ground state(s). The simplest dynamics of this sort is &amp;lt;ins&amp;gt;gradient descent&amp;lt;/ins&amp;gt;,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- \nabla_{\perp} E(\vec{\sigma}) &lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
where &amp;lt;math&amp;gt;\nabla_{\perp} E(\vec{\sigma})&amp;lt;/math&amp;gt; is the gradient of the landscape restricted to the sphere. The dynamics stops when it reaches a  &amp;lt;ins&amp;gt; stationary point &amp;lt;/ins&amp;gt;, a configuration where &amp;lt;math&amp;gt;  \nabla_\perp E(\vec{\sigma})=0&amp;lt;/math&amp;gt;. If the landscape has a convex structure, this will be the ground state; if the energy landscape is very non-convex like in glasses, the end point of this algorithm will be a local minimum at energies much higher than the ground state (see sketch).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Stationary points and complexity.&#039;&#039;&#039; To guess where gradient descent dynamics (or &amp;lt;ins&amp;gt; Langevin dynamics &amp;lt;/ins&amp;gt;) are expected to converge, it is useful to understand the distribution of the stationary points, i.e. the number &amp;lt;math&amp;gt; \mathcal{N}(\epsilon)&amp;lt;/math&amp;gt; of such configurations having a given energy density &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;. In fully-connected models, this quantity has an exponential scaling, &amp;lt;math&amp;gt; \mathcal{N}(\epsilon) \sim \text{exp}\left(N \Sigma(\epsilon) \right)&amp;lt;/math&amp;gt;, where  &amp;lt;math&amp;gt;  \Sigma(\epsilon)&amp;lt;/math&amp;gt; is the landscape’s &amp;lt;ins&amp;gt;complexity&amp;lt;/ins&amp;gt;.&amp;lt;sup&amp;gt;[[#Notes|[*] ]]&amp;lt;/sup&amp;gt; Stationary points can be stable (local minima) or unstable (saddles or local maxima): their stability is encoded in the spectrum of the &amp;lt;ins&amp;gt; Hessian matrix &amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt;{\nabla}_\perp^2 E(\vec{\sigma})&amp;lt;/math&amp;gt;: when all the eigenvalues of the Hessian are positive, the point is a local minimum (and a saddle otherwise).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:89%&amp;quot;&amp;gt;&lt;br /&gt;
: &amp;lt;small&amp;gt;[*]&amp;lt;/small&amp;gt; - This quantity looks similar to the entropy &amp;lt;math&amp;gt; S(\epsilon) &amp;lt;/math&amp;gt; we computed for the REM in Problem 1. However, while the entropy counts all configurations at a given energy density, the complexity &amp;lt;math&amp;gt; \Sigma(\epsilon) &amp;lt;/math&amp;gt; accounts only for the stationary points.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4243</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4243"/>
		<updated>2026-03-15T14:44:42Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Problems */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this exploiting results from random matrix theory discussed in &amp;lt;code&amp;gt;Tutorial and Exercise 4&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
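A minimal numerical sketch of the ensemble just defined (the values sigma = 1 and N = 1000 are illustrative assumptions, not the ones derived in the problem): sample a GOE matrix with off-diagonal variance sigma^2/N and check that its spectrum concentrates on the semicircle of radius 2 sigma, whose second moment is sigma^2.&lt;br /&gt;

```python
import numpy as np

# Sample a GOE matrix with P_N(M) proportional to exp(-N Tr M^2 / (4 sigma^2)),
# i.e. off-diagonal entries of variance sigma^2/N (diagonal 2 sigma^2/N).
rng = np.random.default_rng(1)
N, sigma = 1000, 1.0
A = rng.normal(size=(N, N))
M = (A + A.T) * (sigma / np.sqrt(2 * N))

lam = np.linalg.eigvalsh(M)
second_moment = np.mean(lam**2)   # semicircle predicts sigma^2
edge = np.max(np.abs(lam))        # semicircle predicts 2*sigma
```

Repeating this for several values of N makes the concentration of the empirical density around the semicircle explicit.&lt;br /&gt;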
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the probability of the empirical eigenvalue distribution has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it equals &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
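The zero of the annealed complexity requested above can be located numerically; a sketch for p = 3, using the closed form of I_p quoted above and a bisection (the bracket [-1.7, eps_th] is an assumption based on the sign change of Sigma_a).&lt;br /&gt;

```python
import math

# Annealed complexity of the spherical p-spin (closed form quoted above),
# here for p = 3; eps_th = -2*sqrt((p-1)/p).
p = 3
eps_th = -2.0 * math.sqrt((p - 1) / p)

def I_p(eps):
    x = eps / eps_th  # x is at least 1 when eps is at or below eps_th
    if x >= 1.0:
        r = math.sqrt(x * x - 1.0)
        return x * x - 0.5 - x * r + math.log(x + r) - math.log(2.0)
    return x * x - 0.5 - math.log(2.0)

def Sigma_a(eps):
    return 0.5 * math.log(4.0 * math.e * (p - 1)) - 0.5 * eps * eps + I_p(eps)

# Bisection: Sigma_a is negative at eps = -1.7 and positive at eps_th,
# so its zero (the annealed lower bound on the ground-state energy
# density) lies in between.
a, b = -1.7, eps_th
for _ in range(60):
    m = 0.5 * (a + b)
    if Sigma_a(m) > 0.0:
        b = m
    else:
        a = m
eps_zero = 0.5 * (a + b)
```

With these definitions the zero comes out close to -1.66, slightly below the threshold value, consistent with the plot requested above.&lt;br /&gt;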
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in their stability at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt;marginally stable&amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
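A one-line check of the threshold mechanism (assuming sigma = sqrt(p(p-1)), the value consistent with eps_th = -2 sqrt((p-1)/p) and the shift by p eps appearing above): the left edge of the shifted semicircle crosses zero exactly at eps_th.&lt;br /&gt;

```python
import math

# Shifted semicircle: at energy density eps the Hessian spectrum occupies
# [-2*sigma - p*eps, 2*sigma - p*eps]; sigma = sqrt(p*(p-1)) is an assumed
# convention, chosen to be consistent with eps_th = -2*sqrt((p-1)/p).
p = 3
sigma = math.sqrt(p * (p - 1))
eps_th = -2.0 * math.sqrt((p - 1) / p)

def left_edge(eps):
    return -2.0 * sigma - p * eps

# Below eps_th the edge is positive (all Hessian eigenvalues positive:
# stable minima); above eps_th it is negative (unstable directions: saddles).
```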
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Dynamics, optimization, trapping local minima ==&lt;br /&gt;
[[File:Landscapes-GDD.png|thumb|right|x200px|Non-rugged vs rugged energy landscapes.]]&lt;br /&gt;
&lt;br /&gt;
So far we have discussed the equilibrium properties of disordered systems, which are encoded in their partition function/free energy. When a system (following Langevin or Monte Carlo dynamics) equilibrates at sufficiently large times, its long-time properties are captured by these equilibrium calculations. In glassy systems, however, the equilibration timescales are extremely large: even at very large timescales the system does not visit equilibrium configurations, but rather metastable states.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Rugged landscapes.&#039;&#039;&#039; Consider the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model: &amp;lt;math&amp;gt;E(\vec{\sigma})&amp;lt;/math&amp;gt; is an &amp;lt;ins&amp;gt;energy landscape&amp;lt;/ins&amp;gt;. It is a random function on configuration space (the surface &amp;lt;math&amp;gt; \mathcal{S}_N &amp;lt;/math&amp;gt; of the sphere). This landscape has its global minimum(a) at the ground state configuration(s): the energy density of the ground state(s) can be obtained by studying the partition function &amp;lt;math&amp;gt; Z &amp;lt;/math&amp;gt; in the limit &amp;lt;math&amp;gt; \beta \to \infty &amp;lt;/math&amp;gt;. Besides the ground state(s), the energy landscape can have other local minima; fully-connected models of glasses are characterized by the fact that there are plenty of these local minima: the energy landscape is &amp;lt;ins&amp;gt;rugged&amp;lt;/ins&amp;gt;, see the sketch. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Optimization by gradient descent.&#039;&#039;&#039; Suppose that we are interested in finding the configurations of minimal energy, starting from an arbitrary configuration &amp;lt;math&amp;gt;\vec{\sigma}_0&amp;lt;/math&amp;gt;: we can implement a dynamics in which we progressively update the configuration moving towards lower and lower values of the energy, hoping to eventually converge to the ground state(s). The simplest dynamics of this sort is &amp;lt;ins&amp;gt;gradient descent&amp;lt;/ins&amp;gt;,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- \nabla_{\perp} E(\vec{\sigma}) &lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
where &amp;lt;math&amp;gt;\nabla_{\perp} E(\vec{\sigma})&amp;lt;/math&amp;gt; is the gradient of the landscape restricted to the sphere. The dynamics stops when it reaches a &amp;lt;ins&amp;gt;stationary point&amp;lt;/ins&amp;gt;, a configuration where &amp;lt;math&amp;gt; \nabla_\perp E(\vec{\sigma})=0&amp;lt;/math&amp;gt;. If the landscape is convex, this will be the ground state; if it is very non-convex, as in glasses, the end point of this algorithm will typically be a local minimum at energies much higher than the ground state (see sketch).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Stationary points and complexity.&#039;&#039;&#039; To guess where gradient descent dynamics (or &amp;lt;ins&amp;gt;Langevin dynamics&amp;lt;/ins&amp;gt;) are expected to converge, it is useful to understand the distribution of the stationary points, i.e. the number &amp;lt;math&amp;gt; \mathcal{N}(\epsilon)&amp;lt;/math&amp;gt; of such configurations having a given energy density &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;. In fully-connected models, this quantity has an exponential scaling, &amp;lt;math&amp;gt; \mathcal{N}(\epsilon) \sim \text{exp}\left(N \Sigma(\epsilon) \right)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; \Sigma(\epsilon)&amp;lt;/math&amp;gt; is the landscape’s &amp;lt;ins&amp;gt;complexity&amp;lt;/ins&amp;gt;.&amp;lt;sup&amp;gt;[[#Notes|[*] ]]&amp;lt;/sup&amp;gt; Stationary points can be stable (local minima) or unstable (saddles or local maxima): their stability is encoded in the spectrum of the &amp;lt;ins&amp;gt;Hessian matrix&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt;{\nabla}_\perp^2 E(\vec{\sigma})&amp;lt;/math&amp;gt;: when all the eigenvalues of the Hessian are positive, the point is a local minimum (otherwise it is a saddle or a local maximum).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
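The gradient-descent dynamics described above can be sketched numerically; in this minimal illustration the system size, the step size and the coupling normalization are illustrative assumptions, not quantities fixed by the text.&lt;br /&gt;

```python
import numpy as np

# Sketch: projected gradient descent on the sphere |s|^2 = N for a
# spherical 3-spin energy E(s) = -(1/6) sum_{ijk} J_ijk s_i s_j s_k,
# with Gaussian couplings of standard deviation sqrt(3)/N (an assumed,
# illustrative normalization).
rng = np.random.default_rng(0)
N = 40
J = rng.normal(0.0, np.sqrt(3.0) / N, size=(N, N, N))

def energy(s):
    return -np.einsum('ijk,i,j,k->', J, s, s, s) / 6.0

def grad(s):
    # exact gradient of energy() for a non-symmetrized coupling tensor
    g = (np.einsum('ijk,j,k->i', J, s, s)
         + np.einsum('jik,j,k->i', J, s, s)
         + np.einsum('jki,j,k->i', J, s, s))
    return -g / 6.0

s = rng.normal(size=N)
s *= np.sqrt(N) / np.linalg.norm(s)     # start on the sphere
e_start = energy(s)
dt = 0.02
for _ in range(3000):
    g = grad(s)
    g_perp = g - (g @ s / N) * s        # project onto the tangent plane
    s = s - dt * g_perp
    s *= np.sqrt(N) / np.linalg.norm(s) # renormalize back to the sphere
e_end = energy(s)
```

Starting from a random configuration, the energy decreases monotonically until the tangential gradient nearly vanishes, i.e. until the trajectory is trapped near a stationary point.&lt;br /&gt;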
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:89%&amp;quot;&amp;gt;&lt;br /&gt;
: &amp;lt;small&amp;gt;[*]&amp;lt;/small&amp;gt; - This quantity looks similar to the entropy &amp;lt;math&amp;gt; S(\epsilon) &amp;lt;/math&amp;gt; we computed for the REM in Problem 1. However, while the entropy counts all configurations at a given energy density, the complexity &amp;lt;math&amp;gt; \Sigma(\epsilon) &amp;lt;/math&amp;gt; accounts only for the stationary points.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4242</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4242"/>
		<updated>2026-03-15T14:44:29Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Problems */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this exploiting results from random matrix theory discussed in &amp;lt;code&amp;gt;Tutorial and Exercise 4&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the probability of the empirical eigenvalue distribution has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it equals &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in their stability at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called &amp;lt;em&amp;gt;marginally stable&amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Dynamics, optimization, trapping local minima ==&lt;br /&gt;
[[File:Landscapes-GDD.png|thumb|right|x200px|Non-rugged vs rugged energy landscapes.]]&lt;br /&gt;
&lt;br /&gt;
So far we have discussed the equilibrium properties of disordered systems, which are encoded in their partition function/free energy. When a system (following Langevin or Monte Carlo dynamics) equilibrates at sufficiently large times, its long-time properties are captured by these equilibrium calculations. In glassy systems, however, the equilibration timescales are extremely large: even at very large timescales the system does not visit equilibrium configurations, but rather metastable states.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Rugged landscapes.&#039;&#039;&#039; Consider the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin model: &amp;lt;math&amp;gt;E(\vec{\sigma})&amp;lt;/math&amp;gt; is an &amp;lt;ins&amp;gt;energy landscape&amp;lt;/ins&amp;gt;. It is a random function on configuration space (the surface &amp;lt;math&amp;gt; \mathcal{S}_N &amp;lt;/math&amp;gt; of the sphere). This landscape has its global minimum(a) at the ground state configuration(s): the energy density of the ground state(s) can be obtained by studying the partition function &amp;lt;math&amp;gt; Z &amp;lt;/math&amp;gt; in the limit &amp;lt;math&amp;gt; \beta \to \infty &amp;lt;/math&amp;gt;. Besides the ground state(s), the energy landscape can have other local minima; fully-connected models of glasses are characterized by the fact that there are plenty of these local minima: the energy landscape is &amp;lt;ins&amp;gt;rugged&amp;lt;/ins&amp;gt;, see the sketch. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Optimization by gradient descent.&#039;&#039;&#039; Suppose that we are interested in finding the configurations of minimal energy, starting from an arbitrary configuration &amp;lt;math&amp;gt;\vec{\sigma}_0&amp;lt;/math&amp;gt;: we can implement a dynamics in which we progressively update the configuration moving towards lower and lower values of the energy, hoping to eventually converge to the ground state(s). The simplest dynamics of this sort is &amp;lt;ins&amp;gt;gradient descent&amp;lt;/ins&amp;gt;,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{d \vec{\sigma}(t)}{dt}=- \nabla_{\perp} E(\vec{\sigma}) &lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
where &amp;lt;math&amp;gt;\nabla_{\perp} E(\vec{\sigma})&amp;lt;/math&amp;gt; is the gradient of the landscape restricted to the sphere. The dynamics stops when it reaches a &amp;lt;ins&amp;gt;stationary point&amp;lt;/ins&amp;gt;, a configuration where &amp;lt;math&amp;gt; \nabla_\perp E(\vec{\sigma})=0&amp;lt;/math&amp;gt;. If the landscape is convex, this will be the ground state; if it is very non-convex, as in glasses, the end point of this algorithm will typically be a local minimum at energies much higher than the ground state (see sketch).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &#039;&#039;&#039;Stationary points and complexity.&#039;&#039;&#039; To guess where gradient descent dynamics (or &amp;lt;ins&amp;gt;Langevin dynamics&amp;lt;/ins&amp;gt;) are expected to converge, it is useful to understand the distribution of the stationary points, i.e. the number &amp;lt;math&amp;gt; \mathcal{N}(\epsilon)&amp;lt;/math&amp;gt; of such configurations having a given energy density &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;. In fully-connected models, this quantity has an exponential scaling, &amp;lt;math&amp;gt; \mathcal{N}(\epsilon) \sim \text{exp}\left(N \Sigma(\epsilon) \right)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; \Sigma(\epsilon)&amp;lt;/math&amp;gt; is the landscape’s &amp;lt;ins&amp;gt;complexity&amp;lt;/ins&amp;gt;.&amp;lt;sup&amp;gt;[[#Notes|[*] ]]&amp;lt;/sup&amp;gt; Stationary points can be stable (local minima) or unstable (saddles or local maxima): their stability is encoded in the spectrum of the &amp;lt;ins&amp;gt;Hessian matrix&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt;{\nabla}_\perp^2 E(\vec{\sigma})&amp;lt;/math&amp;gt;: when all the eigenvalues of the Hessian are positive, the point is a local minimum (otherwise it is a saddle or a local maximum).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:89%&amp;quot;&amp;gt;&lt;br /&gt;
: &amp;lt;small&amp;gt;[*]&amp;lt;/small&amp;gt; - This quantity looks similar to the entropy &amp;lt;math&amp;gt; S(\epsilon) &amp;lt;/math&amp;gt; we computed for the REM in Problem 1. However, while the entropy counts all configurations at a given energy density, the complexity &amp;lt;math&amp;gt; \Sigma(\epsilon) &amp;lt;/math&amp;gt; accounts only for the stationary points.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4241</id>
		<title>T-6</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-6&amp;diff=4241"/>
		<updated>2026-03-15T14:43:16Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Problem 6: the Hessian at the stationary points, and random matrix theory */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal: &amp;lt;/strong&amp;gt; &lt;br /&gt;
Complete the characterisation of the energy landscape of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt; saddle point, random matrix theory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 6: the Hessian at the stationary points, and random matrix theory ===&lt;br /&gt;
&lt;br /&gt;
This is a continuation of Problem 5. To get the complexity of the spherical &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;-spin, it remains to compute the expectation value of the determinant of the Hessian matrix: this is the goal of this problem. We will do this exploiting results from random matrix theory discussed in &amp;lt;code&amp;gt;Tutorial and Exercise 4&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;em&amp;gt; Gaussian Random matrices. &amp;lt;/em&amp;gt; Show that the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;, defined in Problem 5, is a GOE matrix, i.e. a matrix taken from the Gaussian Orthogonal Ensemble, meaning that it is a symmetric matrix with distribution &amp;lt;math&amp;gt; P_N(M)= Z_N^{-1}\text{exp}(-\frac{N}{4 \sigma^2} \text{Tr} M^2) &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; Z_N &amp;lt;/math&amp;gt; is a normalization. What is the value of &amp;lt;math&amp;gt; \sigma^2 &amp;lt;/math&amp;gt;? &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Eigenvalue density and concentration. &amp;lt;/em&amp;gt; Let &amp;lt;math&amp;gt; \lambda_\alpha &amp;lt;/math&amp;gt; be the eigenvalues of the matrix &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt;. Show that the following identity holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}[|\text{det}  \left(M - p \epsilon \mathbb{I} \right)|]=  \mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right], \quad \quad \rho_{N-1}(\lambda)= \frac{1}{N-1} \sum_{\alpha=1}^{N-1} \delta (\lambda- \lambda_\alpha)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\rho_{N-1}(\lambda)&amp;lt;/math&amp;gt; is the empirical eigenvalue distribution. It can be shown that if &amp;lt;math&amp;gt; M &amp;lt;/math&amp;gt; is a GOE matrix, the probability of the empirical eigenvalue distribution has a large deviation form with speed &amp;lt;math&amp;gt; N^2 &amp;lt;/math&amp;gt;, meaning that &amp;lt;math&amp;gt; P_N[\rho] = e^{-N^2 \, g[\rho]} &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; g[\cdot] &amp;lt;/math&amp;gt; is a functional. Using a saddle point argument, show that this implies &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathbb{E}\left[\text{exp} \left((N-1)  \int d \lambda \, \rho_{N-1}(\lambda) \, \log |\lambda - p \epsilon|\right) \right]=\text{exp} \left[N  \int d \lambda \,  \rho_\infty(\lambda+p \epsilon) \, \log |\lambda|+ o(N) \right]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \rho_\infty(\lambda) &amp;lt;/math&amp;gt; is the typical value of the eigenvalue density, which satisfies  &amp;lt;math&amp;gt; g[\rho_\infty]=0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The semicircle and the complexity.&amp;lt;/em&amp;gt; The eigenvalue density of GOE matrices is self-averaging, and it equals &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{N \to \infty}\rho_N (\lambda)=\lim_{N \to \infty} \mathbb{E}[\rho_N(\lambda)]= \rho_\infty(\lambda)= \frac{1}{2 \pi \sigma^2}\sqrt{4 \sigma^2-\lambda^2 }&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt;Check this numerically: generate matrices for various values of &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt;, plot their empirical eigenvalue density and compare with the asymptotic curve. Is the convergence faster in the bulk, or in the edges of the eigenvalue density, where it vanishes?  &amp;lt;/li&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Combining all the results, show that the annealed complexity is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Sigma_{\text{a}}(\epsilon)= \frac{1}{2}\log [4 e (p-1)]- \frac{\epsilon^2}{2}+ I_p(\epsilon), \quad \quad  I_p(\epsilon)= \frac{2}{\pi}\int d x \sqrt{1-\left(x- \frac{\epsilon}{ \epsilon_{\text{th}}}\right)^2}\, \log |x| , \quad \quad  \epsilon_{\text{th}}= -2\sqrt{\frac{p-1}{p}}.&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
The integral &amp;lt;math&amp;gt;  I_p(\epsilon)&amp;lt;/math&amp;gt; can be computed explicitly, and one finds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
 I_p(\epsilon)= &lt;br /&gt;
\begin{cases}&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2} - \frac{\epsilon}{\epsilon_{\text{th}}}\sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1}+ \log \left( \frac{\epsilon}{\epsilon_{\text{th}}}+ \sqrt{\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-1} \right)- \log 2 \quad \text{if} \quad \epsilon \leq \epsilon_{\text{th}}\\&lt;br /&gt;
&amp;amp;\frac{\epsilon^2}{\epsilon_{\text{th}}^2}-\frac{1}{2}-\log 2 \quad \text{if} \quad \epsilon &amp;gt; \epsilon_{\text{th}}&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Plot the annealed complexity, and determine numerically where it vanishes: why is this a lower bound for the ground state energy density?&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
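A sketch of this last step (illustrative code, not part of the exercise): the explicit form of I_p(ε) makes it straightforward to evaluate Σ_a(ε) and locate its zero by bisection between an arbitrary point below the threshold and ε_th itself.

```python
import numpy as np

def annealed_complexity(eps, p):
    """Sigma_a(eps) for the pure p-spin model, using the explicit I_p(eps)."""
    eth = -2.0 * np.sqrt((p - 1) / p)
    x = eps / eth                    # x >= 1 when eps <= eth (both negative)
    if eps <= eth:
        Ip = (x**2 - 0.5 - x * np.sqrt(x**2 - 1)
              + np.log(x + np.sqrt(x**2 - 1)) - np.log(2))
    else:
        Ip = x**2 - 0.5 - np.log(2)
    return 0.5 * np.log(4 * np.e * (p - 1)) - 0.5 * eps**2 + Ip

def gs_lower_bound(p, lo=-2.0, tol=1e-10):
    """Zero of Sigma_a below the threshold: a lower bound for the
    ground-state energy density (Sigma_a(lo) < 0 < Sigma_a(eth))."""
    hi = -2.0 * np.sqrt((p - 1) / p)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if annealed_complexity(mid, p) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For p = 3 the zero sits at ε ≈ -1.657, slightly below ε_th ≈ -1.633.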
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The threshold and the stability.&amp;lt;/em&amp;gt;&lt;br /&gt;
 Sketch &amp;lt;math&amp;gt; \rho_\infty(\lambda+p \epsilon) &amp;lt;/math&amp;gt; for different values of &amp;lt;math&amp;gt; \epsilon &amp;lt;/math&amp;gt;; recalling that the Hessian encodes the stability of the stationary points, show that there is a transition in the stability of the stationary points at the critical value of the energy density &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{\text{th}}= -2\sqrt{(p-1)/p}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When are the critical points stable local minima? When are they saddles? Why are the stationary points at &amp;lt;math&amp;gt; \epsilon= \epsilon_{\text{th}}&amp;lt;/math&amp;gt; called  &amp;lt;em&amp;gt; marginally stable &amp;lt;/em&amp;gt;?&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Metastable states, Hessian matrices, random matrix theory, landscape’s complexity.&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-9&amp;diff=4238</id>
		<title>T-9</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-9&amp;diff=4238"/>
		<updated>2026-03-11T10:38:06Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal:&amp;lt;/strong&amp;gt; the goal of this problem is to determine when the solution of the distributional equations corresponding to localization is unstable, providing an estimate of the mobility edge on the Bethe lattice.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt;  stability analysis, Laplace transforms.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
In this problem we determine for which values of the parameters localization is stable, estimating the critical value of disorder where the transition to a delocalised phase occurs. Recall the results of Problem 8: the real and imaginary parts of the local self-energy satisfy the self-consistent equations:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Gamma_a= \sum_{b \in \partial a} t_{ab}^2 \frac{\Gamma_b + \eta}{(E- W\, V_b - R_b)^2+ (\Gamma_b +\eta)^2}, \quad \quad R_a =  \sum_{b \in \partial a} t_{ab}^2 \frac{E-  W\, V_b - R_b}{(E-  W\, V_b - R_b)^2+ (\Gamma_b +\eta)^2}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These equations admit the solution &amp;lt;math&amp;gt; \Gamma_a=\Gamma_b=0&amp;lt;/math&amp;gt; when &amp;lt;math&amp;gt;\eta=0 &amp;lt;/math&amp;gt;, which corresponds to localization. We now determine when this solution becomes unstable. &lt;br /&gt;
&lt;br /&gt;
=== Problem 9: an estimate of the mobility edge ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Imaginary approximation and distributional equation. &amp;lt;/em&amp;gt; We consider the equations for &amp;lt;math&amp;gt; \Gamma_a &amp;lt;/math&amp;gt; and neglect the terms &amp;lt;math&amp;gt; R_b &amp;lt;/math&amp;gt; in the denominators, which couple the equations to those for the real parts of the self-energies (“imaginary” approximation). Moreover, we assume we are in the localized phase, where &amp;lt;math&amp;gt; \Gamma_a \sim \eta \ll 1 &amp;lt;/math&amp;gt;. Finally, we set &amp;lt;math&amp;gt; t_{ab} \equiv t &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; E=0 &amp;lt;/math&amp;gt; for simplicity. Show that under these assumptions the probability density of the imaginary part, &amp;lt;math&amp;gt; P_\Gamma(\Gamma)&amp;lt;/math&amp;gt;, satisfies, for &amp;lt;math&amp;gt; \tau=t/W&amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
P_\Gamma(\Gamma)= \int \prod_{b=1}^k dV_b\,p(V_b)\int  \prod_{b=1}^k d\Gamma_b \, P_\Gamma(\Gamma_b) \delta \left(\Gamma - \tau^2 \sum_{b=1}^{k} \frac{\Gamma_b + \eta}{ V_b^2}  \right) &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Show that the Laplace transform of this distribution, &amp;lt;math&amp;gt; \Phi(s)=\int_0^\infty d\Gamma e^{-s \Gamma} P_\Gamma(\Gamma) &amp;lt;/math&amp;gt;, satisfies&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Phi(s)= \left[ \int dV\, p(V) e^{-\frac{s \tau^2 \eta}{V^2}} \Phi \left(\frac{s \tau^2 }{ V^2} \right)  \right]^k&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The stability analysis. &amp;lt;/em&amp;gt; We now assume we are in the localized phase, where for &amp;lt;math&amp;gt; \eta \to 0 &amp;lt;/math&amp;gt; the distribution &amp;lt;math&amp;gt; P_\Gamma(\Gamma) \to \delta (\Gamma)&amp;lt;/math&amp;gt;. We wish to check the stability of this assumption. This is done by controlling the tails of the distribution &amp;lt;math&amp;gt; P_\Gamma(\Gamma)&amp;lt;/math&amp;gt; for finite &amp;lt;math&amp;gt; \eta &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &lt;br /&gt;
For finite &amp;lt;math&amp;gt; \eta&amp;lt;/math&amp;gt;, we expect that typically  &amp;lt;math&amp;gt; \Gamma_a \sim \eta \ll 1 &amp;lt;/math&amp;gt;, and thus &amp;lt;math&amp;gt; P_\Gamma(\Gamma)&amp;lt;/math&amp;gt; should have a peak at this scale; however, we also expect &amp;lt;sup&amp;gt;[[#Notes|[*] ]]&amp;lt;/sup&amp;gt; some power-law decay &amp;lt;math&amp;gt;P_\Gamma(\Gamma)\sim \Gamma^{-\alpha} &amp;lt;/math&amp;gt; for large &amp;lt;math&amp;gt; \Gamma &amp;lt;/math&amp;gt;. &lt;br /&gt;
&amp;lt;!--and &amp;lt;math&amp;gt; \alpha \in (1, 3/2]&amp;lt;/math&amp;gt;. These tails are contributed by the events  in which one of the descendants &amp;lt;math&amp;gt; b &amp;lt;/math&amp;gt; is such that &amp;lt;math&amp;gt; \epsilon_b &amp;lt;/math&amp;gt; is very small, in which case &amp;lt;math&amp;gt; \Gamma_a \sim 1/ \epsilon_b^2 \gg 1 &amp;lt;/math&amp;gt;. &lt;br /&gt;
Show that if &amp;lt;math&amp;gt; \Gamma \sim 1/ \epsilon^2 &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;p(\epsilon)&amp;lt;/math&amp;gt; is not gapped around zero, then &amp;lt;math&amp;gt;P_\Gamma(\Gamma) \sim \Gamma^{-3/2}&amp;lt;/math&amp;gt;, i.e. the distribution has tails contributed by these events in which the local random potential happen to be very small.  &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Assume more generally that  &amp;lt;math&amp;gt;P_\Gamma(\Gamma) \sim \Gamma^{-\alpha}&amp;lt;/math&amp;gt; for large &amp;lt;math&amp;gt; \Gamma &amp;lt;/math&amp;gt; and--&amp;gt;&lt;br /&gt;
Show, using a dimensional analysis argument,  that this corresponds to a non-analytic behaviour of the Laplace transform, &amp;lt;math&amp;gt; \Phi(s) \sim 1- A |s|^\beta &amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt; s &amp;lt;/math&amp;gt; small, with &amp;lt;math&amp;gt; \beta= \alpha-1 &amp;lt;/math&amp;gt;.  &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;  Show that the equation for &amp;lt;math&amp;gt; \Phi(s) &amp;lt;/math&amp;gt; gives for &amp;lt;math&amp;gt; s &amp;lt;/math&amp;gt; small &amp;lt;math&amp;gt;1- A s^\beta =1- A k \int dV \, p(V) \frac{s^\beta \tau^{2 \beta}}{V^{2 \beta}}+ o(s^\beta) &amp;lt;/math&amp;gt;, and therefore this is consistent provided that there exists a &amp;lt;math&amp;gt; \beta &amp;lt;/math&amp;gt; solving&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
1=k \int dV \, p(V) \left(\frac{\tau}{|V|}\right)^{2 \beta} \equiv k I(\beta).&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Bethe I(beta).png|thumb|left|x140px|Behaviour of the integral &amp;lt;math&amp;gt; I(\beta)&amp;lt;/math&amp;gt; in the case of uniformly distributed disorder, for &amp;lt;math&amp;gt;W&amp;lt; W_c &amp;lt;/math&amp;gt;.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The critical disorder. &amp;lt;/em&amp;gt; Consider now local fields &amp;lt;math&amp;gt; V_x &amp;lt;/math&amp;gt; taken from a uniform distribution in &amp;lt;math&amp;gt; [-1/2, 1/2] &amp;lt;/math&amp;gt;.  Compute &amp;lt;math&amp;gt; I(\beta) &amp;lt;/math&amp;gt; and show that it is non-monotonic, with a local minimum &amp;lt;math&amp;gt; \beta^* &amp;lt;/math&amp;gt; in the interval &amp;lt;math&amp;gt; [0, 1/2]&amp;lt;/math&amp;gt;. Show that  &amp;lt;math&amp;gt; I(\beta^*) &amp;lt;/math&amp;gt; increases as the disorder is made weaker, and thus the transition to delocalisation occurs at the critical value of disorder when  &amp;lt;math&amp;gt; I(\beta^*)=k^{-1} &amp;lt;/math&amp;gt;. Show that this gives the following estimate for the critical disorder &amp;lt;math&amp;gt;(W/t)_c=1/\tau_c &amp;lt;/math&amp;gt; at which the transition to delocalisation occurs:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\frac{1}{\tau_c} =  \, 2 k e \log \left( \frac{1}{2 \tau_c}\right) \sim    \, 2  e \, k \log \left(k\right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Why does the critical disorder increase with &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt;?&lt;br /&gt;
  &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
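This last step admits a quick numerical cross-check (a sketch assuming uniform disorder; the closed form I(β) = (2τ)^{2β}/(1-2β) follows by direct integration and is valid for 0 &lt; β &lt; 1/2):

```python
import numpy as np

def I(beta, tau):
    """I(beta) for V uniform on [-1/2, 1/2]: (2 tau)^{2 beta} / (1 - 2 beta)."""
    return (2.0 * tau)**(2.0 * beta) / (1.0 - 2.0 * beta)

def I_min(tau):
    """Minimiser beta* = (1 - 1/log(1/(2 tau)))/2 and minimum value
    I(beta*) = 2 e tau log(1/(2 tau)), valid for 2*tau < 1/e."""
    L = np.log(1.0 / (2.0 * tau))
    return 0.5 * (1.0 - 1.0 / L), 2.0 * np.e * tau * L

def tau_c(k, tol=1e-12):
    """Critical tau where k * I(beta*) = 1, by bisection
    (k * I_min is increasing in tau on the chosen bracket)."""
    lo, hi = 1e-9, 0.15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if k * I_min(mid)[1] < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For k = 2 this gives τ_c ≈ 0.034, i.e. (W/t)_c ≈ 29, consistent with 1/τ_c = 2ek log(1/(2τ_c)).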
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:89%&amp;quot;&amp;gt;&lt;br /&gt;
: &amp;lt;small&amp;gt;[*]&amp;lt;/small&amp;gt; - Why do we expect power law tails? Recall that in first approximation  &amp;lt;math&amp;gt; \Gamma \sim 1/|V|^2&amp;lt;/math&amp;gt;. If &amp;lt;math&amp;gt; V&amp;lt;/math&amp;gt; is uniformly distributed, then &amp;lt;math&amp;gt;P_\Gamma(\Gamma) \sim \Gamma^{-3/2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
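The footnote's claim is easy to verify by direct sampling (a sketch; τ and η are set to 1, and only the single-descendant contribution Γ ≈ 1/V² is kept):

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.uniform(-0.5, 0.5, size=10**6)   # uniform local disorder
Gamma = 1.0 / V**2                        # leading contribution, tau = eta = 1

def survival(g):
    """Empirical P(Gamma > g); analytically P(|V| < 1/sqrt(g)) = 2/sqrt(g)
    for g >= 4, i.e. a density tail P_Gamma(Gamma) ~ Gamma^{-3/2}."""
    return float(np.mean(Gamma > g))
```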
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
Linearization and stability analysis, critical disorder, mobility edge.&lt;br /&gt;
&lt;br /&gt;
== To know more ==&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-8&amp;diff=4237</id>
		<title>T-8</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-8&amp;diff=4237"/>
		<updated>2026-03-11T10:37:25Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* To know more */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal:&amp;lt;/strong&amp;gt;  the goal of this set of problems is to derive a criterion for localization on a peculiar lattice, the Bethe lattice. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt;  Green functions, recursion relations, cavity method.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A criterion for localization: vanishing decay rate==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; Anderson model and return probability. &amp;lt;/strong&amp;gt; Consider a lattice with &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt; sites &amp;lt;math&amp;gt;x &amp;lt;/math&amp;gt;. We call &amp;lt;math&amp;gt; |x \rangle &amp;lt;/math&amp;gt; the wave function completely localised in site &amp;lt;math&amp;gt; x &amp;lt;/math&amp;gt;. The Anderson model has Hamiltonian:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
H= W \sum_{x} V_x |x \rangle \langle x| - \sum_{&amp;lt;x, y&amp;gt;} t_{xy} \left(|x \rangle \langle y|+ |y \rangle \langle x| \right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where the local fields &amp;lt;math&amp;gt; V_x &amp;lt;/math&amp;gt; are random variables, independent and distributed according to some distribution  &amp;lt;math&amp;gt; p(V)&amp;lt;/math&amp;gt;. Usually, the hopping amplitude &amp;lt;math&amp;gt; t_{xy}=t&amp;lt;/math&amp;gt; is chosen to be uniform. &lt;br /&gt;
&lt;br /&gt;
To see whether localization occurs, consider a quantum particle initialised on the site &amp;lt;math&amp;gt;x_0 &amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt; t=0 &amp;lt;/math&amp;gt;. The &amp;lt;ins&amp;gt;return probability amplitude &amp;lt;/ins&amp;gt;, i.e. the probability amplitude of finding the particle on the same site at a later time, is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathcal{A}_{x_0}(t)= \theta(t) \langle x_0| e^{-i t H} | x_0 \rangle.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When the system is not localized, the wave function of the particle spreads in time, and the return probability decays. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; Green functions and self-energies. &amp;lt;/strong&amp;gt; We introduce two functions on the complex plane: the &amp;lt;ins&amp;gt;Green functions&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt; G_{xy}(z) &amp;lt;/math&amp;gt; and the &amp;lt;ins&amp;gt;local self-energies&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt; \sigma_x(z)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;z=E+ i \eta, \quad \quad \eta&amp;gt;0&amp;lt;/math&amp;gt; belongs to the upper half of the complex plane. The functions are defined by&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G_{xy}(z)= \langle x| \frac{1}{z-H}| y \rangle , \quad \quad G_{x x}(z)= \langle x| \frac{1}{z-H}| x\rangle  = \frac{1}{z- V_x-\sigma_x(z)}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
They are random functions, because the Hamiltonian contains randomness. The Green functions and the self-energies encode properties of the spectrum of the Hamiltonian&amp;lt;sup&amp;gt;[[#Notes|[*] ]]&amp;lt;/sup&amp;gt;, and they are related to the return probability, which can be written as (Laplace transform):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathcal{A}_{x_0}(t)= \theta(t) \langle x_0| e^{-i t H} | x_0 \rangle =\lim_{\eta \to 0} \int \frac{ dz}{2 \pi i}e^{-i t z} G_{x_0 x_0}(z)=\lim_{\eta \to 0} \int \frac{ dz}{2 \pi i}\frac{e^{-i t z}}{z- V_{x_0}-\sigma_{x_0}(z)}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; Anderson&#039;s criterion for localization. &amp;lt;/strong&amp;gt; If the self-energy has a non-zero imaginary part (when &amp;lt;math&amp;gt; N \to \infty &amp;lt;/math&amp;gt;):&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\sigma_{x_0}(z)= R_{x_0}(z)-i \Gamma_{x_0}(z),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
then one can show (How? [[Media:2025 Self energies, decay rates and all that.pdf| See here for some notes]] ) that &amp;lt;math&amp;gt;\mathcal{A}_{x_0}(t)&amp;lt;/math&amp;gt; decays exponentially,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathcal{A}_{x_0}(t) \sim A(t)e^{-\gamma t}+ B(t), \quad \quad \quad \gamma= \Gamma_{x_0}(V_{x_0}) + O(T^4),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
and the system is not localized: the probability of finding it, at &amp;lt;math&amp;gt; t \gg 1&amp;lt;/math&amp;gt;, in the same configuration it was in at &amp;lt;math&amp;gt; t=0 &amp;lt;/math&amp;gt; decays fast. For localization to occur, it must be that the imaginary part of &amp;lt;math&amp;gt; \sigma_{x_0}(E+ i\eta)&amp;lt;/math&amp;gt; goes to zero when &amp;lt;math&amp;gt; \eta \to 0&amp;lt;/math&amp;gt;. Given the randomness, this criterion should however be formulated probabilistically:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{\eta \to 0} \lim_{N \to \infty} \mathbb{P}\left(- \Im \sigma_{x_0}(E+i \eta)&amp;gt;0 \right)=0 \quad  \Longrightarrow	 \quad \text{Localization}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Showing that at strong enough disorder this condition is satisfied is the core of Anderson’s 1958 work.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; A dynamical transition. &amp;lt;/strong&amp;gt; &lt;br /&gt;
 Notice that in this criterion, the probability plays the role of an order parameter (like the magnetization in ferromagnets, or the average overlap distribution in spin glasses), and the &amp;lt;ins&amp;gt; imaginary part&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt; \eta &amp;lt;/math&amp;gt; plays the role of a symmetry-breaking field (like the magnetic field in the ferromagnet, or the coupling between replicas in spin glasses). However, the localization transition has nothing to do with equilibrium, i.e., it is not related to a change of structure of the Gibbs-Boltzmann measure; rather, it is a dynamical transition (like depinning!). Pushing the analogy with equilibrium phase transitions, one can say that the localised phase corresponds to the disordered phase (the one in which symmetry is not broken, like the paramagnetic phase). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:89%&amp;quot;&amp;gt;&lt;br /&gt;
: &amp;lt;small&amp;gt;[*]&amp;lt;/small&amp;gt; - For example, the local density of eigenvalues &amp;lt;math&amp;gt;\nu_{x, N}(E)&amp;lt;/math&amp;gt; for a Hamiltonian on a lattice of size &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt; is given by&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\nu_{x,N}(E)=-\frac{1}{\pi}\lim_{\eta \to 0} \Im  G_{xx}(E+ i \eta) = \sum_{\alpha=1}^N |\langle E_\alpha| x\rangle|^2 \delta(E-E_\alpha),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; E_\alpha &amp;lt;/math&amp;gt; are the eigenvalues of the full Hamiltonian &amp;lt;math&amp;gt; H &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; |E_\alpha \rangle &amp;lt;/math&amp;gt; the corresponding eigenstates. When the kinetic (hopping) term in the Hamiltonian vanishes, the local self-energies vanish. They encode how much the energy levels &amp;lt;math&amp;gt; W V_x &amp;lt;/math&amp;gt; (which are the eigenvalues when &amp;lt;math&amp;gt; t=0&amp;lt;/math&amp;gt;) are shifted by the presence of the kinetic (hopping) term.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; Green functions and self-energies. &amp;lt;/strong&amp;gt; Given a lattice with &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt; sites &amp;lt;math&amp;gt;a &amp;lt;/math&amp;gt;, we call &amp;lt;math&amp;gt; |a \rangle &amp;lt;/math&amp;gt; the wave function completely localised in site &amp;lt;math&amp;gt; a &amp;lt;/math&amp;gt;. The Anderson model has Hamiltonian&lt;br /&gt;
&amp;lt;center&amp;gt; &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
H= W \sum_{a} \epsilon_a |a \rangle \langle a| - \sum_{&amp;lt;a, b&amp;gt;} V_{ab} \left(|a \rangle \langle b|+ |b \rangle \langle a| \right)\equiv H_0 + V&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
where the local fields &amp;lt;math&amp;gt; \epsilon_a &amp;lt;/math&amp;gt; are random variables, independent and distributed according to some distribution  &amp;lt;math&amp;gt; p(\epsilon)&amp;lt;/math&amp;gt;.&lt;br /&gt;
We introduce the &amp;lt;ins&amp;gt;Green functions&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt; G_{ab}(z) &amp;lt;/math&amp;gt; and the &amp;lt;ins&amp;gt;local self-energies&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt; \sigma_a(z):&amp;lt;/math&amp;gt; these are functions of a complex variable belonging to the upper half of the complex plane, &lt;br /&gt;
&amp;lt;center&amp;gt; &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
z=E+ i \eta, \quad \quad \eta&amp;gt;0&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
and are defined by [NOTA SU STILTJIES]&lt;br /&gt;
&amp;lt;center&amp;gt; &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
G_{ab}(z)= \langle a| \frac{1}{z-H}| b \rangle , \quad \quad G_{aa}(z)= \langle a| \frac{1}{z-H}| a\rangle  = \frac{1}{z- \epsilon_a-\sigma_a(z)}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
When the kinetic term &amp;lt;math&amp;gt;V &amp;lt;/math&amp;gt; in the Hamiltonian vanishes, the local self-energies vanish. They are random functions, because the Hamiltonian contains randomness.&lt;br /&gt;
&amp;lt;strong&amp;gt; Self-energies and return probabilities. &amp;lt;/strong&amp;gt; The local self-energies encode some information on the system’s dynamics, and thus on whether localization occurs. Consider a quantum particle initialised on the site &amp;lt;math&amp;gt;a &amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt; t=0 &amp;lt;/math&amp;gt;. The &amp;lt;ins&amp;gt;return probability amplitude &amp;lt;/ins&amp;gt;, i.e. the probability amplitude to find the particle on the same site at later time, is&lt;br /&gt;
&amp;lt;center&amp;gt; &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{A}_a(t)= \theta(t) \langle a| e^{-i t H} | a \rangle =\lim_{\eta \to 0} \int \frac{ dz}{2 \pi i}e^{-i t z} G_{aa}(z)=\lim_{\eta \to 0} \int \frac{ dz}{2 \pi i}e^{-i t z} G_{aa}(z)=\lim_{\eta \to 0} \int \frac{ dz}{2 \pi i}\frac{e^{-i t z}}{z- \epsilon_a-\sigma_a(z)}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 8: Bethe lattice, cavity method &amp;amp; recursion relations ===&lt;br /&gt;
[[File:BetheLattice.png|thumb|right|x160px|A Bethe lattice with &amp;lt;math&amp;gt; k=2 &amp;lt;/math&amp;gt;.]]&lt;br /&gt;
The Bethe lattice is a lattice with a regular tree structure: each node has a fixed number &amp;lt;math&amp;gt; k+1&amp;lt;/math&amp;gt; of neighbours, where &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt; is the branching number, and there are no loops (see sketch). In these problems we consider the Anderson model on such a lattice. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Green function identities. &amp;lt;/em&amp;gt; Consider a Hamiltonian split into two parts, &amp;lt;math&amp;gt; H= H_0 + H_1 &amp;lt;/math&amp;gt;. Show that the following general relation for the Green functions holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G=G^0+ G^0 H_1 G, \quad \quad G^0 =\frac{1}{z-H_0}, \quad \quad G =\frac{1}{z-H}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Cavity equations. &amp;lt;/em&amp;gt; We now apply this to a specific example: we consider a Bethe lattice, and choose one site 0 as the root. We then choose &amp;lt;math&amp;gt; H_1 &amp;lt;/math&amp;gt; to be the kinetic terms connecting the root to its &amp;lt;math&amp;gt; k+1 &amp;lt;/math&amp;gt; neighbors &amp;lt;math&amp;gt; a_i &amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
H_1= -\sum_{i=1}^{k+1} t_{0 a_i} \left( |a_i \rangle \langle 0|+ |0 \rangle \langle a_i|\right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
For all the &amp;lt;math&amp;gt; a_i &amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt; i=1, \cdots, k+1 &amp;lt;/math&amp;gt; we introduce the notation &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G^{\text{cav}}_{a_i} \equiv G^0_{a_i a_i}, \quad \quad \sigma^{\text{cav}}_{a_i} \equiv \sigma^0_{a_i},&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
where &amp;lt;math&amp;gt;  \sigma^0 &amp;lt;/math&amp;gt; is the self-energy associated with &amp;lt;math&amp;gt; G^0 &amp;lt;/math&amp;gt;. Show that, due to the geometry of the lattice, with this choice of &amp;lt;math&amp;gt; H_1 &amp;lt;/math&amp;gt; the Hamiltonian &amp;lt;math&amp;gt; H_0 &amp;lt;/math&amp;gt; is decoupled and &amp;lt;math&amp;gt; G^{\text{cav}}_{a_i} &amp;lt;/math&amp;gt; is the local Green function that one would have obtained by removing the root 0 from the lattice, i.e., creating a “cavity” (hence the suffix). Moreover, using the Green function identities iterated twice, show that &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G_{00}(z)= \frac{1}{z-W \, V_0 - \sum_{i=1}^{k+1} t^2_{0 a_i}G^{\text{cav}}_{a_i}(z)}  &lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Iterating this argument, show that if &amp;lt;math&amp;gt; \partial a_i &amp;lt;/math&amp;gt; denotes the collection of “descendants” of  &amp;lt;math&amp;gt; a_i&amp;lt;/math&amp;gt;, i.e. sites that are nearest neighbours of &amp;lt;math&amp;gt; a_i &amp;lt;/math&amp;gt; &amp;lt;em&amp;gt; except&amp;lt;/em&amp;gt; the root, then&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G^{\text{cav}}_{a_i}(z)=   \frac{1}{z-W \,V_{a_i} - \sum_{b \in \partial a_i}t^2_{a_i b}G^{\text{cav}}_{b}(z)}, \quad \quad \sigma^{\text{cav}}_{a_i}(z)=\sum_{b \in \partial a_i}t^2_{a_i b}G^{\text{cav}}_{b}(z)=\sum_{b \in \partial a_i} \frac{t^2_{a_i b}}{z- W \, V_b - \sigma^{\text{cav}}_{b}(z)}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Equations for the distribution. &amp;lt;/em&amp;gt;  Justify why the cavity functions appearing in the denominators in the last equations above are independent and identically distributed random variables, and therefore the cavity equations can be interpreted as self-consistent equations for the distribution of the cavity functions.&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The “localized” solution. &amp;lt;/em&amp;gt; We set &amp;lt;math&amp;gt; z=E+ i \eta &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \sigma^{\text{cav}}_{a}(z)= R_a(z) -i \Gamma_a(z)&amp;lt;/math&amp;gt;. Show that the cavity equation for the self-energies is equivalent to the following pair of coupled equations:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Gamma_a= \sum_{b \in \partial a} t_{ab}^2 \frac{\Gamma_b + \eta}{(E- W \, V_b - R_b)^2+ (\Gamma_b +\eta)^2}, \quad \quad R_a =  \sum_{b \in \partial a} t_{ab}^2 \frac{E- W \, V_b - R_b}{(E- W \, V_b - R_b)^2+ (\Gamma_b +\eta)^2}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Justify why the solution corresponding to localization, &amp;lt;math&amp;gt; \Gamma_a=0 &amp;lt;/math&amp;gt;, is always a solution when &amp;lt;math&amp;gt; \eta \to 0 &amp;lt;/math&amp;gt;; moreover, in the localized phase when &amp;lt;math&amp;gt; \eta &amp;lt;/math&amp;gt; is finite but small one has &amp;lt;math&amp;gt; \Gamma_a \sim O(\eta) &amp;lt;/math&amp;gt;. How can one argue that this solution has to be discarded, i.e. that delocalisation occurs? &lt;br /&gt;
&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
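The pair of coupled equations in the last step can be iterated numerically with a population-dynamics scheme (a hypothetical minimal sketch, assuming uniform disorder on [-1/2, 1/2], fixed k and t, and a pool of samples representing the joint distribution of (R_a, Γ_a)):

```python
import numpy as np

def population_dynamics(W, k=2, t=1.0, E=0.0, eta=1e-8,
                        pool=10**4, sweeps=100, seed=0):
    """Iterate the cavity equations for (R_a, Gamma_a) on a pool of
    samples representing their joint distribution; the disorder V is
    uniform on [-1/2, 1/2].  Returns the pool of Gamma values."""
    rng = np.random.default_rng(seed)
    R = np.zeros(pool)
    G = np.full(pool, eta)
    for _ in range(sweeps):
        idx = rng.integers(pool, size=(pool, k))       # k random "descendants"
        V = rng.uniform(-0.5, 0.5, size=(pool, k))
        a = E - W * V - R[idx]
        d = a**2 + (G[idx] + eta)**2
        # update the whole pool at once from the previous generation
        G = t**2 * np.sum((G[idx] + eta) / d, axis=1)
        R = t**2 * np.sum(a / d, axis=1)
    return G
```

In the localized phase (W well above the critical disorder) the typical Γ in the pool stays of order η; tracking the median of the pool as W decreases gives a rough numerical picture of the instability analysed in Problem 9.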
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Green functions, self-energies, return probability amplitude, decay rates, trees and cavity method, the criterion for localization.&lt;br /&gt;
&lt;br /&gt;
== To know more ==&lt;br /&gt;
* The Anderson model was formulated by P. W. Anderson in 1958, in the paper &amp;lt;em&amp;gt; Absence of diffusion in certain random lattices&amp;lt;/em&amp;gt;, Phys. Rev. 109, 1492.&lt;br /&gt;
* The Bethe lattice version is discussed in the later paper Abou-Chacra, Thouless, Anderson. &amp;lt;em&amp;gt; A selfconsistent theory of localization&amp;lt;/em&amp;gt; . Journal of Physics C: Solid State Physics 6.10 (1973)&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-8&amp;diff=4236</id>
		<title>T-8</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=T-8&amp;diff=4236"/>
		<updated>2026-03-11T10:37:04Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Goal:&amp;lt;/strong&amp;gt;  the goal of this set of problems is to derive a criterion for localization on a peculiar lattice, the Bethe lattice. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;strong&amp;gt;Techniques: &amp;lt;/strong&amp;gt;  Green functions, recursion relations, cavity method.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A criterion for localization: vanishing decay rate==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; Anderson model and return probability. &amp;lt;/strong&amp;gt; Consider a lattice with &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt; sites &amp;lt;math&amp;gt;x &amp;lt;/math&amp;gt;. We call &amp;lt;math&amp;gt; |x \rangle &amp;lt;/math&amp;gt; the wave function completely localised in site &amp;lt;math&amp;gt; x &amp;lt;/math&amp;gt;. The Anderson model has Hamiltonian:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
H= W \sum_{x} V_x |x \rangle \langle x| - \sum_{&amp;lt;x, y&amp;gt;} t_{xy} \left(|x \rangle \langle y|+ |y \rangle \langle x| \right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where the local fields &amp;lt;math&amp;gt; V_x &amp;lt;/math&amp;gt; are random variables, independent and distributed according to some distribution  &amp;lt;math&amp;gt; p(V)&amp;lt;/math&amp;gt;. Usually, the hopping amplitude &amp;lt;math&amp;gt; t_{xy}=t&amp;lt;/math&amp;gt; is chosen to be uniform. &lt;br /&gt;
&lt;br /&gt;
To see whether localization occurs, consider a quantum particle initialised on the site &amp;lt;math&amp;gt;x_0 &amp;lt;/math&amp;gt; at &amp;lt;math&amp;gt; t=0 &amp;lt;/math&amp;gt;. The &amp;lt;ins&amp;gt;return probability amplitude &amp;lt;/ins&amp;gt;, i.e. the probability amplitude of finding the particle on the same site at a later time, is&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathcal{A}_{x_0}(t)= \theta(t) \langle x_0| e^{-i t H} | x_0 \rangle.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
When the system is not localized, the wave function of the particle spreads in time, and the return probability decays. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; Green functions and self-energies. &amp;lt;/strong&amp;gt; We introduce two functions on the complex plane: the &amp;lt;ins&amp;gt;Green functions&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt; G_{xy}(z) &amp;lt;/math&amp;gt; and the &amp;lt;ins&amp;gt;local self-energies&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt; \sigma_x(z)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;z=E+ i \eta, \quad \quad \eta&amp;gt;0&amp;lt;/math&amp;gt; belongs to the upper half of the complex plane. The functions are defined by&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G_{xy}(z)= \langle x| \frac{1}{z-H}| y \rangle , \quad \quad G_{x x}(z)= \langle x| \frac{1}{z-H}| x\rangle  = \frac{1}{z- V_x-\sigma_x(z)}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
They are random functions, because the Hamiltonian contains randomness. The Green functions and the self-energies encode properties of the spectrum of the Hamiltonian&amp;lt;sup&amp;gt;[[#Notes|[*] ]]&amp;lt;/sup&amp;gt;, and they are related to the return probability amplitude, which can be recovered from the local Green function by an inverse Laplace transform:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathcal{A}_{x_0}(t)= \theta(t) \langle x_0| e^{-i t H} | x_0 \rangle =\lim_{\eta \to 0} \int \frac{ dz}{2 \pi i}e^{-i t z} G_{x_0 x_0}(z)=\lim_{\eta \to 0} \int \frac{ dz}{2 \pi i}\frac{e^{-i t z}}{z- V_{x_0}-\sigma_{x_0}(z)}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
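As an illustration (parameters again arbitrary, and with the disorder strength &amp;lt;math&amp;gt; W &amp;lt;/math&amp;gt; written out explicitly as in the Hamiltonian above), one can obtain the Green function by direct inversion of &amp;lt;math&amp;gt; z-H &amp;lt;/math&amp;gt; and read off the local self-energy from the diagonal relation:

```python
# Sketch: local Green function by inversion, and local self-energy from
# G_xx(z) = 1/(z - W V_x - sigma_x(z)).  Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
L, W, E, eta = 100, 2.0, 0.0, 1e-2

V = rng.uniform(-0.5, 0.5, size=L)
H = np.diag(W * V)
H -= np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)   # t = 1

z = E + 1j * eta
G = np.linalg.inv(z * np.eye(L) - H)         # G_xy(z) = <x|(z-H)^{-1}|y>
x = L // 2
sigma_x = z - W * V[x] - 1.0 / G[x, x]       # a random complex number
print(sigma_x.imag < 0)                      # Im sigma_x < 0 for Im z > 0
```

For any disorder realization the imaginary part of the self-energy is negative in the upper half plane, consistently with the notation &amp;lt;math&amp;gt; \sigma = R - i \Gamma&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt; \Gamma &gt; 0 &amp;lt;/math&amp;gt; used below.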
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; Anderson&#039;s criterion for localization. &amp;lt;/strong&amp;gt; Suppose that the self-energy retains a non-zero imaginary part when &amp;lt;math&amp;gt; N \to \infty &amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\sigma_{x_0}(z)= R_{x_0}(z)-i \Gamma_{x_0}(z),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
then one can show (How? [[Media:2025 Self energies, decay rates and all that.pdf| See here for some notes]] ) that &amp;lt;math&amp;gt;\mathcal{A}_{x_0}(t)&amp;lt;/math&amp;gt; decays exponentially,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\mathcal{A}_{x_0}(t) \sim A(t)e^{-\gamma t}+ B(t), \quad \quad \quad \gamma= \Gamma_{x_0}(V_{x_0}) + O(t^4),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
and the system is not localized: the probability to find the particle, at &amp;lt;math&amp;gt; t \gg 1&amp;lt;/math&amp;gt;, on the same site where it was at &amp;lt;math&amp;gt; t=0 &amp;lt;/math&amp;gt; decays fast. For localization to occur, the imaginary part of &amp;lt;math&amp;gt; \sigma_{x_0}(E+ i\eta)&amp;lt;/math&amp;gt; must go to zero when &amp;lt;math&amp;gt; \eta \to 0&amp;lt;/math&amp;gt;. Given the randomness, this criterion should however be formulated probabilistically:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\lim_{\eta \to 0} \lim_{N \to \infty} \mathbb{P}\left(- \Im \sigma_{x_0}(E+i \eta)&amp;gt;0 \right)=0 \quad  \Longrightarrow	 \quad \text{Localization}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Showing that at strong enough disorder this condition is satisfied is the core of Anderson’s 1958 work.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
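A sketch of the criterion in action (illustrative sizes and disorder strength; the 1D chain is used because there all states are expected to be localized): the typical value of &amp;lt;math&amp;gt; \Gamma_x = -\Im \sigma_x(E+i\eta) &amp;lt;/math&amp;gt; shrinks with &amp;lt;math&amp;gt; \eta &amp;lt;/math&amp;gt;, consistently with the probability in the criterion vanishing as &amp;lt;math&amp;gt; \eta \to 0&amp;lt;/math&amp;gt;.

```python
# Sketch (1D chain, illustrative parameters): in a localized system the
# typical decay rate Gamma_x = -Im sigma_x(E + i eta) vanishes with eta.
import numpy as np

rng = np.random.default_rng(2)
L, W, E = 400, 5.0, 0.0

V = rng.uniform(-0.5, 0.5, size=L)
H = np.diag(W * V) - (np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1))

def typical_gamma(eta):
    z = E + 1j * eta
    G = np.linalg.inv(z * np.eye(L) - H)
    sigma = z - W * V - 1.0 / np.diag(G)     # local self-energies, all sites
    return np.median(-sigma.imag)            # typical (median) Gamma_x

print(typical_gamma(1e-4) < typical_gamma(1e-3))   # Gamma shrinks with eta
```

The median is used because the distribution of &amp;lt;math&amp;gt; \Gamma_x &amp;lt;/math&amp;gt; is heavy-tailed: rare resonances keep the mean finite even when the typical value vanishes.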
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; &amp;lt;strong&amp;gt; A dynamical transition. &amp;lt;/strong&amp;gt; &lt;br /&gt;
 Notice that in this criterion the probability plays the role of an order parameter (like the magnetization in ferromagnets, or the average overlap distribution in spin glasses), and the &amp;lt;ins&amp;gt;imaginary part&amp;lt;/ins&amp;gt; &amp;lt;math&amp;gt; \eta &amp;lt;/math&amp;gt; plays the role of a symmetry-breaking field (like the magnetic field in the ferromagnet, or the coupling between replicas in spin glasses). However, the localization transition has nothing to do with equilibrium, i.e., it is not related to a change of structure of the Gibbs-Boltzmann measure; rather, it is a dynamical transition (like depinning!). Pushing the analogy with equilibrium phase transitions, one can say that the localised phase corresponds to the disordered phase (the one in which the symmetry is not broken, like the paramagnetic phase). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:89%&amp;quot;&amp;gt;&lt;br /&gt;
: &amp;lt;small&amp;gt;[*]&amp;lt;/small&amp;gt; - For example, the local density of eigenvalues &amp;lt;math&amp;gt;\nu_{x, N}(E)&amp;lt;/math&amp;gt; for a Hamiltonian on a lattice of size &amp;lt;math&amp;gt; N &amp;lt;/math&amp;gt; is given by&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\nu_{x,N}(E)=-\frac{1}{\pi}\lim_{\eta \to 0} \Im  G_{xx}(E+ i \eta) = \sum_{\alpha=1}^N |\langle E_\alpha| x\rangle|^2 \delta(E-E_\alpha),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; E_\alpha &amp;lt;/math&amp;gt; are the eigenvalues of the full Hamiltonian &amp;lt;math&amp;gt; H &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; |E_\alpha \rangle &amp;lt;/math&amp;gt; the corresponding eigenstates. When the kinetic (hopping) term in the Hamiltonian vanishes, the local self-energies vanish: they encode how much the energy levels &amp;lt;math&amp;gt; W V_x &amp;lt;/math&amp;gt; (which are the eigenvalues when &amp;lt;math&amp;gt; t_{xy}=0&amp;lt;/math&amp;gt;) are shifted by the presence of the hopping term.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
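The footnote identity can be checked numerically: at finite &amp;lt;math&amp;gt; \eta &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt; -\Im G_{xx}(E+i\eta)/\pi &amp;lt;/math&amp;gt; coincides exactly with the eigenvalue sum in which the delta functions are broadened into Lorentzians of width &amp;lt;math&amp;gt; \eta &amp;lt;/math&amp;gt; (the parameters below are illustrative).

```python
# Check: -Im G_xx(E+i eta)/pi equals the Lorentzian-broadened spectral sum
# sum_a |<E_a|x>|^2 (eta/pi)/((E - E_a)^2 + eta^2).  Exact at any finite eta.
import numpy as np

rng = np.random.default_rng(3)
L, W, eta, E0 = 60, 2.0, 0.05, 0.3

V = rng.uniform(-0.5, 0.5, size=L)
H = np.diag(W * V) - (np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1))
E, U = np.linalg.eigh(H)

x = L // 2
z = E0 + 1j * eta
nu_green = -np.linalg.inv(z * np.eye(L) - H)[x, x].imag / np.pi
w = U[x, :] ** 2                              # |<E_a|x>|^2
nu_sum = np.sum(w * (eta / np.pi) / ((E0 - E) ** 2 + eta ** 2))
print(np.isclose(nu_green, nu_sum))           # True: the two sides agree
```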
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems ==&lt;br /&gt;
&lt;br /&gt;
=== Problem 8: Bethe lattice, cavity method &amp;amp; recursion relations ===&lt;br /&gt;
[[File:BetheLattice.png|thumb|right|x160px|A Bethe lattice with &amp;lt;math&amp;gt; k=2 &amp;lt;/math&amp;gt;.]]&lt;br /&gt;
The Bethe lattice is a lattice with a regular tree structure: each node has a fixed number &amp;lt;math&amp;gt; k+1&amp;lt;/math&amp;gt; of neighbours, where &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt; is the branching number, and there are no loops (see sketch). In these problems we consider the Anderson model on such a lattice. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Green functions identities. &amp;lt;/em&amp;gt; Consider a Hamiltonian split into two parts, &amp;lt;math&amp;gt; H= H_0 + H_1 &amp;lt;/math&amp;gt;. Show that the following general relation for the Green functions holds:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G=G^0+ G^0 H_1 G, \quad \quad G^0 =\frac{1}{z-H_0}, \quad \quad G =\frac{1}{z-H}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
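This resolvent identity can be verified numerically on random symmetric matrices; the sizes and the value of &amp;lt;math&amp;gt; z &amp;lt;/math&amp;gt; below are arbitrary.

```python
# Check of the identity G = G0 + G0 H1 G for an arbitrary split H = H0 + H1,
# with G0 = (z - H0)^{-1} and G = (z - H)^{-1}.  Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n = 8
A = rng.standard_normal((n, n)); H0 = (A + A.T) / 2
B = rng.standard_normal((n, n)); H1 = (B + B.T) / 2
z = 0.7 + 0.1j                                # any z off the real spectrum

G0 = np.linalg.inv(z * np.eye(n) - H0)
G = np.linalg.inv(z * np.eye(n) - (H0 + H1))
print(np.allclose(G, G0 + G0 @ H1 @ G))       # True
```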
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Cavity equations. &amp;lt;/em&amp;gt; We now apply this to a specific example: we consider a Bethe lattice and choose one site 0 as the root. We then choose &amp;lt;math&amp;gt; H_1 &amp;lt;/math&amp;gt; to be the kinetic term connecting the root to its &amp;lt;math&amp;gt; k+1 &amp;lt;/math&amp;gt; neighbors &amp;lt;math&amp;gt; a_i &amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
H_1= -\sum_{i=1}^{k+1} t_{0 a_i} \left( |a_i \rangle \langle 0|+ |0 \rangle \langle a_i|\right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
For all the &amp;lt;math&amp;gt; a_i &amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt; i=1, \cdots, k+1 &amp;lt;/math&amp;gt; we introduce the notation &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G^{\text{cav}}_{a_i} \equiv G^0_{a_i a_i}, \quad \quad \sigma^{\text{cav}}_{a_i} \equiv \sigma^0_{a_i},&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
where &amp;lt;math&amp;gt;  \sigma^0 &amp;lt;/math&amp;gt; is the self-energy associated with &amp;lt;math&amp;gt; G^0 &amp;lt;/math&amp;gt;. Show that, due to the geometry of the lattice, with this choice of &amp;lt;math&amp;gt; H_1 &amp;lt;/math&amp;gt; the Hamiltonian &amp;lt;math&amp;gt; H_0 &amp;lt;/math&amp;gt; decouples, and &amp;lt;math&amp;gt; G^{\text{cav}}_{a_i} &amp;lt;/math&amp;gt; is the local Green function that one would obtain by removing the root 0 from the lattice, i.e., by creating a &amp;quot;cavity&amp;quot; (hence the superscript). Moreover, using the Green function identities iterated twice, show that &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G_{00}(z)= \frac{1}{z-W \, V_0 - \sum_{i=1}^{k+1} t^2_{0 a_i}G^{\text{cav}}_{a_i}(z)}  &lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
Iterating this argument, show that if &amp;lt;math&amp;gt; \partial a_i &amp;lt;/math&amp;gt; denotes the collection of &amp;quot;descendants&amp;quot; of  &amp;lt;math&amp;gt; a_i&amp;lt;/math&amp;gt;, i.e. sites that are nearest neighbours of &amp;lt;math&amp;gt; a_i &amp;lt;/math&amp;gt; &amp;lt;em&amp;gt; except&amp;lt;/em&amp;gt; the root, then&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
G^{\text{cav}}_{a_i}(z)=   \frac{1}{z-W \,V_{a_i} - \sum_{b \in \partial a_i}t^2_{a_i b}G^{\text{cav}}_{b}(z)}, \quad \quad \sigma^{\text{cav}}_{a_i}(z)=\sum_{b \in \partial a_i}t^2_{a_i b}G^{\text{cav}}_{b}(z)=\sum_{b \in \partial a_i} \frac{t^2_{a_i b}}{z- W \, V_b - \sigma^{\text{cav}}_{b}(z)}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
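A quick numerical check of the cavity formula, under illustrative assumptions (a small &amp;lt;math&amp;gt; k=2 &amp;lt;/math&amp;gt; tree of depth 2, uniform hopping &amp;lt;math&amp;gt; t=1 &amp;lt;/math&amp;gt;, box-distributed fields): removing the root decouples the subtrees, and the full &amp;lt;math&amp;gt; G_{00} &amp;lt;/math&amp;gt; matches the cavity expression exactly.

```python
# Check of G_00 = 1/(z - W V_0 - sum_i t^2 G^cav_{a_i}) on a small tree:
# root 0, neighbours 1,2,3, each with k = 2 leaves.  t = 1, W illustrative.
import numpy as np

rng = np.random.default_rng(5)
W, z = 2.0, 0.4 + 0.05j
edges = [(0, 1), (0, 2), (0, 3),
         (1, 4), (1, 5), (2, 6), (2, 7), (3, 8), (3, 9)]
N = 10
V = rng.uniform(-0.5, 0.5, size=N)
H = np.diag(W * V)
for a, b in edges:
    H[a, b] = H[b, a] = -1.0                  # hopping t = 1

G00 = np.linalg.inv(z * np.eye(N) - H)[0, 0]  # full Green function

# cavity: delete the root and invert the decoupled remainder
Hc = np.delete(np.delete(H, 0, axis=0), 0, axis=1)
Gc = np.linalg.inv(z * np.eye(N - 1) - Hc)
G_cav = [Gc[i, i] for i in range(3)]          # sites 1,2,3 -> rows 0,1,2
print(np.isclose(G00, 1.0 / (z - W * V[0] - sum(G_cav))))   # True
```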
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; Equations for the distribution. &amp;lt;/em&amp;gt; Justify why the cavity functions appearing in the denominators of the last equations above are independent and identically distributed random variables, so that the cavity equations can be interpreted as self-consistent equations for the distribution of the cavity functions.&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;em&amp;gt; The &amp;quot;localized&amp;quot; solution. &amp;lt;/em&amp;gt; We set &amp;lt;math&amp;gt; z=E+ i \eta &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \sigma^{\text{cav}}_{a}(z)= R_a(z) -i \Gamma_a(z)&amp;lt;/math&amp;gt;. Show that the cavity equation for the self-energies is equivalent to the following pair of coupled equations:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\Gamma_a= \sum_{b \in \partial a} t_{ab}^2 \frac{\Gamma_b + \eta}{(E- W \, V_b - R_b)^2+ (\Gamma_b +\eta)^2}, \quad \quad R_a =  \sum_{b \in \partial a} t_{ab}^2 \frac{E- W \, V_b - R_b}{(E- W \, V_b - R_b)^2+ (\Gamma_b +\eta)^2}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Justify why the solution corresponding to localization, &amp;lt;math&amp;gt; \Gamma_a=0 &amp;lt;/math&amp;gt;, is always a solution when &amp;lt;math&amp;gt; \eta \to 0 &amp;lt;/math&amp;gt;, and why in the localized phase, when &amp;lt;math&amp;gt; \eta &amp;lt;/math&amp;gt; is finite but small, one has &amp;lt;math&amp;gt; \Gamma_a \sim O(\eta) &amp;lt;/math&amp;gt;. How can one argue that this solution has to be discarded, i.e. that delocalisation occurs? &lt;br /&gt;
&amp;lt;/li&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
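Distributional equations of this kind are commonly solved by population dynamics. Below is a sketch of that approach, not taken from the notes: the values of &amp;lt;math&amp;gt; k, W, \eta &amp;lt;/math&amp;gt;, the pool size and the synchronous update are illustrative choices, with &amp;lt;math&amp;gt; t=1 &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; W &amp;lt;/math&amp;gt; chosen deep in the regime where the Bethe lattice is expected to be localized. A pool of &amp;lt;math&amp;gt; (\Gamma_a, R_a) &amp;lt;/math&amp;gt; pairs is iterated through the recursion; in the localized phase the typical &amp;lt;math&amp;gt; \Gamma &amp;lt;/math&amp;gt; remains of order &amp;lt;math&amp;gt; \eta &amp;lt;/math&amp;gt;.

```python
# Population-dynamics sketch for the (Gamma, R) cavity recursion, k = 2,
# t = 1, strong disorder.  All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
k, W, E, eta = 2, 25.0, 0.0, 1e-3
M, sweeps = 5000, 50                          # pool size, iterations

Gamma = np.full(M, eta)                       # localized-like initial pool
R = np.zeros(M)

for _ in range(sweeps):
    idx = rng.integers(0, M, size=(M, k))     # k random cavity descendants
    Vb = rng.uniform(-0.5, 0.5, size=(M, k))  # their on-site fields V_b
    g = Gamma[idx] + eta                      # Gamma_b + eta
    r = E - W * Vb - R[idx]                   # E - W V_b - R_b
    d = r ** 2 + g ** 2
    Gamma = np.sum(g / d, axis=1)             # t^2 = 1 in both recursions
    R = np.sum(r / d, axis=1)

print(np.median(Gamma) / eta)                 # typical Gamma stays O(eta)
```

The mean of &amp;lt;math&amp;gt; \Gamma &amp;lt;/math&amp;gt; is dominated by rare resonances and stays finite, which is why the typical (median) value is the meaningful diagnostic of localization here.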
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Check out: key concepts ==&lt;br /&gt;
&lt;br /&gt;
Green functions, self-energies, return probability amplitude, decay rates, trees and cavity method, the criterion for localization.&lt;br /&gt;
&lt;br /&gt;
== To know more ==&lt;br /&gt;
* The Anderson model was formulated by P. W. Anderson in 1958, in the paper &amp;lt;em&amp;gt; Absence of diffusion in certain random lattices&amp;lt;/em&amp;gt;, Phys. Rev. 109, 1492.&lt;br /&gt;
* The Bethe lattice version is discussed in the later paper by R. Abou-Chacra, D. J. Thouless and P. W. Anderson, &amp;lt;em&amp;gt;A selfconsistent theory of localization&amp;lt;/em&amp;gt;, J. Phys. C: Solid State Phys. 6, 1734 (1973).&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4235</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4235"/>
		<updated>2026-03-11T10:36:00Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Lectures and tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the WIKI page of the M2 ICFP course on the Statistical Physics of Disordered Systems, second semester 2026.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Where and When =&lt;br /&gt;
&lt;br /&gt;
* Each Monday at 2pm - 6 pm, from January 19th to March 23rd. No lecture on 23/02/26.&lt;br /&gt;
* Room 202 in Jussieu campus, Tours 54-55 until 16th February&lt;br /&gt;
* Room 209 in Jussieu campus, Tours 56-66 from 2nd March  &#039;&#039;&#039;Attention: ROOM CHANGE!&#039;&#039;&#039;&lt;br /&gt;
* Each session is a mixture of lectures and exercises.&lt;br /&gt;
&lt;br /&gt;
= The Team =&lt;br /&gt;
&lt;br /&gt;
*  [https://vale1925.wixsite.com/vros  Valentina Ros] - vale1925@gmail.com&lt;br /&gt;
* [http://lptms.u-psud.fr/alberto_rosso/ Alberto Rosso] - alberto.rosso74@gmail.com&lt;br /&gt;
&lt;br /&gt;
= Course description =&lt;br /&gt;
&lt;br /&gt;
This course deals with systems in which the presence of impurities or amorphous structures (in other words, of disorder) radically influences the physics, generating novel phenomena. These phenomena involve the properties of the system at equilibrium (freezing and glass transitions), as well as its dynamical evolution out of equilibrium (pinning, avalanches), giving rise to ergodicity breaking both in the absence and in the presence of quantum fluctuations (classical metastability, quantum localization). We discuss the main statistical physics models that capture the phenomenology of these systems, as well as the powerful theoretical tools (replica theory, large deviations, random matrix theory, scaling arguments, strong-disorder expansions) that have been developed to characterize their physics quantitatively. These theoretical tools nowadays have a huge impact in a variety of fields that go well beyond statistical physics (computer science, probability, condensed matter, theoretical biology). Below is a list of topics discussed during the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Finite-dimensional disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  Introduction to disordered systems and to the spin glass transition.&lt;br /&gt;
*  Interface growth. Directed polymers in random media. Scenarios for the glass transition: the glass transition in KPZ in d&amp;gt;2. &lt;br /&gt;
*  Depinning and avalanches. Bienaymé-Galton-Watson processes. &lt;br /&gt;
*  Anderson localization: introduction. Localization in 1D: transfer matrix and Lyapunov. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mean-field disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  The simplest spin-glass: solution of the Random Energy Model.&lt;br /&gt;
*  The replica method: the solution of the spherical p-spin model. Sketch of the solution of Sherrington Kirkpatrick model (full RSB).&lt;br /&gt;
*  Towards glassy dynamics: rugged landscapes. Slow dynamics and aging: the trap model.&lt;br /&gt;
*  The Anderson model on the Bethe lattice: the mobility edge.&lt;br /&gt;
&lt;br /&gt;
=Lectures and tutorials=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the layout of the formulas is bad, it might be because you are using Safari. Try opening the wiki with Firefox or Chrome.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  14h00-15h45&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  16h00-17h45&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 1 (19/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L1_ICTS| Spin Glass Transition (Alberto)]]&lt;br /&gt;
&amp;lt;!--[[H_1|Exercises on Extreme Values Statistics]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-I|  A dictionary. The REM: energy landscape (Valentina)]]&amp;amp;nbsp; [[Media:2025 P1 solutions.pdf| Sol Prob.1 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 2 (26/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L2_ICFP| Stochastic Interfaces and growth (Alberto)]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-2|The REM: freezing, condensation, glassiness (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P2 solutions.pdf| Sol Prob.2 ]]  &lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 3 (02/02)&lt;br /&gt;
|&lt;br /&gt;
* [[L-3|Directed polymer in random media (Alberto)]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-4| KPZ and glassiness in finite dimension (Alberto)]]  [https://colab.research.google.com/drive/1PTya42ZS2kU87A-BxQFFIUDTs_k47men?usp=sharing notebook]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 4 (9/02) and Week 5 (16/02)&lt;br /&gt;
|&lt;br /&gt;
* [[T-3| Spin glasses, equilibrium: replicas, the steps (Valentina)]]&amp;amp;nbsp;  [[Media:2025 P3 solutions.pdf| Sol Prob.3 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-4| Spin glasses, equilibrium: replicas, the interpretation (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P4 solutions.pdf| Sol Probs. 4 ]]&amp;amp;nbsp; &lt;br /&gt;
[[Media:2025_Parisi_scheme.pdf| Notes: Probing states with replicas]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 6 (02/03)&lt;br /&gt;
|&lt;br /&gt;
* [[LBan-IV| Driven Disordered Materials  (Alberto)]] [[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[LBan-V| Avalanches in Disordered Materials (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 7 (9/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-7| Anderson localization: introduction (Alberto)]]&lt;br /&gt;
| &lt;br /&gt;
* [[T-5| Rugged landscapes: counting metastable states (Valentina)]] &amp;amp;nbsp;[[Media:2025 P5 solutions.pdf| Sol Prob.5 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 8 (16/03)&lt;br /&gt;
|&lt;br /&gt;
* [[T-6| Rugged landscapes: stability of metastable states (Valentina)]] &amp;lt;!--[[Media:2025 P666 solutions .pdf| Sol Prob.6 ]]--&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
* [[T-7| Trap model and aging dynamics (Valentina)]] &amp;lt;!--[[Media:2025 P7 solutions .pdf| Sol Probs.7 ]]--&amp;gt;&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 9 (23/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-8| Localization in 1D, transfer matrix and Lyapunov exponent (Alberto)]]  [https://colab.research.google.com/drive/1ZJ0yvMrtflWNNmPfaRQ8KTteoWfm0bqk?usp=sharing notebook]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-9|Multifractality, tails (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Extra&lt;br /&gt;
|&lt;br /&gt;
* [[T-8| Localization on Bethe lattice: cavity &amp;amp; recursion (Valentina)]] &amp;lt;!--[[Media:2025 P8 solutions.pdf| Sol Prob.8 ]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-9| Localization on Bethe lattice: stability &amp;amp; mobility edge (Valentina)]] &amp;lt;!--[[Media:2025 P9 solutions.pdf| Sol Prob.9 ]]&amp;amp;nbsp;&lt;br /&gt;
[[Media:2025_localization_notes.pdf| Notes: Localization: no dissipation, no self-bath]]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--== Homework ==&lt;br /&gt;
There are two homeworks: Homework 1 on Random Matrices, and Homework 2 on the topics of the lecture. &lt;br /&gt;
[[Media:RMT_introduction.pdf| Homework 1 on Random Matrices ]]&lt;br /&gt;
[[Media:2025_HW2.pdf| Homework 2 on topics of lectures ]]&lt;br /&gt;
Homework 1 is worth 5 points, Homework 2 is worth 15 points.&lt;br /&gt;
&#039;&#039;&#039;In the final grade calculation, the total score from both assignments will have a weight of 0.25, while the exam will account for 0.75.&#039;&#039;&#039;&lt;br /&gt;
Homework 1 due by Monday, February 17th.&lt;br /&gt;
Homework 2 due by Monday, March 24th.&lt;br /&gt;
&#039;&#039;&#039;Extra&#039;&#039;&#039;&lt;br /&gt;
Here is a notebook on random matrices (made by M. Biroli) with two coding exercises. You can download the notebook from the link below, and use the online platform: [https://jupyter.org/try-jupyter/lab/  Jupyter] to modify the notebook and add the solutions to the two exercises.&lt;br /&gt;
[https://colab.research.google.com/drive/13z_RnRlCq5p3ihDQulOPftqb05nsrTqQ?usp=sharing| Homework 1: notebook]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 1: [[Media:Exercises_1-3.pdf| Exercises 1-3 on extreme value statistics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 2: &lt;br /&gt;
[[Media:Tutorial_and_Exercise_4.pdf| Tutorial and Exercise 4 on random matrices]] &amp;lt;br&amp;gt; &lt;br /&gt;
[[Media:Exercises_5-6.pdf| Exercises 5-6 on the random energy model]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 3: &lt;br /&gt;
[[Media:Exercises 7&amp;amp;8.pdf| Exercises 7-8 on interfaces]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 4: &lt;br /&gt;
[[Media:Exercises 9-10.pdf| Exercises 9-10 on glassiness]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 5 and 6: &lt;br /&gt;
[[Media:11-12_Exercises.pdf| Exercises 11-12 on dynamics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 7: &lt;br /&gt;
[[Media:Exercises_13_15.pdf| Exercises 13-15 on branching and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 8: &lt;br /&gt;
[[XX| Exercises 16-17 on trap model and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
[[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Evaluation and exam =&lt;br /&gt;
&lt;br /&gt;
The exam will be on &#039;&#039;&#039;Monday, March 30th 2026&#039;&#039;&#039;. It will be a written exam, 3 hours long, consisting of two parts:&lt;br /&gt;
&lt;br /&gt;
Part 1: theory questions, see here for an example. &lt;br /&gt;
&lt;br /&gt;
Part 2: you will be asked to solve pieces of the 17 exercises given to you in advance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You are not allowed to bring any material (printed notes, handwritten notes) nor to use any device during the exam.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
All relevant formulas will be provided in the text of the exam. There will be one printed version of the WIKI pages available for you to consult. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The students have two possibilities:&lt;br /&gt;
&lt;br /&gt;
(1) A final written exam which counts for the total grade.&lt;br /&gt;
&lt;br /&gt;
(2) An homework assignement  + a written exam. The final grade is given by a weighted average of the two grades (the homework counts 1/4 and the written exam 3/4).&lt;br /&gt;
&#039;&#039;&#039;The written exam will be on Monday, March 31st 2025 in the Jussieu campus, Room 101, Tours 14 - 24, from 2pm to 5pm.&#039;&#039;&#039;--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4234</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4234"/>
		<updated>2026-03-11T10:35:39Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Lectures and tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the WIKI page of the M2 ICFP course on the Statistical Physics of Disordered Systems, second semester 2026.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Where and When =&lt;br /&gt;
&lt;br /&gt;
* Each Monday at 2pm - 6 pm, from January 19th to March 23rd. No lecture on 23/02/26.&lt;br /&gt;
* Room 202 in Jussieu campus, Tours 54-55 until 16th February&lt;br /&gt;
* Room 209 in Jussieu campus, Tours 56-66 from 2nd March  &#039;&#039;&#039;Attention: ROOM CHANGE!&#039;&#039;&#039;&lt;br /&gt;
* Each session is a mixture of lectures and exercises.&lt;br /&gt;
&lt;br /&gt;
= The Team =&lt;br /&gt;
&lt;br /&gt;
*  [https://vale1925.wixsite.com/vros  Valentina Ros] - vale1925@gmail.com&lt;br /&gt;
* [http://lptms.u-psud.fr/alberto_rosso/ Alberto Rosso] - alberto.rosso74@gmail.com&lt;br /&gt;
&lt;br /&gt;
= Course description =&lt;br /&gt;
&lt;br /&gt;
This course deals with systems in which the presence of impurities or amorphous structures (in other words, of disorder) radically influences the physics, generating novel phenomena. These phenomena involve the properties of the system at equilibrium (freezing and glass transitions), as well as its dynamical evolution out of equilibrium (pinning, avalanches), giving rise to ergodicity breaking both in the absence and in the presence of quantum fluctuations (classical metastability, quantum localization). We discuss the main statistical physics models that capture the phenomenology of these systems, as well as the powerful theoretical tools (replica theory, large deviations, random matrix theory, scaling arguments, strong-disorder expansions) that have been developed to characterize their physics quantitatively. These theoretical tools nowadays have a huge impact in a variety of fields that go well beyond statistical physics (computer science, probability, condensed matter, theoretical biology). Below is a list of topics discussed during the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Finite-dimensional disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  Introduction to disordered systems and to the spin glass transition.&lt;br /&gt;
*  Interface growth. Directed polymers in random media. Scenarios for the glass transition: the glass transition in KPZ in d&amp;gt;2. &lt;br /&gt;
*  Depinning and avalanches. Bienaymé-Galton-Watson processes. &lt;br /&gt;
*  Anderson localization: introduction. Localization in 1D: transfer matrix and Lyapunov. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mean-field disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  The simplest spin-glass: solution of the Random Energy Model.&lt;br /&gt;
*  The replica method: the solution of the spherical p-spin model. Sketch of the solution of Sherrington Kirkpatrick model (full RSB).&lt;br /&gt;
*  Towards glassy dynamics: rugged landscapes. Slow dynamics and aging: the trap model.&lt;br /&gt;
*  The Anderson model on the Bethe lattice: the mobility edge.&lt;br /&gt;
&lt;br /&gt;
=Lectures and tutorials=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the layout of the formulas is bad, it might be because you are using Safari. Try opening the wiki with Firefox or Chrome.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  14h00-15h45&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  16h00-17h45&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 1 (19/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L1_ICTS| Spin Glass Transition (Alberto)]]&lt;br /&gt;
&amp;lt;!--[[H_1|Exercises on Extreme Values Statistics]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-I|  A dictionary. The REM: energy landscape (Valentina)]]&amp;amp;nbsp; [[Media:2025 P1 solutions.pdf| Sol Prob.1 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 2 (26/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L2_ICFP| Stochastic Interfaces and growth (Alberto)]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-2|The REM: freezing, condensation, glassiness (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P2 solutions.pdf| Sol Prob.2 ]]  &lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 3 (02/02)&lt;br /&gt;
|&lt;br /&gt;
* [[L-3|Directed polymer in random media (Alberto)]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-4| KPZ and glassiness in finite dimension (Alberto)]]  [https://colab.research.google.com/drive/1PTya42ZS2kU87A-BxQFFIUDTs_k47men?usp=sharing notebook]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 4 (9/02) and Week 5 (16/02)&lt;br /&gt;
|&lt;br /&gt;
* [[T-3| Spin glasses, equilibrium: replicas, the steps (Valentina)]]&amp;amp;nbsp;  [[Media:2025 P3 solutions.pdf| Sol Prob.3 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-4| Spin glasses, equilibrium: replicas, the interpretation (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P4 solutions.pdf| Sol Probs. 4 ]]&amp;amp;nbsp; &lt;br /&gt;
[[Media:2025_Parisi_scheme.pdf| Notes: Probing states with replicas]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 6 (02/03)&lt;br /&gt;
|&lt;br /&gt;
* [[LBan-IV| Driven Disordered Materials  (Alberto)]] [[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[LBan-V| Avalanches in Disordered Materials (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 7 (9/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-7| Anderson localization: introduction (Alberto)]]&lt;br /&gt;
| &lt;br /&gt;
* [[T-5| Rugged landscapes: counting metastable states (Valentina)]] &amp;amp;nbsp;[[Media:2025 P5 solutions.pdf| Sol Prob.5 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 8 (16/03)&lt;br /&gt;
|&lt;br /&gt;
* [[T-6| Rugged landscapes: stability of metastable states (Valentina)]] &amp;lt;!--[[Media:2025 P666 solutions .pdf| Sol Prob.6 ]]--&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
* [[T-7| Trap model and aging dynamics (Valentina)]] &amp;lt;!--[[Media:2025 P7 solutions .pdf| Sol Probs.7 ]]--&amp;gt;&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 9 (23/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-8| Localization in 1D, transfer matrix and Lyapunov exponent (Alberto)]] [https://colab.research.google.com/drive/1ZJ0yvMrtflWNNmPfaRQ8KTteoWfm0bqk?usp=sharing notebook]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-9|Multifractality, tails (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Extra&lt;br /&gt;
|&lt;br /&gt;
* [[T-8| Localization on Bethe lattice: cavity &amp;amp; recursion (Valentina)]] &amp;lt;!--[[Media:2025 P8 solutions.pdf| Sol Prob.8 ]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-9| Localization on Bethe lattice: stability &amp;amp; mobility edge (Valentina)]] &amp;lt;!--[[Media:2025 P9 solutions.pdf| Sol Prob.9 ]]&amp;amp;nbsp;&lt;br /&gt;
[[Media:2025_localization_notes.pdf| Notes: Localization: no dissipation, no self-bath]]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--== Homework ==&lt;br /&gt;
There are two homeworks: Homework 1 on Random Matrices, and Homework 2 on the topics of the lecture. &lt;br /&gt;
[[Media:RMT_introduction.pdf| Homework 1 on Random Matrices ]]&lt;br /&gt;
[[Media:2025_HW2.pdf| Homework 2 on topics of lectures ]]&lt;br /&gt;
Homework 1 is worth 5 points, Homework 2 is worth 15 points.&lt;br /&gt;
&#039;&#039;&#039;In the final grade calculation, the total score from both assignments will have a weight of 0.25, while the exam will account for 0.75.&#039;&#039;&#039;&lt;br /&gt;
Homework 1 due by Monday, February 17th.&lt;br /&gt;
Homework 2 due by Monday, March 24th.&lt;br /&gt;
&#039;&#039;&#039;Extra&#039;&#039;&#039;&lt;br /&gt;
Here is a notebook on random matrices (made by M. Biroli) with two coding exercises. You can download the notebook from the link below, and use the online platform: [https://jupyter.org/try-jupyter/lab/  Jupyter] to modify the notebook and add the solutions to the two exercises.&lt;br /&gt;
[https://colab.research.google.com/drive/13z_RnRlCq5p3ihDQulOPftqb05nsrTqQ?usp=sharing| Homework 1: notebook]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 1: [[Media:Exercises_1-3.pdf| Exercises 1-3 on extreme value statistics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 2: &lt;br /&gt;
[[Media:Tutorial_and_Exercise_4.pdf| Tutorial and Exercise 4 on random matrices]] &amp;lt;br&amp;gt; &lt;br /&gt;
[[Media:Exercises_5-6.pdf| Exercises 5-6 on the random energy model]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 3: &lt;br /&gt;
[[Media:Exercises 7&amp;amp;8.pdf| Exercises 7-8 on interfaces]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 4: &lt;br /&gt;
[[Media:Exercises 9-10.pdf| Exercises 9-10 on glassiness]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 5 and 6: &lt;br /&gt;
[[Media:11-12_Exercises.pdf| Exercises 11-12 on dynamics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 7: &lt;br /&gt;
[[Media:Exercises_13_15.pdf| Exercises 13-15 on branching and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 8: &lt;br /&gt;
[[XX| Exercises 16-17 on trap model and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
[[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Evaluation and exam =&lt;br /&gt;
&lt;br /&gt;
The exam will take place on &#039;&#039;&#039;Monday, March 30th 2026&#039;&#039;&#039;. It will be a written exam, 3 hours long, consisting of two parts:&lt;br /&gt;
&lt;br /&gt;
Part 1: theory questions, see here for an example. &lt;br /&gt;
&lt;br /&gt;
Part 2: you will be asked to solve pieces of the 17 exercises given to you in advance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You are not allowed to bring any material (printed or handwritten notes) or to use any device during the exam.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
All relevant formulas will be provided in the text of the exam. A printed copy of the WIKI pages will be available for you to consult. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The students have two possibilities:&lt;br /&gt;
&lt;br /&gt;
(1) A final written exam which counts for the total grade.&lt;br /&gt;
&lt;br /&gt;
(2) A homework assignment + a written exam. The final grade is a weighted average of the two grades (the homework counts 1/4 and the written exam 3/4).&lt;br /&gt;
&#039;&#039;&#039;The written exam will be on Monday, March 31st 2025 in the Jussieu campus, Room 101, Tours 14 - 24, from 2pm to 5pm.&#039;&#039;&#039;--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
	<entry>
		<id>http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4233</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://www.lptms.universite-paris-saclay.fr//wikids/index.php?title=Main_Page&amp;diff=4233"/>
		<updated>2026-03-11T10:34:57Z</updated>

		<summary type="html">&lt;p&gt;Ros: /* Lectures and tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the WIKI page of the M2 ICFP course on the Statistical Physics of Disordered Systems, second semester 2026.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Where and When =&lt;br /&gt;
&lt;br /&gt;
* Each Monday at 2pm - 6 pm, from January 19th to March 23rd. No lecture on 23/02/26.&lt;br /&gt;
* Room 202 in Jussieu campus, Tours 54-55 until 16th February&lt;br /&gt;
* Room 209 in Jussieu campus, Tours 56-66 from 2nd March  &#039;&#039;&#039;Attention: ROOM CHANGE!&#039;&#039;&#039;&lt;br /&gt;
* Each session is a mixture of lectures and exercises.&lt;br /&gt;
&lt;br /&gt;
= The Team =&lt;br /&gt;
&lt;br /&gt;
*  [https://vale1925.wixsite.com/vros  Valentina Ros] - vale1925@gmail.com&lt;br /&gt;
* [http://lptms.u-psud.fr/alberto_rosso/ Alberto Rosso] - alberto.rosso74@gmail.com&lt;br /&gt;
&lt;br /&gt;
= Course description =&lt;br /&gt;
&lt;br /&gt;
This course deals with systems in which the presence of impurities or amorphous structures (in other words, of disorder) radically influences the physics, generating novel phenomena. These phenomena involve the properties of the system at equilibrium (freezing and glass transitions) as well as its dynamical evolution out of equilibrium (pinning, avalanches), giving rise to ergodicity breaking both in the absence and in the presence of quantum fluctuations (classical metastability, quantum localization). We discuss the main statistical-physics models that capture the phenomenology of these systems, as well as the powerful theoretical tools (replica theory, large deviations, random matrix theory, scaling arguments, strong-disorder expansions) developed to characterize their physics quantitatively. These tools nowadays have a strong impact in a variety of fields well beyond statistical physics (computer science, probability, condensed matter, theoretical biology). Below is a list of topics discussed during the course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Finite-dimensional disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  Introduction to disordered systems and to the spin glass transition.&lt;br /&gt;
*  Interface growth. Directed polymers in random media. Scenarios for the glass transition: the KPZ glass transition in d&amp;gt;2. &lt;br /&gt;
*  Depinning and avalanches. Bienaymé-Galton-Watson processes. &lt;br /&gt;
*  Anderson localization: introduction. Localization in 1D: transfer matrix and Lyapunov exponent. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mean-field disordered systems:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*  The simplest spin-glass: solution of the Random Energy Model.&lt;br /&gt;
*  The replica method: the solution of the spherical p-spin model. Sketch of the solution of Sherrington Kirkpatrick model (full RSB).&lt;br /&gt;
*  Towards glassy dynamics: rugged landscapes. Slow dynamics and aging: the trap model.&lt;br /&gt;
*  The Anderson model on the Bethe lattice: the mobility edge.&lt;br /&gt;
&lt;br /&gt;
=Lectures and tutorials=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the layout of the formulas is bad, it might be because you are using Safari. Try opening the wiki with Firefox or Chrome.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  14h00-15h45&lt;br /&gt;
! width=&amp;quot;500&amp;quot;|  16h00-17h45&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 1 (19/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L1_ICTS| Spin Glass Transition (Alberto)]]&lt;br /&gt;
&amp;lt;!--[[H_1|Exercises on Extreme Values Statistics]]--&amp;gt;&lt;br /&gt;
|  &lt;br /&gt;
* [[T-I|  A dictionary. The REM: energy landscape (Valentina)]]&amp;amp;nbsp; [[Media:2025 P1 solutions.pdf| Sol Prob.1 ]]&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 2 (26/01)&lt;br /&gt;
|&lt;br /&gt;
* [[L2_ICFP| Stochastic Interfaces and growth (Alberto)]]&lt;br /&gt;
|&lt;br /&gt;
* [[T-2|The REM: freezing, condensation, glassiness (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P2 solutions.pdf| Sol Prob.2 ]]  &lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 3 (02/02)&lt;br /&gt;
|&lt;br /&gt;
* [[L-3|Directed polymer in random media (Alberto)]]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-4| KPZ and glassiness in finite dimension (Alberto)]] [https://colab.research.google.com/drive/1PTya42ZS2kU87A-BxQFFIUDTs_k47men?usp=sharing notebook]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 4 (9/02) and Week 5 (16/02)&lt;br /&gt;
|&lt;br /&gt;
* [[T-3| Spin glasses, equilibrium: replicas, the steps (Valentina)]]&amp;amp;nbsp;  [[Media:2025 P3 solutions.pdf| Sol Prob.3 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-4| Spin glasses, equilibrium: replicas, the interpretation (Valentina)]] &amp;amp;nbsp;  [[Media:2025 P4 solutions.pdf| Sol Probs. 4 ]]&amp;amp;nbsp; &lt;br /&gt;
[[Media:2025_Parisi_scheme.pdf| Notes: Probing states with replicas]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 6 (02/03)&lt;br /&gt;
|&lt;br /&gt;
* [[LBan-IV| Driven Disordered Materials  (Alberto)]] [[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[LBan-V| Avalanches in Disordered Materials (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| Week 7 (9/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-7| Anderson localization: introduction (Alberto)]]&lt;br /&gt;
| &lt;br /&gt;
* [[T-5| Rugged landscapes: counting metastable states (Valentina)]] &amp;amp;nbsp;[[Media:2025 P5 solutions.pdf| Sol Prob.5 ]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 8 (16/03)&lt;br /&gt;
|&lt;br /&gt;
* [[T-6| Rugged landscapes: stability of metastable states (Valentina)]] &amp;lt;!--[[Media:2025 P666 solutions .pdf| Sol Prob.6 ]]--&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
* [[T-7| Trap model and aging dynamics (Valentina)]] &amp;lt;!--[[Media:2025 P7 solutions .pdf| Sol Probs.7 ]]--&amp;gt;&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Week 9 (23/03)&lt;br /&gt;
|&lt;br /&gt;
* [[L-8| Localization in 1D, transfer matrix and Lyapunov exponent (Alberto)]] [https://colab.research.google.com/drive/1ZJ0yvMrtflWNNmPfaRQ8KTteoWfm0bqk?usp=sharing notebook]&lt;br /&gt;
|  &lt;br /&gt;
* [[L-9|Multifractality, tails (Alberto)]]&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Extra&lt;br /&gt;
|&lt;br /&gt;
* [[T-8| Localization on Bethe lattice: cavity &amp;amp; recursion (Valentina)]] [[Media:2025 P8 solutions.pdf| Sol Prob.8 ]]&lt;br /&gt;
|  &lt;br /&gt;
* [[T-9| Localization on Bethe lattice: stability &amp;amp; mobility edge (Valentina)]] [[Media:2025 P9 solutions.pdf| Sol Prob.9 ]]&amp;amp;nbsp;&lt;br /&gt;
[[Media:2025_localization_notes.pdf| Notes: Localization: no dissipation, no self-bath]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--== Homework ==&lt;br /&gt;
There are two homeworks: Homework 1 on Random Matrices, and Homework 2 on the topics of the lecture. &lt;br /&gt;
[[Media:RMT_introduction.pdf| Homework 1 on Random Matrices ]]&lt;br /&gt;
[[Media:2025_HW2.pdf| Homework 2 on topics of lectures ]]&lt;br /&gt;
Homework 1 is worth 5 points, Homework 2 is worth 15 points.&lt;br /&gt;
&#039;&#039;&#039;In the final grade calculation, the total score from both assignments will have a weight of 0.25, while the exam will account for 0.75.&#039;&#039;&#039;&lt;br /&gt;
Homework 1 due by Monday, February 17th.&lt;br /&gt;
Homework 2 due by Monday, March 24th.&lt;br /&gt;
&#039;&#039;&#039;Extra&#039;&#039;&#039;&lt;br /&gt;
Here is a notebook on random matrices (made by M. Biroli) with two coding exercises. You can download the notebook from the link below, and use the online platform: [https://jupyter.org/try-jupyter/lab/  Jupyter] to modify the notebook and add the solutions to the two exercises.&lt;br /&gt;
[https://colab.research.google.com/drive/13z_RnRlCq5p3ihDQulOPftqb05nsrTqQ?usp=sharing| Homework 1: notebook]--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 1: [[Media:Exercises_1-3.pdf| Exercises 1-3 on extreme value statistics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 2: &lt;br /&gt;
[[Media:Tutorial_and_Exercise_4.pdf| Tutorial and Exercise 4 on random matrices]] &amp;lt;br&amp;gt; &lt;br /&gt;
[[Media:Exercises_5-6.pdf| Exercises 5-6 on the random energy model]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 3: &lt;br /&gt;
[[Media:Exercises 7&amp;amp;8.pdf| Exercises 7-8 on interfaces]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 4: &lt;br /&gt;
[[Media:Exercises 9-10.pdf| Exercises 9-10 on glassiness]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 5 and 6: &lt;br /&gt;
[[Media:11-12_Exercises.pdf| Exercises 11-12 on dynamics]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 7: &lt;br /&gt;
[[Media:Exercises_13_15.pdf| Exercises 13-15 on branching and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Week 8: &lt;br /&gt;
[[XX| Exercises 16-17 on trap model and localization]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
[[Media:DISSYTS.pdf| Slides ]]&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Evaluation and exam =&lt;br /&gt;
&lt;br /&gt;
The exam will take place on &#039;&#039;&#039;Monday, March 30th 2026&#039;&#039;&#039;. It will be a written exam, 3 hours long, consisting of two parts:&lt;br /&gt;
&lt;br /&gt;
Part 1: theory questions, see here for an example. &lt;br /&gt;
&lt;br /&gt;
Part 2: you will be asked to solve pieces of the 17 exercises given to you in advance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You are not allowed to bring any material (printed or handwritten notes) or to use any device during the exam.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
All relevant formulas will be provided in the text of the exam. A printed copy of the WIKI pages will be available for you to consult. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The students have two possibilities:&lt;br /&gt;
&lt;br /&gt;
(1) A final written exam which counts for the total grade.&lt;br /&gt;
&lt;br /&gt;
(2) A homework assignment + a written exam. The final grade is a weighted average of the two grades (the homework counts 1/4 and the written exam 3/4).&lt;br /&gt;
&#039;&#039;&#039;The written exam will be on Monday, March 31st 2025 in the Jussieu campus, Room 101, Tours 14 - 24, from 2pm to 5pm.&#039;&#039;&#039;--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ros</name></author>
	</entry>
</feed>