=Detour: Extreme Value Statistics=
In the following exercises, we will use the notation from extreme value statistics as introduced in the course.


= Exercise 1: The Gaussian case = 
Let us analyze in detail the case of a Gaussian distribution with zero mean and variance <math>\sigma^2</math>. Using a change of variables and integration by parts, we can write:
<center> <math>P(E) = \int_{-\infty}^E \frac{dx}{\sqrt{2 \pi \sigma^2}} \, e^{-\frac{x^2}{2 \sigma^2}} = \frac{1}{2\sqrt{\pi}} \int_{\frac{E^2}{2 \sigma^2}}^{\infty} \frac{dt}{\sqrt{t}} e^{-t}= \frac{\sigma}{\sqrt{2 \pi} |E|} e^{-\frac{E^2}{2 \sigma^2}} - \frac{1}{4\sqrt{\pi}} \int_{\frac{E^2}{2 \sigma^2}}^{\infty} \frac{dt}{t^{3/2}} e^{-t} </math> </center>
The asymptotic expansion for <math>E \to -\infty</math> is:
<center> <math>P(E) \approx \frac{\sigma}{\sqrt{2 \pi} |E|} e^{-\frac{E^2}{2 \sigma^2}} + O\!\left(\frac{e^{-\frac{E^2}{2 \sigma^2}}}{ |E|^3} \right) </math> </center>
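As a quick numerical sanity check (not part of the exercise), the leading term can be compared with the exact cumulative distribution. The short Python sketch below uses SciPy's <code>norm.cdf</code> for the exact value of <math>P(E)</math>; the ratio approaches 1 as <math>E \to -\infty</math>.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

sigma = 1.0
for E in [-2.0, -4.0, -6.0, -8.0]:
    exact = norm.cdf(E, loc=0.0, scale=sigma)   # P(E) = Prob[x < E]
    asympt = sigma / (np.sqrt(2 * np.pi) * abs(E)) * np.exp(-E**2 / (2 * sigma**2))
    print(f"E = {E:5.1f}   exact = {exact:.3e}   asymptotic = {asympt:.3e}   ratio = {asympt/exact:.4f}")
</syntaxhighlight>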


In general, the rescaled minimum <math>z = (E_{\min} - a_M)/b_M</math> is distributed according to an ''M''-independent distribution.


It is possible to generalize this result and classify the scaling forms into the '''Gumbel universality class''':
* '''Characteristics:'''
** Applies when the tails of <math>p(E)</math> decay faster than any power law.
** Examples: the Gaussian case discussed here or exponential distributions <math>p(E) = \exp(E) \quad \text{with} \quad E \in (-\infty, 0)</math>.
* '''Scaling Form:'''  <math> \exp(z)\,\exp(-e^{z}) </math> (see the numerical check below).
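The collapse onto the Gumbel scaling form can be illustrated numerically. The sketch below assumes the standard choice of scaling constants, <math>M\,P(a_M) = 1</math> and <math>b_M = 1/(M\,p(a_M))</math> (the convention used in the course may differ by subleading terms); for Gaussian tails the convergence is slow, so the collapse is only approximate at moderate <math>M</math>.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
M, n_samples, sigma = 10**4, 5000, 1.0

# Assumed convention for the scaling constants of the minimum:
# a_M solves M P(a_M) = 1, and b_M = 1 / (M p(a_M)).
a_M = norm.ppf(1.0 / M, scale=sigma)
b_M = 1.0 / (M * norm.pdf(a_M, scale=sigma))

# Sample the minimum of M Gaussian energies and rescale it.
E_min = np.array([rng.normal(0.0, sigma, size=M).min() for _ in range(n_samples)])
z = (E_min - a_M) / b_M

# Compare the empirical histogram with the Gumbel density e^z exp(-e^z).
hist, edges = np.histogram(z, bins=35, range=(-4.0, 3.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in zip(centers[::7], hist[::7]):
    print(f"z = {c:6.2f}   empirical = {h:.3f}   Gumbel = {np.exp(c - np.exp(c)):.3f}")
</syntaxhighlight>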
 
 
= Exercise 2: The Weakest Link and the Weibull Distribution =
 
Consider a chain of length <math>L</math> subjected to a tensile force <math>F</math>. 
Define <math>F_c</math> as the force required to break the chain. 
The goal of this exercise is to determine how <math>F_c</math> depends on <math>L</math> and to characterize its sample-to-sample fluctuations. 
Throughout the exercise, you work in the limit of large <math>L</math>.
 
 
Let <math>x_1, x_2, \dots, x_L</math> denote the strengths of the individual links. 
Assume that these are positive, identically distributed, and independent random variables. 
Take their common distribution to be the Gamma distribution with shape parameter <math>\alpha > 0</math>, where <math>\Gamma(\alpha)</math> denotes the Gamma function:
 
<center><math>
p(x) = \frac{x^{\alpha - 1}}{\Gamma(\alpha)} e^{-x}
</math></center>
 
'''Questions:''' 
 
* Compute the typical value <math>F_c^{typ}</math> and discuss its dependence on <math>L</math>. 
 
 
* According to extreme value theory, the probability that the strength of the weakest link exceeds <math>x</math> is
<center><math>
Q_L(x) \sim \exp\!\bigl[-L P(x)\bigr]
= \exp\!\!\left[-L \int_0^x p(t) \, dt \right]
</math></center>
Use the change of variable <math>z = \frac{x - a_L}{b_L}</math> with <math>a_L = 0</math> and <math>b_L = F_c^{typ}</math> to find an <math>L</math>-independent distribution (see the numerical sketch below).
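The sketch below is not a solution, only a numerical illustration of both questions. It fixes <math>F_c^{typ}</math> by the (assumed) criterion <math>L\,P(F_c^{typ}) = 1</math>, evaluated with SciPy's inverse regularized incomplete gamma function, and checks that <math>z = x_{\min}/F_c^{typ}</math> has an approximately <math>L</math>-independent distribution; the value of <math>\alpha</math> is illustrative.

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import gammaincinv

rng = np.random.default_rng(1)
alpha = 1.5        # illustrative shape parameter of the link strengths
n_samples = 2000

for L in [10**2, 10**3, 10**4]:
    # Typical breaking force, defined here by L * P(F_typ) = 1, where P is the
    # CDF of the Gamma distribution (regularized lower incomplete gamma function).
    F_typ = gammaincinv(alpha, 1.0 / L)

    # Strength of the weakest link, rescaled by F_typ.
    x_min = np.array([rng.gamma(alpha, 1.0, size=L).min() for _ in range(n_samples)])
    z = x_min / F_typ

    print(f"L = {L:6d}   F_typ = {F_typ:.4f}   mean(z) = {z.mean():.3f}   std(z) = {z.std():.3f}")
</syntaxhighlight>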
 
= Exercise 3: Number of states above the minimum =
 
 
'''Definition of <math> n(x) </math>:''' Given a realization of the random energies <math>\{E_1, E_2, \ldots, E_M\}</math>, define
 
<center><math> n(x) = \#\{ i \mid E_{\min} < E_i < E_{\min}+x \} </math></center> that is, the number of random variables lying above the minimum <math>E_{\min}</math> but less than <math>E_{\min}+x</math>. This is itself a random variable. We are interested in its mean value: <center><math> \overline{n(x)} = \sum_{k=0}^{M-1} k \, \text{Prob}[n(x)=k] </math></center>
 
'''The final goal''' is to show that, for large ''M'' (when the extremes are described by the Gumbel distribution), you have:
<center><math> \overline{n(x)}  = e^{x/b_M}-1 </math></center>
 
'''Step 1: Exact manipulations:''' You start from the exact expression for the probability of
<math>k</math> states in the interval:
<center><math> \text{Prob}[n(x)=k] = M \binom{M-1}{k} \int_{-\infty}^\infty dE \; p(E)\,[P(E+x)-P(E)]^{k}\,[1-P(E+x)]^{M-k-1} </math></center>
To compute <math>\overline{n(x)}</math>, you must sum over <math>k</math>.
Use the identity
<center><math> \sum_{k=0}^{M-1} k \binom{M-1}{k} (A-B)^k B^{M-1-k} = (A-B)\frac{d}{dA} \sum_{k=0}^{M-1} \binom{M-1}{k} (A-B)^k B^{M-1-k} = (M-1)(A-B)A^{M-2} </math></center>
to arrive at the form:
<center><math> \overline{n(x)} = M (M-1) \int_{-\infty}^\infty dE \; p(E) \left[P(E+x) - P(E)\right] (1-P(E))^{M-2}= - M \int_{-\infty}^\infty dE \; \left[P(E+x) - P(E)\right] \frac{d Q_{M-1}(E)}{dE} </math></center>
where <math>Q_{M-1}(E) = [1-P(E)]^{M-1}</math>.
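As an aside, the identity above can be verified by brute force for small <math>M</math>; the snippet below (purely illustrative) compares the two sides for arbitrary <math>A</math> and <math>B</math>.

<syntaxhighlight lang="python">
from math import comb
import random

random.seed(0)
for M in [3, 5, 10]:
    B = random.uniform(0.0, 0.5)
    A = B + random.uniform(0.0, 0.5)   # any A, B work; here 0 < B < A < 1
    lhs = sum(k * comb(M - 1, k) * (A - B)**k * B**(M - 1 - k) for k in range(M))
    rhs = (M - 1) * (A - B) * A**(M - 2)
    print(f"M = {M:2d}   lhs = {lhs:.6f}   rhs = {rhs:.6f}")
</syntaxhighlight>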
'''Step 2: The Gumbel limit:''' So far, no approximations have been made. To proceed, we use <math> Q_{M-1}(E)\approx Q_M(E)</math> and its asymptotic Gumbel form:
<center><math>
- \frac{d Q_{M-1}(E)}{dE} \, dE
\;\sim\;
\exp\!\!\left(\frac{E-a_M}{b_M}\right)
\exp\!\!\left[-\exp\!\!\left(\frac{E-a_M}{b_M}\right)\right]
\frac{dE}{b_M}
= e^{z} e^{-e^{z}} dz
</math></center> 
where <math>z = (E-a_M)/b_M</math>.
The main contribution to the integral comes from the region near  <math>E \approx a_M</math>, where <math>P(E) \approx e^{(E-a_M)/b_M}/M</math>.
Compute the integral and verify that you obtain:
<center><math>
\overline{n(x)} = \bigl(e^{x/b_M}-1\bigr)
\int_{-\infty}^{\infty} dz \, e^{2z - e^z}
= e^{x/b_M}-1
</math></center>
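As a final sanity check, the prediction <math>\overline{n(x)} = e^{x/b_M}-1</math> can be tested numerically. The sketch below again assumes the scaling constants <math>M\,P(a_M)=1</math> and <math>b_M = 1/(M\,p(a_M))</math>; since Gaussian tails reach the Gumbel regime slowly, the agreement at moderate <math>M</math> is only approximate.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
M, n_samples, sigma = 10**4, 2000, 1.0

# Assumed convention for the scaling constants of the minimum.
a_M = norm.ppf(1.0 / M, scale=sigma)
b_M = 1.0 / (M * norm.pdf(a_M, scale=sigma))

x_values = b_M * np.array([0.5, 1.0, 2.0])
counts = np.zeros((n_samples, len(x_values)))
for s in range(n_samples):
    E = rng.normal(0.0, sigma, size=M)
    E_min = E.min()
    # n(x): number of states above the minimum but below E_min + x
    # (the minimum itself is excluded by the "-1").
    counts[s] = [(E < E_min + x).sum() - 1 for x in x_values]

for x, n_avg in zip(x_values, counts.mean(axis=0)):
    print(f"x/b_M = {x/b_M:.1f}   measured <n(x)> = {n_avg:.3f}   predicted = {np.exp(x/b_M) - 1:.3f}")
</syntaxhighlight>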
