Goal: we will introduce the Anderson model and discuss its behaviour as a function of the dimension. In 1d, localization can be connected to the product of random matrices.

== Anderson model (tight binding model) ==
We consider disordered non-interacting particles hopping between nearest-neighbour sites on a lattice. The Hamiltonian reads:
<center> <math>
  H = - t \sum_{\langle i, j \rangle} (c_i^\dagger c_j + c_j^\dagger c_i) + \sum_i V_i c_i^\dagger c_i
</math></center>
The single-particle Hamiltonian in 1d reads
<center> <math>
  (H \psi)_n = -t (\psi_{n+1} + \psi_{n-1}) + V_n \psi_n
</math></center>
For simplicity we set the hopping <math>t=1</math>. The disorder <math>V_n</math> are iid random variables, drawn uniformly from the box <math>[-W, W]</math>.
The final goal is to study the statistical properties of the eigensystem
<center> <math>
  H \psi = \epsilon \psi, \quad \text{with} \quad \sum_{n=1}^L |\psi_n|^2 = 1
</math></center>


== Density of states (DOS) ==

In 1d and in absence of disorder, the dispersion relation is <math>\epsilon(k) = -2 t \cos k</math>. From the dispersion relation, we compute the density of states (DOS) <math>\rho(\epsilon)</math>:
<center> <math>
  \rho(\epsilon) = \frac{1}{\pi \sqrt{4 t^2 - \epsilon^2}}, \quad |\epsilon| < 2t
</math></center>


In presence of disorder the support of the DOS becomes larger, and the DOS displays sample-to-sample fluctuations. One can consider its mean value, averaged over disorder realizations.
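The averaged DOS can be checked numerically. Here is a minimal sketch by exact diagonalization, using the conventions of the text (<math>t=1</math>, <math>V_n</math> uniform in <math>[-W,W]</math>); the values of <code>L</code>, <code>W</code> and the number of disorder samples are illustrative choices:
<syntaxhighlight lang="python">
# Minimal sketch: disorder-averaged DOS of the 1d Anderson model.
# Assumptions: t = 1, V_n uniform in [-W, W]; L, W, n_samples illustrative.
import numpy as np

def anderson_hamiltonian(L, W, rng):
    """Dense 1d tight-binding Hamiltonian with open (Dirichlet) boundaries."""
    H = np.diag(rng.uniform(-W, W, size=L))   # on-site disorder V_n
    H -= np.diag(np.ones(L - 1), 1)           # hopping -t, with t = 1
    H -= np.diag(np.ones(L - 1), -1)
    return H

rng = np.random.default_rng(0)
L, W, n_samples = 400, 1.0, 50
energies = np.concatenate(
    [np.linalg.eigvalsh(anderson_hamiltonian(L, W, rng)) for _ in range(n_samples)]
)
hist, edges = np.histogram(energies, bins=100, density=True)  # mean DOS
print("band edges with disorder:", energies.min(), energies.max())  # beyond [-2, 2]
</syntaxhighlight>
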
==  Eigenstates ==
In absence of disorder the eigenstates are plane waves, delocalized over the whole system. In presence of disorder, three situations can occur; to distinguish them it is useful to introduce the inverse participation ratio, IPR
<center><math>
IPR(q)=\sum_n |\psi_n|^{2 q} \sim L^{-\tau_q}
</math></center>
The normalization imposes <math>\tau_1 =0 </math>. For <math>q=0</math>, <math>|\psi_n|^{2q} =1 </math> on each of the <math>L^d</math> sites, hence <math>\tau_0 =-d </math>.
* <Strong> Delocalized eigenstates</Strong> In this case, <math>|\psi_n|^{2} \approx L^{-d} </math>. Hence, we expect
<center><math>
IPR(q) \sim L^{d(1-q)},  \quad \tau_q = d(q-1)
</math></center>
* <Strong> Localized eigenstates</Strong> In this case, <math>|\psi_n|^{2} \approx \xi_{\text{loc}}^{-d} </math> on <math>\xi_{\text{loc}}^{d}</math> sites and almost zero elsewhere. Hence, we expect
<center><math>
IPR(q)= \text{const},  \quad \tau_q=0
</math></center>
* <Strong> Multifractal eigenstates.</Strong> At the transition (the mobility edge) an anomalous scaling is observed:
<center><math>
IPR(q) \sim L^{D_q(1-q)},  \quad \tau_q = D_q(q-1)
</math></center>
Here <math>D_q</math> is the <math>q</math>-dependent multifractal dimension, smaller than <math>d</math> and larger than zero.
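These scalings can be probed numerically. A minimal sketch computing <math>IPR(2)</math> of a mid-spectrum eigenstate for increasing sizes (the parameters are illustrative; in 1d all states are localized, so the disordered IPR saturates with <math>L</math>, while the clean one decays as <math>L^{-1}</math>):
<syntaxhighlight lang="python">
# Minimal sketch: size dependence of IPR(q) for a mid-spectrum eigenstate.
# Illustrative parameters; in 1d the disordered IPR saturates (localized),
# while the clean one decays as 1/L (delocalized).
import numpy as np

def ipr(psi, q):
    """Inverse participation ratio sum_n |psi_n|^{2q} of one eigenvector."""
    return np.sum(np.abs(psi) ** (2 * q))

rng = np.random.default_rng(1)
for L in (100, 200, 400, 800):
    for W in (0.0, 1.0):                       # clean vs disordered chain
        H = np.diag(rng.uniform(-W, W, size=L))
        H -= np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
        psi = np.linalg.eigh(H)[1][:, L // 2]  # mid-spectrum eigenvector
        print(L, W, ipr(psi, q=2))
</syntaxhighlight>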


== Transfer matrices and Lyapunov exponents ==

=== Product of random variables and Central limit theorem ===

Consider a set of positive iid random variables <math>x_1, \ldots, x_N</math>, with finite mean and variance, and compute their product
<center><math>
\Pi_N = \prod_{n=1}^N x_n
</math></center>
For large <math>N</math>, the Central Limit Theorem predicts:
<center><math>
\ln \Pi_N \approx \gamma N + \gamma_2 \sqrt{N} \, z
</math></center>
where
* <math>z</math> is a Gaussian number of zero mean and unit variance
* the <math>\ln x_n</math> are <math>N</math> independent random variables, so <math>\ln \Pi_N = \sum_{n=1}^N \ln x_n</math> can be written as above, with <math>\gamma = \overline{\ln x}</math> and <math>\gamma_2^2 = \overline{(\ln x)^2} - \overline{\ln x}^2</math>

=== Log-normal distribution ===
The distribution of <math>\Pi_N</math> is log-normal
<center><math>
P(\Pi_N) \, d\Pi_N = \frac{1}{\sqrt{2 \pi \gamma_2^2 N}} \exp\left[-\frac{(\ln \Pi_N - \gamma N)^2}{2 \gamma_2^2 N}\right] \frac{d\Pi_N}{\Pi_N}
</math></center>


<Strong> Quenched and Annealed averages </Strong>

To compute the moments of the log-normal distribution, it is convenient to introduce the variable
<center><math> X \equiv \ln(\Pi_N) </math></center> which is Gaussian distributed:
 
<center><math> p(X) = \frac{1}{ \sqrt{2 \pi \sigma^2}} \exp\left[-\frac{(X-\mu)^2}{2 \sigma^2}\right] </math></center>
with <math>\mu =\gamma N</math> and <math>\sigma^2=\gamma_2^2 N</math>.
 
The moments of <math>\Pi_N</math> can be easily computed: <center><math>\overline{\Pi_N^n} = \int dX \, e^{nX} p(X) = \exp\left[\mu n +\sigma^2 \frac{n^2}{2} \right]=\exp\left[(\gamma n +\gamma_2^2 \frac{n^2}{2})N \right] </math> </center>
 
The variable <math>\Pi_N</math> is therefore not self-averaging (see Valentina's lecture 1) since its fluctuations grow with <math>N</math> faster than its mean:
<center><math> \frac{\overline{\Pi_N^2}}{(\overline{\Pi_N})^2}= \exp\left[\gamma_2^2 N \right] </math></center>
 
Hence, <math>\Pi_N</math> is not self-averaging, while <math>\ln \Pi_N</math> is self-averaging.
 
In particular, the mean <math>\overline{\Pi_N} = \exp[(\gamma+\gamma_2^2/2) N]</math> grows much faster than the typical value <math>\Pi_N^{\text{typ}} \equiv \exp(\gamma N)</math>.
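A minimal numerical check of the difference between the two averages (the values of <math>\gamma</math>, <math>\gamma_2</math>, <math>N</math> and the number of samples are illustrative choices):
<syntaxhighlight lang="python">
# Minimal sketch: quenched vs annealed averages of Pi_N for lognormal
# factors, ln(x) ~ Normal(gamma, gamma_2^2). Parameter values illustrative.
import numpy as np

rng = np.random.default_rng(2)
N, n_samples, gamma, gamma_2 = 100, 100_000, 0.0, 0.1
log_Pi = rng.normal(gamma, gamma_2, size=(n_samples, N)).sum(axis=1)  # ln Pi_N

quenched = log_Pi.mean() / N                  # -> gamma = 0 (self-averaging)
annealed = np.log(np.exp(log_Pi).mean()) / N  # -> gamma + gamma_2^2/2 = 0.005
print(quenched, annealed)
</syntaxhighlight>
For larger <math>\gamma_2</math> the empirical annealed average is dominated by rare samples and converges very slowly: this is the non-self-averaging of <math>\Pi_N</math> at work.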


== Product of random matrices ==


Let's consider again the Anderson model in 1d. The eigensystem is well defined in a box of size <math>L</math> with Dirichlet boundary conditions at the extremities of the box.

Here we will solve the second-order difference equation imposing instead Cauchy boundary conditions on one side of the box. Let's rewrite the previous eigensystem in the following form (recall <math>t=1</math>)
<center><math>
\psi_{n+1} = (V_n - \epsilon) \psi_n - \psi_{n-1}
</math></center>

We can continue the recursion: given the Cauchy data <math>(\psi_0, \psi_1)</math>, all <math>\psi_n</math> with <math>n \ge 2</math> follow.

It is useful to introduce the transfer matrices <math>T_n</math> and their product <math>M_N</math>:
<center><math>
T_n = \begin{pmatrix} V_n - \epsilon & -1 \\ 1 & 0 \end{pmatrix}, \qquad M_N = T_N T_{N-1} \cdots T_1
</math></center>

The Schrödinger equation can be written as
<center><math>
\begin{pmatrix} \psi_{N+1} \\ \psi_N \end{pmatrix} = M_N \begin{pmatrix} \psi_1 \\ \psi_0 \end{pmatrix}
</math></center>
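As a consistency check, a minimal sketch verifying that the product of transfer matrices reproduces the direct recursion (the energy, disorder box and chain length are illustrative choices):
<syntaxhighlight lang="python">
# Minimal sketch: M_N (psi_1, psi_0) reproduces the recursion
# psi_{n+1} = (V_n - E) psi_n - psi_{n-1}, with t = 1.
import numpy as np

rng = np.random.default_rng(3)
N, E = 50, 0.3
V = rng.uniform(-1.0, 1.0, size=N + 1)   # V[1..N] are used; V[0] unused

# Direct recursion with Cauchy data psi_0 = 0, psi_1 = 1
psi = np.zeros(N + 2)
psi[1] = 1.0
for n in range(1, N + 1):
    psi[n + 1] = (V[n] - E) * psi[n] - psi[n - 1]

# Same result from the product M_N = T_N ... T_1
M = np.eye(2)
for n in range(1, N + 1):
    M = np.array([[V[n] - E, -1.0], [1.0, 0.0]]) @ M
print(np.allclose(M @ [1.0, 0.0], psi[[N + 1, N]]))  # True
</syntaxhighlight>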

=== Furstenberg theorem ===

We define the norm of a 2x2 matrix <math>A</math>:
<center><math>
\|A\| = \sqrt{\text{Tr}\,(A^\dagger A)}
</math></center>
(any choice of matrix norm gives the same Lyapunov exponent below).

For large <math>N</math>, the Furstenberg theorem ensures the existence of a non-negative Lyapunov exponent, namely
<center><math>
\gamma = \lim_{N \to \infty} \frac{1}{N} \overline{\ln \|M_N\|} \ge 0
</math></center>

In absence of disorder, <math>\gamma = 0</math> for <math>|\epsilon| < 2</math> (inside the band). Generically the Lyapunov exponent is positive, <math>\gamma > 0</math>, and depends on <math>\epsilon</math> and on the distribution of <math>V</math>.
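The Lyapunov exponent can be estimated from the growth rate of a generic vector under the product of transfer matrices, renormalizing at each step to avoid overflow. A minimal sketch (the energy, disorder strength and number of steps are illustrative choices):
<syntaxhighlight lang="python">
# Minimal sketch: Lyapunov exponent of the 1d Anderson transfer matrices.
# For a generic initial vector the growth rate converges to gamma.
import numpy as np

def lyapunov(E, W, N, rng):
    v = np.array([1.0, 0.0])       # generic initial direction
    log_growth = 0.0
    for _ in range(N):
        Vn = rng.uniform(-W, W)
        v = np.array([(Vn - E) * v[0] - v[1], v[0]])  # apply one T_n
        norm = np.hypot(v[0], v[1])
        log_growth += np.log(norm)  # accumulate the log growth factor
        v /= norm                   # renormalize to keep numbers finite
    return log_growth / N           # -> gamma for large N

rng = np.random.default_rng(4)
print(lyapunov(E=0.0, W=1.0, N=200_000, rng=rng))  # gamma > 0 with disorder
print(lyapunov(E=0.0, W=0.0, N=200_000, rng=rng))  # gamma = 0 inside the band
</syntaxhighlight>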

=== Consequences ===

<Strong> Localization length </Strong>

Together with the norm, <math>|\psi_n|</math> also grows exponentially with <math>n</math>. We can write
<center><math>
\ln |\psi_n| \approx \gamma n + \gamma_2 \sqrt{n} \, z
</math></center>
which means that <math>\ln |\psi_n|</math> is performing a random walk with a drift.


However, our initial goal is a properly normalized eigenstate at energy <math>\epsilon</math>. We need to switch from Cauchy boundary conditions, where one sets the initial data, to Dirichlet or von Neumann, where one sets the behaviour at the two boundaries. The true eigenstate is obtained by matching two "Cauchy" solutions on the two halves of the box and imposing the normalization. Hence, we obtain a localized eigenstate and we can identify the localization length with the inverse Lyapunov exponent:
<center><math>
\xi_{\text{loc}} = \frac{1}{\gamma}
</math></center>

<Strong> Fluctuations </Strong>

We expect strong fluctuations of quantities like <math>|\psi_n|</math>, while their logarithm is self-averaging.