Goal: we will introduce the Anderson model and discuss its behaviour as a function of the dimension. In 1d, localization can be connected to the product of random matrices.
==Anderson model (tight-binding model)==
We consider disordered non-interacting particles hopping between nearest-neighbor sites on a lattice. The Hamiltonian reads:
<center><math>
H= - t \sum_{\langle i, j \rangle} (c_i^\dagger c_j +c_j^\dagger c_i) + \sum_i V_i c_i^\dagger c_i
</math></center>
The single-particle Hamiltonian in 1d reads
<center><math>
H = -t \sum_{n} \left( |n\rangle\langle n+1| + |n+1\rangle\langle n| \right) + \sum_n V_n |n\rangle\langle n|
</math></center>
For simplicity we set the hopping <math>t=1</math>. The disorder potentials <math>V_n</math> are i.i.d. random variables drawn uniformly from the box <math>[-W, W]</math>.
The final goal is to study the statistical properties of the eigensystem
<center><math>
H \psi=\epsilon \psi, \quad \text{with} \;\; \sum_{n=1}^L |\psi_n|^2=1
</math></center>
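As a minimal numerical sketch of this setup, the following builds the 1d Anderson Hamiltonian as a dense matrix and diagonalizes it, assuming <math>t=1</math> and the uniform box <math>[-W,W]</math> for the disorder; the system size <code>L</code> and disorder strength <code>W</code> are illustrative choices.
<syntaxhighlight lang="python">
import numpy as np

def anderson_1d(L, W, seed=None):
    """1d Anderson Hamiltonian with t = 1, Dirichlet (open) boundaries,
    and on-site energies V_n drawn uniformly from [-W, W]."""
    rng = np.random.default_rng(seed)
    V = rng.uniform(-W, W, size=L)
    hop = -np.ones(L - 1)
    return np.diag(V) + np.diag(hop, 1) + np.diag(hop, -1)

H = anderson_1d(L=200, W=1.0, seed=0)
eps, psi = np.linalg.eigh(H)                    # eigenvalues, orthonormal eigenvectors
print(eps[:3], np.sum(np.abs(psi[:, 0])**2))    # lowest levels; each eigenvector has norm 1
</syntaxhighlight>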
==Density of states (DOS)==
In 1d and in the absence of disorder, the dispersion relation is <math>\epsilon(k) = -2 \cos k</math>, with <math>k \in [-\pi, \pi]</math>. From the dispersion relation, we compute the density of states (DOS):
<center><math>
\rho(\epsilon) = \frac{1}{\pi \sqrt{4-\epsilon^2}}, \qquad |\epsilon| < 2
</math></center>
In the presence of disorder the support of the DOS broadens, and the DOS displays sample-to-sample fluctuations. One can consider its mean value, averaged over disorder realizations.
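As an illustration (a sketch under the same assumptions as above), one can estimate the disorder-averaged DOS from the eigenvalues of many disorder realizations and compare it with the clean-lattice formula:
<syntaxhighlight lang="python">
import numpy as np

L, W, samples = 400, 1.0, 50
rng = np.random.default_rng(1)
eigs = []
for _ in range(samples):
    V = rng.uniform(-W, W, size=L)
    H = np.diag(V) - np.diag(np.ones(L - 1), 1) - np.diag(np.ones(L - 1), -1)
    eigs.append(np.linalg.eigvalsh(H))
eigs = np.concatenate(eigs)

# disorder-averaged DOS as a normalized histogram
rho, edges = np.histogram(eigs, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# clean-lattice DOS 1/(pi sqrt(4 - eps^2)) on the same grid, for comparison
rho_clean = np.zeros_like(centers)
inside = np.abs(centers) < 2
rho_clean[inside] = 1.0 / (np.pi * np.sqrt(4 - centers[inside]**2))
print(np.c_[centers, rho, rho_clean][:5])
</syntaxhighlight>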
==Transfer matrices and Lyapunov exponents==
<Strong> Product of random variables and the central limit theorem </Strong>
Consider a set of <math>N</math> positive i.i.d. random variables <math>x_1, \ldots, x_N</math> with finite mean and variance, and compute their product
<center><math>
\Pi_N = \prod_{i=1}^{N} x_i
</math></center>
For large <math>N</math>, the Central Limit Theorem predicts:
<center><math>
\ln \Pi_N = \sum_{i=1}^N \ln x_i \sim \gamma N + \gamma_2 \sqrt{N} \, \chi
</math></center>
where
* <math>\chi</math> is a Gaussian number of zero mean and unit variance
* <math>\ln x_i</math> are <math>N</math> independent random variables, whose mean and variance can be written as <math>\gamma = \overline{\ln x_i}</math> and <math>\gamma_2^2 = \overline{(\ln x_i)^2} - \overline{\ln x_i}^2</math>
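A quick sanity check of this prediction by sampling (a sketch; the uniform distribution of the <math>x_i</math> is an arbitrary illustrative choice):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
N, samples = 500, 5000
x = rng.uniform(0.5, 1.5, size=(samples, N))   # positive iid variables
log_x = np.log(x)
log_Pi = log_x.sum(axis=1)                     # ln Pi_N = sum_i ln x_i

gamma, gamma2 = log_x.mean(), log_x.std()      # gamma = mean of ln x, gamma_2 = its std
print(log_Pi.mean(), gamma * N)                # drift grows like gamma N
print(log_Pi.var(), gamma2**2 * N)             # variance grows like gamma_2^2 N
</syntaxhighlight>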
<Strong> Log-normal distribution </Strong>
The distribution of <math>\Pi_N</math> is log-normal
<center><math>
P(\Pi_N) d \Pi_N = \frac{1}{ \sqrt{2 \pi \gamma_2^2 N}} \exp\left[-\frac{(\ln \Pi_N-\gamma N)^2}{2 \gamma_2^2 N}\right] \frac{d\Pi_N}{\Pi_N}
</math></center>
<Strong> Quenched and Annealed averages </Strong>
To compute the moments of the log-normal distribution, it is convenient to introduce the variable
<center><math> X \equiv \ln(\Pi_N) </math></center>
which is Gaussian distributed:
<center><math> p(X) = \frac{1}{ \sqrt{2 \pi \sigma^2}} \exp\left[-\frac{(X-\mu)^2}{2 \sigma^2}\right] </math></center>
with <math>\mu =\gamma N</math> and <math>\sigma^2=\gamma_2^2 N</math>.
The moments of <math>\Pi_N</math> can be easily computed:
<center><math>\overline{\Pi_N^n} = \int dX \, e^{nX} p(X) = \exp\left[\mu n +\sigma^2 \frac{n^2}{2} \right]=\exp\left[\left(\gamma n +\gamma_2^2 \frac{n^2}{2}\right)N \right] </math></center>
The variable <math>\Pi_N</math> is therefore not self-averaging (see Valentina's lecture 1) since its fluctuations grow with <math>N</math> faster than its mean:
<center><math> \frac{\overline{\Pi_N^2}}{(\overline{\Pi_N})^2}= \exp\left[\gamma_2^2 N \right] </math></center>
Hence, <math>\Pi_N</math> is not self-averaging, while <math>\ln \Pi_N</math> is self-averaging.
In particular, the mean <math>\overline{\Pi_N} = \exp[(\gamma+\gamma_2^2/2) N]</math> grows much faster than the typical value <math>\Pi_N^{\text{typ}} \equiv \exp(\gamma N)</math>.
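To see this numerically, here is a sketch where the <math>\ln x_i</math> are taken Gaussian, so that <math>\Pi_N</math> is exactly log-normal and the moment formulas hold exactly; the parameters are chosen (an illustrative choice) so that the mean grows while the typical value decays:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
gamma, gamma2 = -0.05, np.sqrt(0.2)   # gamma + gamma_2^2/2 > 0 while gamma < 0
N, samples = 30, 200000

# ln x_i Gaussian  =>  Pi_N exactly log-normal
log_x = gamma + gamma2 * rng.standard_normal(size=(samples, N))
Pi = np.exp(log_x.sum(axis=1))

print(np.log(Pi.mean()), (gamma + gamma2**2 / 2) * N)   # annealed: ln(mean) ~ +1.5
print(np.log(Pi).mean(), gamma * N)                     # quenched: typical  ~ -1.5
</syntaxhighlight>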
==Product of random matrices==
Let's consider again the Anderson model in 1d. The eigensystem is well defined in a box of size <math>L</math> with Dirichlet boundary conditions at the extremities of the box.
Here we will solve the second-order difference equation imposing instead Cauchy boundary conditions on one side of the box, i.e. fixing <math>\psi_0</math> and <math>\psi_1</math>. Let's rewrite the previous eigensystem in the following form
<center><math>
\psi_{n+1} = (V_n - \epsilon)\, \psi_n - \psi_{n-1}
</math></center>
We can continue the recursion
<center><math>
\begin{pmatrix} \psi_{n+1} \\ \psi_{n} \end{pmatrix} = \begin{pmatrix} V_n-\epsilon & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \psi_{n} \\ \psi_{n-1} \end{pmatrix} = \begin{pmatrix} V_n-\epsilon & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} V_{n-1}-\epsilon & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \psi_{n-1} \\ \psi_{n-2} \end{pmatrix} = \ldots
</math></center>
It is useful to introduce the transfer matrices and their product
<center><math>
T_n = \begin{pmatrix} V_n-\epsilon & -1 \\ 1 & 0 \end{pmatrix}, \qquad \Pi_N = T_N T_{N-1} \cdots T_1
</math></center>
The Schrödinger equation can then be written as
<center><math>
\begin{pmatrix} \psi_{N+1} \\ \psi_{N} \end{pmatrix} = \Pi_N \begin{pmatrix} \psi_{1} \\ \psi_{0} \end{pmatrix}
</math></center>
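A short sketch checking that the product of transfer matrices reproduces the direct recursion (all parameters are illustrative):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
N, W, eps = 10, 1.0, 0.3
V = rng.uniform(-W, W, size=N + 1)    # V[1..N] are used

# direct recursion with Cauchy data psi_0 = 0, psi_1 = 1
psi = np.zeros(N + 2)
psi[1] = 1.0
for n in range(1, N + 1):
    psi[n + 1] = (V[n] - eps) * psi[n] - psi[n - 1]

# the same evolution as a product of 2x2 transfer matrices, Pi_N = T_N ... T_1
Pi = np.eye(2)
for n in range(1, N + 1):
    Pi = np.array([[V[n] - eps, -1.0], [1.0, 0.0]]) @ Pi

print(Pi @ np.array([psi[1], psi[0]]))   # equals (psi_{N+1}, psi_N)
print(psi[N + 1], psi[N])
</syntaxhighlight>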
<Strong> Furstenberg theorem </Strong>
We define the norm of a <math>2 \times 2</math> matrix <math>A</math> (for instance the Frobenius norm):
<center><math>
\|A\| = \sqrt{\sum_{i,j} A_{ij}^2}
</math></center>
For large <math>N</math>, the Furstenberg theorem ensures the existence of the non-negative Lyapunov exponent, namely
<center><math>
\gamma(\epsilon) = \lim_{N \to \infty} \frac{1}{N} \ln \|\Pi_N\| \ge 0
</math></center>
In the absence of disorder, <math>\gamma(\epsilon) = 0</math> for <math>|\epsilon| \le 2</math>. Generically the Lyapunov exponent is positive, <math>\gamma(\epsilon) > 0</math>, and depends on <math>\epsilon</math> and on the distribution of the <math>V_n</math>.
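A numerical sketch of the Lyapunov exponent: for large <math>N</math> the growth rate of a generic vector under the transfer matrices equals that of <math>\|\Pi_N\|</math>, so one can follow a single vector and renormalize it at every step to avoid overflow (parameters are illustrative):
<syntaxhighlight lang="python">
import numpy as np

def lyapunov(eps, N=100000, W=1.0, seed=0):
    """Estimate gamma(eps) = lim (1/N) ln ||Pi_N|| by following a generic
    vector under the transfer matrices, renormalizing at every step."""
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 0.0])
    log_norm = 0.0
    for V in rng.uniform(-W, W, size=N):
        v = np.array([(V - eps) * v[0] - v[1], v[0]])   # apply T_n
        norm = np.hypot(v[0], v[1])
        v /= norm
        log_norm += np.log(norm)
    return log_norm / N

print(lyapunov(eps=0.0))   # positive for any W > 0
</syntaxhighlight>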
==Consequences==
<Strong> Localization length </Strong>
Together with the norm, <math> |\psi_n|^2</math> also grows exponentially with <math>n</math>. We can write
<center><math>
\ln |\psi_n| \sim \gamma n + \gamma_2 \chi \sqrt{n}
</math></center>
which means that <math> \ln |\psi_n| </math> is performing a random walk with a drift.
However, our initial goal is a properly normalized eigenstate at energy <math>\epsilon</math>. We need to switch from Cauchy boundary conditions, where one sets the initial condition, to Dirichlet or Neumann boundary conditions, where one sets the behaviour at the two boundaries. The true eigenstate is obtained by matching two "Cauchy" solutions on the two halves of the box and imposing the normalization. Hence, we obtain a localized eigenstate and we can identify the localization length with the inverse Lyapunov exponent:
<center><math>
\xi_{\text{loc}}(\epsilon) \equiv \gamma^{-1}(\epsilon)
</math></center>
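As a closing sketch, one can extract <math>\xi_{\text{loc}}</math> directly from the exponential decay of an exact eigenstate and compare it with <math>\gamma^{-1}(\epsilon)</math> (system size and disorder strength are illustrative):
<syntaxhighlight lang="python">
import numpy as np

L, W = 600, 2.0
rng = np.random.default_rng(5)
V = rng.uniform(-W, W, size=L)
H = np.diag(V) - np.diag(np.ones(L - 1), 1) - np.diag(np.ones(L - 1), -1)
eps, psi = np.linalg.eigh(H)

k = np.argmin(np.abs(eps))              # eigenstate closest to the band center
amp = np.abs(psi[:, k])
n0 = np.argmax(amp)                     # localization center
d = np.abs(np.arange(L) - n0)

# away from the center, ln|psi_n| ~ -|n - n0| / xi_loc
mask = (d > 10) & (amp > 1e-12)
slope = np.polyfit(d[mask], np.log(amp[mask]), 1)[0]
print("xi_loc ~", -1.0 / slope)         # to be compared with 1 / gamma(eps)
</syntaxhighlight>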
<Strong> Fluctuations </Strong>
We expect strong fluctuations of quantities like <math>|\psi_n|^2</math>, while their logarithm is self-averaging.