Goal: we will introduce the Anderson model and discuss its behaviour as a function of the dimension. In 1d, localization can be connected to the product of random matrices.
Anderson model (tight binding model)
We consider disordered non-interacting particles hopping between nearest-neighbour sites on a lattice. The Hamiltonian reads:

$$ H = -t \sum_{\langle i,j\rangle} \left( c_i^\dagger c_j + c_j^\dagger c_i \right) + \sum_i \epsilon_i \, c_i^\dagger c_i $$
The single-particle Hamiltonian in 1d reads

$$ H = \sum_n \left[ -t \left( |n\rangle\langle n+1| + |n+1\rangle\langle n| \right) + \epsilon_n \, |n\rangle\langle n| \right] $$
For simplicity we set the hopping $t = 1$. The disorder potentials $\epsilon_n$ are iid random variables drawn uniformly from the box $[-W/2, W/2]$.
The final goal is to study the statistical properties of the eigensystem

$$ E^\alpha \, \psi_n^\alpha = -\left( \psi_{n+1}^\alpha + \psi_{n-1}^\alpha \right) + \epsilon_n \, \psi_n^\alpha $$
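As a concrete illustration, the eigensystem above can be obtained by exact diagonalization. The following is a minimal sketch, assuming the conventions introduced above (hopping $t=1$, box disorder, open Dirichlet boundaries); the values $L=200$ and $W=1$ are arbitrary choices.

```python
import numpy as np

def anderson_hamiltonian(L, W, rng):
    """Dense 1d Anderson Hamiltonian with t = 1 and open boundaries."""
    eps = rng.uniform(-W / 2, W / 2, size=L)  # iid on-site disorder
    hop = -np.ones(L - 1)                     # hopping amplitude t = 1
    return np.diag(eps) + np.diag(hop, 1) + np.diag(hop, -1)

rng = np.random.default_rng(0)
L, W = 200, 1.0
H = anderson_hamiltonian(L, W, rng)
E, psi = np.linalg.eigh(H)  # E[a] = E^alpha, psi[:, a] = psi^alpha
```

The columns of `psi` are the normalized eigenstates whose statistics (DOS, IPR) are studied below.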
Density of states (DOS)
In 1d and in absence of disorder, the dispersion relation is

$$ E(q) = -2 \cos q, \qquad q \in [-\pi, \pi] $$

From the dispersion relation, we compute the density of states (DOS):

$$ \rho(E) = \frac{1}{\pi \sqrt{4 - E^2}}, \qquad |E| < 2 $$
In the presence of disorder the support of the DOS becomes larger, and the DOS displays sample-to-sample fluctuations. One can consider its mean value, averaged over disorder realizations.
Eigenstates
In absence of disorder the eigenstates are plane waves, delocalized over the whole system. In presence of disorder, three situations can occur, and to distinguish them it is useful to introduce the inverse participation ratio (IPR)

$$ \mathrm{IPR}_q = \sum_{n=1}^{L} |\psi_n|^{2q} $$
The normalization imposes $\mathrm{IPR}_1 = \sum_n |\psi_n|^2 = 1$. For $q > 1$, $|\psi_n|^{2q} \le |\psi_n|^2$, hence, $\mathrm{IPR}_q \le 1$.
- Delocalized eigenstates. In this case, $|\psi_n|^2 \sim 1/L$ on all sites. Hence, we expect

$$ \mathrm{IPR}_q \sim L \cdot L^{-q} = L^{1-q} $$
- Localized eigenstates. In this case, $|\psi_n|^2 \sim 1/\xi_{\mathrm{loc}}$ on $\sim \xi_{\mathrm{loc}}$ sites and almost zero elsewhere. Hence, we expect

$$ \mathrm{IPR}_q \sim \xi_{\mathrm{loc}}^{1-q} $$

independent of the system size $L$.
- Multifractal eigenstates. At the transition (the mobility edge) an anomalous scaling is observed:

$$ \mathrm{IPR}_q \sim L^{-D_q (q-1)} $$
Here $D_q$ is a $q$-dependent multifractal dimension, smaller than the spatial dimension $d$ and larger than zero.
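The first two scalings can be checked on synthetic wavefunctions. A minimal sketch; the plane wave and the box-shaped localized profile below are idealized examples, not actual Anderson eigenstates:

```python
import numpy as np

def ipr(psi, q=2):
    """Inverse participation ratio IPR_q = sum_n |psi_n|^(2q)."""
    p = np.abs(psi) ** 2
    return np.sum(p ** q)

L, xi = 1000, 10
plane = np.exp(1j * 0.3 * np.arange(L)) / np.sqrt(L)  # |psi_n|^2 = 1/L everywhere
box = np.zeros(L)
box[:xi] = 1.0 / np.sqrt(xi)                          # weight 1/xi on xi sites
```

For the plane wave $\mathrm{IPR}_2 = 1/L$, shrinking with the system size; for the box profile $\mathrm{IPR}_2 = 1/\xi$, independent of $L$.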
Transfer matrices and Lyapunov exponents
Product of random variables and Central limit theorem
Consider a set of positive iid random variables $x_1, \dots, x_N$ with finite mean and variance, and compute their product

$$ \Pi_N = \prod_{i=1}^{N} x_i $$
For large $N$, the Central Limit Theorem predicts:

$$ \ln \Pi_N \approx \gamma N + \sqrt{N}\, \gamma_2\, z $$

Here $z$ is a Gaussian number of zero mean and unit variance. Indeed, the $\ln x_i$ are $N$ independent variables, and $\ln \Pi_N$ can be written as

$$ \ln \Pi_N = \sum_{i=1}^{N} \ln x_i, \qquad \gamma = \overline{\ln x_i}, \qquad \gamma_2^2 = \overline{(\ln x_i)^2} - \overline{\ln x_i}^2 $$
Log-normal distribution
The distribution of $\Pi_N$ is log-normal:

$$ P(\Pi_N)\, d\Pi_N = \frac{1}{\sqrt{2\pi \gamma_2^2 N}} \exp\left[ -\frac{(\ln \Pi_N - \gamma N)^2}{2 \gamma_2^2 N} \right] \frac{d\Pi_N}{\Pi_N} $$
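The Gaussian statistics of $\ln \Pi_N$ is easy to verify numerically. A minimal sketch, assuming for concreteness $x_i$ uniform in $[0.5, 1.5]$ (an arbitrary choice); for this distribution $\gamma = \overline{\ln x} = \tfrac{3}{2}\ln\tfrac{3}{2} - \tfrac{1}{2}\ln\tfrac{1}{2} - 1 \approx -0.045$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_samples = 1000, 2000
x = rng.uniform(0.5, 1.5, size=(n_samples, N))
log_pi = np.log(x).sum(axis=1)  # one value of ln Pi_N per disorder sample

# Exact gamma = E[ln x] for x uniform in [0.5, 1.5]
gamma_exact = 1.5 * np.log(1.5) - 0.5 * np.log(0.5) - 1.0
```

The empirical mean of `log_pi` approaches $\gamma N$ and its standard deviation grows like $\gamma_2 \sqrt{N}$, as predicted.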
Quenched and Annealed averages
To compute the moments of the log-normal distribution, it is convenient to introduce the variable

$$ X = \ln \Pi_N $$

which is Gaussian distributed:

$$ p(X) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[ -\frac{(X-\mu)^2}{2\sigma^2} \right] $$

with $\mu = \gamma N$ and $\sigma^2 = \gamma_2^2 N$.
The moments of $\Pi_N$ can be easily computed:

$$ \overline{\Pi_N^n} = \int dX\, e^{nX} p(X) = \exp\left[ \mu n + \sigma^2 \frac{n^2}{2} \right] = \exp\left[ \left( \gamma n + \gamma_2^2 \frac{n^2}{2} \right) N \right] $$
The variable $\Pi_N$ is therefore not self-averaging (see Valentina's lecture 1), since its fluctuations grow with $N$ faster than its mean:

$$ \frac{\overline{\Pi_N^2}}{\left( \overline{\Pi_N} \right)^2} = \exp\left[ \gamma_2^2 N \right] $$
Hence, $\Pi_N$ is not self-averaging, while $\ln \Pi_N$ is self-averaging. In particular, the mean $\overline{\Pi_N} = \exp[(\gamma + \gamma_2^2/2) N]$ grows much faster than the typical value $\Pi_N^{\mathrm{typ}} = e^{\gamma N}$.
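The gap between the annealed (mean) and typical values can be seen directly in a simulation. A sketch, again assuming $x_i$ uniform in $[0.5, 1.5]$, chosen so that $\overline{x} = 1$ (hence $\overline{\Pi_N} = 1$ exactly for all $N$) while $\gamma < 0$ (hence the typical value decays exponentially):

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_samples = 200, 20000
x = rng.uniform(0.5, 1.5, size=(n_samples, N))  # mean(x) = 1, gamma = E[ln x] < 0
log_pi = np.log(x).sum(axis=1)                  # ln Pi_N, sample by sample

typical = np.exp(np.median(log_pi))  # typical value, ~ e^{gamma N} -> 0
annealed = np.exp(log_pi).mean()     # estimate of the mean, dominated by rare samples
```

The sample estimate of the annealed average is itself wildly fluctuating (it is dominated by the few largest samples), which is another face of the lack of self-averaging.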
Product of random matrices
Let's consider again the Anderson model in 1d. The eigensystem is well defined in a box of size $L$ with Dirichlet boundary conditions at the extremities of the box.
Here we will solve the second-order difference equation, imposing instead Cauchy boundary conditions on one side of the box. Let's rewrite the previous eigensystem in the following form

$$ \psi_{n+1} = (\epsilon_n - E)\, \psi_n - \psi_{n-1} $$
We can continue the recursion

$$ \begin{pmatrix} \psi_{n+1} \\ \psi_n \end{pmatrix} = \begin{pmatrix} \epsilon_n - E & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \psi_n \\ \psi_{n-1} \end{pmatrix} $$
It is useful to introduce the transfer matrices and their product

$$ T_n = \begin{pmatrix} \epsilon_n - E & -1 \\ 1 & 0 \end{pmatrix}, \qquad M_N = T_N T_{N-1} \cdots T_1 $$
The Schrödinger equation can be written as

$$ \begin{pmatrix} \psi_{N+1} \\ \psi_N \end{pmatrix} = M_N \begin{pmatrix} \psi_1 \\ \psi_0 \end{pmatrix} $$
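The equivalence between the scalar recursion and the transfer-matrix product can be checked numerically. A minimal sketch with arbitrary choices of $N$, $W$ and $E$:

```python
import numpy as np

def transfer_matrix(eps_n, E):
    """T_n mapping (psi_n, psi_{n-1}) -> (psi_{n+1}, psi_n), with t = 1."""
    return np.array([[eps_n - E, -1.0],
                     [1.0,        0.0]])

rng = np.random.default_rng(4)
N, W, E = 50, 1.0, 0.3
eps = rng.uniform(-W / 2, W / 2, size=N + 1)

# Scalar recursion psi_{n+1} = (eps_n - E) psi_n - psi_{n-1}
psi = np.zeros(N + 2)
psi[0], psi[1] = 0.0, 1.0  # Cauchy data on the left side of the box
for n in range(1, N + 1):
    psi[n + 1] = (eps[n] - E) * psi[n] - psi[n - 1]

# Same evolution via the product M_N = T_N ... T_1
M = np.eye(2)
for n in range(1, N + 1):
    M = transfer_matrix(eps[n], E) @ M
v = M @ np.array([psi[1], psi[0]])  # should equal (psi_{N+1}, psi_N)
```

Both routes give the same $(\psi_{N+1}, \psi_N)$ for the same disorder realization.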
Furstenberg theorem
We define the norm of a $2 \times 2$ matrix:

$$ \| M \| = \sqrt{\mathrm{Tr}\left( M^\dagger M \right)} $$
For large $N$, the Furstenberg theorem ensures the existence of a non-negative Lyapunov exponent, namely

$$ \gamma(E) = \lim_{N \to \infty} \frac{1}{N} \ln \| M_N \| \ge 0 $$
In absence of disorder, $\gamma(E) = 0$ for $|E| \le 2$. Generically the Lyapunov exponent is positive, $\gamma(E) > 0$, and depends on $E$ and on the distribution of the $\epsilon_n$.
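In practice, $\gamma(E)$ can be estimated by multiplying transfer matrices and periodically rescaling the running product to avoid numerical overflow, accumulating the logarithms of the rescaling factors. A sketch using the Frobenius norm and arbitrary parameter choices ($E = 0$, $W = 1$):

```python
import numpy as np

def lyapunov(E, W, N, seed=0):
    """Estimate gamma(E) = lim (1/N) ln ||M_N|| from a renormalized product."""
    rng = np.random.default_rng(seed)
    M = np.eye(2)
    log_norm = 0.0
    for eps in rng.uniform(-W / 2, W / 2, size=N):
        M = np.array([[eps - E, -1.0], [1.0, 0.0]]) @ M
        s = np.linalg.norm(M)  # Frobenius norm of the running product
        log_norm += np.log(s)  # accumulate ln ||M_N|| ...
        M /= s                 # ... and rescale to keep M of order one
    return log_norm / N

gamma = lyapunov(E=0.0, W=1.0, N=100000)
```

Since the rescaling factors multiply, $\ln \|M_N\|$ is exactly the sum of the accumulated logarithms, so the estimate is unaffected by the renormalization.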
Consequences
 Localization length
Together with the norm $\| M_n \|$, the amplitude $|\psi_n|$ also grows exponentially with $n$. We can write

$$ \ln |\psi_n| \approx \gamma(E)\, n + \sqrt{n}\, \eta $$

(with $\eta$ a random number of order one), which means that $\ln |\psi_n|$ is performing a random walk with a drift.
However, our initial goal is a properly normalized eigenstate at energy $E$. We need to switch from Cauchy boundary conditions, where one sets the initial condition on one side, to Dirichlet or von Neumann boundary conditions, where one sets the behaviour at the two boundaries. The true eigenstate is obtained by matching two "Cauchy" solutions, one on each half of the box, and imposing the normalization. Hence, we obtain a localized eigenstate, and we can identify the localization length with the inverse of the Lyapunov exponent:

$$ \xi_{\mathrm{loc}}(E) = \frac{1}{\gamma(E)} $$
 Fluctuations
We expect strong fluctuations of exponentially small quantities like $|\psi_n|^2$, while their logarithm is self-averaging.