L-8

From Disordered Systems Wiki
'''Goal.''' We introduce the Anderson model and study the statistical properties of its eigenstates. In one dimension disorder leads to localization of all eigenstates, which can be understood using products of random matrices.


= Anderson model (tight-binding model) =


We consider non-interacting particles hopping between nearest-neighbour sites of a lattice in the presence of disorder. The Hamiltonian reads
<math display="block">
H = - t \sum_{\langle i,j\rangle} (c_i^\dagger c_j + c_j^\dagger c_i)
+
\sum_i V_i c_i^\dagger c_i .
</math>


The random variables <math>V_i</math> represent on-site disorder. For simplicity we set the hopping <math>t=1</math>. We assume that the disorder variables are independent and identically distributed, drawn uniformly from the interval
<math display="block">
V_i \in \left(-\frac{W}{2},\frac{W}{2}\right).
</math>


In one dimension, the single-particle Hamiltonian is represented by the tridiagonal matrix
<math display="block">
H =
\begin{pmatrix}
V_1 & -1 & 0 & 0 & \dots \\
-1 & V_2 & -1 & 0 & \dots \\
0 & -1 & V_3 & -1 & \dots \\
0 & 0 & -1 & \ddots & -1 \\
\dots & \dots & \dots & -1 & V_L
\end{pmatrix}.
</math>

We study the statistical properties of the eigenvalue problem
<math display="block">
H\psi = \epsilon \psi ,
\qquad
\sum_{n=1}^L |\psi_n|^2 = 1 .
</math>
In the one-particle sector, this becomes a discrete Schrödinger equation.

== Eigenstates ==

In the absence of disorder the eigenstates are plane waves, delocalized over the whole system. In the presence of disorder, three situations can occur; to distinguish them it is useful to introduce the inverse participation ratio (IPR)
<math display="block">
IPR(q)=\sum_n |\psi_n|^{2 q} \sim L^{-\tau_q} .
</math>
Normalization imposes <math>\tau_1 =0 </math>. For <math>q=0</math> each term equals one, so <math>IPR(0)=L^d</math> and <math>\tau_0 =-d </math>.

* '''Delocalized eigenstates.''' In this case <math>|\psi_n|^{2} \approx L^{-d} </math>. Hence we expect
<math display="block">
IPR(q)=L^{d(1-q)}, \qquad \tau_q=d(1-q) .
</math>

* '''Localized eigenstates.''' In this case <math>|\psi_n|^{2} \approx 1/\xi_{\text{loc}}^{d} </math> on roughly <math>\xi_{\text{loc}}^{d}</math> sites and is almost zero elsewhere. Hence we expect
<math display="block">
IPR(q)= \text{const}, \qquad \tau_q=0 .
</math>

* '''Multifractal eigenstates.''' At the transition (the mobility edge) an anomalous scaling is observed:
<math display="block">
IPR(q)=L^{D_q(1-q)}, \qquad \tau_q=D_q(1-q) .
</math>
Here <math>D_q</math> is a <math>q</math>-dependent multifractal dimension, between zero and <math>d</math>.
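These scalings can be checked numerically by exact diagonalization of the tridiagonal Hamiltonian. Below is a minimal sketch; the system sizes, disorder strengths, sample numbers and seed are illustrative choices, not tied to the lecture.

```python
# Sketch: IPR(q=2) for the 1d Anderson model by exact diagonalization.
# All parameter values below (L, W, n_samples, seed) are illustrative.
import numpy as np

def anderson_ipr(L, W, q=2, n_samples=5, seed=0):
    """Mean IPR(q) over eigenstates and disorder realizations (t = 1)."""
    rng = np.random.default_rng(seed)
    iprs = []
    for _ in range(n_samples):
        V = rng.uniform(-W / 2, W / 2, size=L)
        # Tridiagonal Anderson Hamiltonian with hopping t = 1.
        H = np.diag(V) - np.diag(np.ones(L - 1), 1) - np.diag(np.ones(L - 1), -1)
        _, vecs = np.linalg.eigh(H)          # columns are normalized eigenstates
        iprs.append(np.mean(np.sum(np.abs(vecs) ** (2 * q), axis=0)))
    return float(np.mean(iprs))

# Delocalized (W = 0): IPR(2) ~ 1/L; localized (large W): IPR(2) ~ const in L.
for L in (100, 200, 400):
    print(L, anderson_ipr(L, W=0.0), anderson_ipr(L, W=5.0))
```

For <math>W=0</math> the IPR decreases as <math>1/L</math>, while for strong disorder it saturates at a size-independent value set by the localization length.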


== Density of states (DOS) ==

Without disorder the dispersion relation is
<math display="block">
\epsilon(k) = -2\cos k,
\qquad
k\in(-\pi,\pi),
</math>
so the energy band is <math>-2<\epsilon<2</math>. From the dispersion relation we compute the density of states
<math display="block">
\rho(\epsilon)
=
\int_{-\pi}^{\pi}
\frac{dk}{2\pi}
\delta(\epsilon-\epsilon(k))
=
\frac{1}{\pi\sqrt{4-\epsilon^2}}
\qquad
(\epsilon\in(-2,2)).
</math>
In the presence of disorder the density of states broadens and becomes sample dependent. One can then consider its mean value, averaged over disorder realizations.

== Transfer matrices and Lyapunov exponents ==

=== Product of random variables and the central limit theorem ===

Consider a set of positive iid random variables <math>x_1,x_2,\ldots, x_N</math> with finite mean and variance, and compute their product
<math display="block">
\Pi_N=  \prod_{n=1}^N x_n, \qquad  \ln \Pi_N=  \sum_{n=1}^N \ln x_n .
</math>
For large <math>N</math>, the central limit theorem predicts
<math display="block">
\ln \Pi_N= \gamma N + \gamma_2 \sqrt{N} \chi ,
</math>
where
* <math>\chi</math> is a Gaussian random variable with zero mean and unit variance;
* <math>\gamma, \gamma_2</math> are independent of <math>N</math> and can be written as
<math display="block">
\gamma =\overline{\ln x} < \ln \overline{x}, \qquad \gamma_2= \sqrt{\overline{(\ln x)^2}-(\overline{\ln x})^2} .
</math>

=== Log-normal distribution ===

The distribution of <math>\Pi_N</math> is log-normal:
<math display="block">
P(\Pi_N)\, d \Pi_N = \frac{1}{ \sqrt{2 \pi \gamma_2^2 N}} \exp\left[-\frac{(\ln \Pi_N-\gamma N)^2}{2 \gamma_2^2 N}\right] \frac{d\Pi_N}{\Pi_N} .
</math>

'''Quenched and annealed averages.''' To compute the moments of the log-normal distribution, it is convenient to introduce the variable <math> X \equiv \ln \Pi_N </math>, which is Gaussian distributed,
<math display="block">
p(X) = \frac{1}{ \sqrt{2 \pi \sigma^2}} \exp\left[-\frac{(X-\mu)^2}{2 \sigma^2}\right],
</math>
with <math>\mu =\gamma N</math> and <math>\sigma^2=\gamma_2^2 N</math>. The moments of <math>\Pi_N</math> are then easily computed:
<math display="block">
\overline{\Pi_N^n} = \int dX \, e^{nX} p(X) = \exp\left[\mu n +\sigma^2 \frac{n^2}{2} \right]=\exp\left[\left(\gamma n +\gamma_2^2 \frac{n^2}{2}\right) N \right] .
</math>
For the log-normal distribution the mean <math> \overline{\Pi_N} = \exp[(\gamma+\gamma_2^2/2) N]</math> is larger than the median <math> \Pi_N^{\text{median}} = \exp(\gamma N)</math>, which in turn is larger than the mode <math> \Pi_N^{\text{mode}} = \exp[(\gamma-\gamma_2^2) N]</math>. Hence <math> \Pi_N </math> is not self-averaging, while <math> \ln \Pi_N </math> is. This is why in the following we take quenched averages.
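The contrast between quenched and annealed averages can be seen in a short simulation. The sketch below draws the <math>x_i</math> uniformly from <math>(0.1,1.9)</math> — an arbitrary choice, with mean one so that <math>\ln\overline{x}=0</math>; sample sizes and seed are also illustrative.

```python
# Sketch: products of iid positive random variables; the distribution
# (uniform on (0.1, 1.9)), N, the sample number and the seed are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
samples, N = 2000, 1000
x = rng.uniform(0.1, 1.9, size=(samples, N))
log_pi = np.sum(np.log(x), axis=1)      # ln Pi_N for each realization

gamma = np.mean(np.log(x))              # quenched growth rate: mean of ln x
annealed = np.log(np.mean(x))           # ln of the mean of x (close to 0 here)

# ln Pi_N is self-averaging around gamma*N, which lies strictly below
# N*ln(mean x): the typical product decays even though its mean does not.
print(gamma, annealed, np.std(log_pi) / np.sqrt(N))
```

The fluctuations of <math>\ln \Pi_N</math> grow as <math>\sqrt{N}</math>, as predicted by the central limit theorem.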
== Product of random matrices ==

Let us consider again the Anderson model in 1d. The eigenproblem is well defined in a box of size <math>L</math>, with Dirichlet boundary conditions at the two ends of the box. Here we instead solve the second-order recursion imposing Cauchy boundary conditions on one side of the box.

The discrete Schrödinger equation reads
<math display="block">
-\psi_{n+1} - \psi_{n-1} + V_n \psi_n = \epsilon \psi_n .
</math>
It can be rewritten as
<math display="block">
\begin{pmatrix}
\psi_{n+1} \\
\psi_{n}
\end{pmatrix}
=
T_n
\begin{pmatrix}
\psi_n \\
\psi_{n-1}
\end{pmatrix}
</math>
with the transfer matrix
<math display="block">
T_n =
\begin{pmatrix}
V_n-\epsilon & -1 \\
1 & 0
\end{pmatrix}.
</math>
Iterating the recursion gives
<math display="block">
\begin{pmatrix}
\psi_{n+1} \\
\psi_{n}
\end{pmatrix}
=
\Pi_n
\begin{pmatrix}
\psi_1 \\
\psi_0
\end{pmatrix},
\qquad
\Pi_n = T_n T_{n-1} \cdots T_1 .
</math>
 
Thus the wavefunction is controlled by a product of random matrices.
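A minimal sketch of this recursion, started from the Cauchy data <math>(\psi_0,\psi_1)=(0,1)</math>; the values <math>W=2</math>, <math>\epsilon=0</math>, the chain length and the number of realizations are arbitrary choices.

```python
# Sketch: iterate psi_{n+1} = (V_n - eps) psi_n - psi_{n-1} and measure
# the growth rate of |psi_n|; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
W, eps, n, samples = 2.0, 0.0, 500, 200
drift = []
for _ in range(samples):
    psi_prev, psi = 0.0, 1.0            # Cauchy data (psi_0, psi_1)
    for Vn in rng.uniform(-W / 2, W / 2, size=n):
        psi_prev, psi = psi, (Vn - eps) * psi - psi_prev
    drift.append(np.log(abs(psi)) / n)

# Typical solutions grow exponentially: the mean drift is positive.
print(np.mean(drift))
```

The positive mean drift is the Lyapunov exponent discussed below.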
 
=== Remark ===
 
The transfer matrix formulation rewrites the problem as a recursion, starting from initial data and propagating the solution along the chain.
 
This corresponds to a Cauchy problem rather than a boundary value problem. Nevertheless, the growth properties of the solutions contain direct information about the nature of the eigenstates: exponential growth is associated with localization, while bounded solutions correspond to extended states.
 
== Furstenberg theorem (physical formulation) ==
 
The exponential growth of the product of random matrices
<math display="block">
\Pi_n = T_n T_{n-1} \cdots T_1
</math>
can be understood as a matrix analogue of the law of large numbers. A set of simple sufficient conditions for this behavior is provided by a theorem of Harry Furstenberg.
 
To quantify this growth, we introduce the norm of a matrix <math>A</math> as
<math display="block">
\|A\|^2 = \frac{a_{11}^2 + a_{12}^2 + a_{21}^2 + a_{22}^2}{2}.
</math>
 
In physical terms, the mechanism behind the theorem relies on two key ingredients:
 
=== (i) Stretching ===
 
The transfer matrices must be able, with nonzero probability, to expand vectors.
 
More precisely, there exist realizations of <math>T_n</math> such that for some direction <math>v</math>,
<math display="block">
\|T_n v\| = \sigma_{\max}(T_n)\, \|v\|,
\qquad \sigma_{\max}(T_n) > 1.
</math>
 
Equivalently, <math>\sigma_{\max}^2(T_n)</math> is the largest eigenvalue of
<math display="block">
T_n^T T_n.
</math>
 
This ensures that typical products contain episodes of exponential amplification.
 
=== (ii) Mixing of directions ===
 
At each step, the transfer matrix changes the direction of the vector <math>(\psi_n,\psi_{n-1})</math>.
 
In the clean case, the same matrix is applied at every step, so the angular dynamics is deterministic and can be written as
<math display="block">
\theta_{n+1}=F(\theta_n).
</math>
Once the initial direction is fixed, the whole sequence is fixed. The system retains a perfect memory of its initial orientation.
 
In the disordered case, the transfer matrix varies from step to step, and the angular dynamics becomes
<math display="block">
\theta_{n+1}=F(\theta_n,T_n).
</math>
The direction is then continuously reshuffled and progressively loses memory of its initial value.
 
More importantly, there is no finite set of directions that is invariant under all transfer matrices. As a consequence, the dynamics does not get trapped into special directions and effectively explores the projective space.
 
This absence of invariant directions is what is meant by '''mixing of directions'''.
 
=== Consequence ===
 
Under these conditions, the norm of the product grows exponentially with probability one:
<math display="block">
\lim_{n\to\infty} \frac{1}{n} \log \|\Pi_n\| = \gamma > 0,
</math>
where <math>\gamma</math> is the Lyapunov exponent.
 
This result is the matrix analogue of the fact that the logarithm of a product of independent random variables becomes self-averaging (exercise 15).
 
== Verification of Furstenberg's hypotheses for the Anderson model ==
 
We now check explicitly that the transfer matrices of the one-dimensional Anderson model satisfy the two conditions stated above.
 
The transfer matrices are
<math display="block">
T_n =
\begin{pmatrix}
V_n-\epsilon & -1 \\
1 & 0
\end{pmatrix}.
</math>

=== (i) Stretching ===

We first verify that the matrices expand at least one direction with nonzero probability.

For a generic <math>2\times2</math> matrix, the maximal stretching factor is controlled by the largest eigenvalue of <math>T_n^T T_n</math>. Here
<math display="block">
T_n^T T_n =
\begin{pmatrix}
(V_n-\epsilon)^2+1 & -(V_n-\epsilon) \\
-(V_n-\epsilon) & 1
\end{pmatrix}.
</math>

The trace is
<math display="block">
\mathrm{Tr}(T_n^T T_n) = (V_n-\epsilon)^2 + 2,
</math>
and the determinant is
<math display="block">
\det(T_n^T T_n) = \det(T_n)^2 = 1.
</math>

Therefore the eigenvalues satisfy
<math display="block">
\lambda_+ \lambda_- = 1,
</math>
and the largest one obeys
<math display="block">
\lambda_+ \ge 1.
</math>

More precisely, as soon as <math>V_n-\epsilon \neq 0</math>, one has
<math display="block">
\lambda_+ > 1.
</math>

Thus the matrices expand at least one direction except for the fine-tuned case <math>V_n=\epsilon</math>, which has zero probability for a continuous disorder distribution. The stretching condition is therefore satisfied almost surely.

=== (ii) Mixing of directions ===

We now verify that the angular dynamics is not confined to a finite set of directions.
 
Define the angle
<math display="block">
\theta_n = \arctan\!\left(\frac{\psi_{n-1}}{\psi_n}\right),
</math>
which represents the direction of the vector <math>(\psi_n,\psi_{n-1})</math>.
 
The transfer matrix induces the map
<math display="block">
\theta_{n+1} =
\arctan\!\left(\frac{1}{V_n-\epsilon-\tan\theta_n}\right).
</math>
 
For fixed <math>\theta_n</math>, this expression depends continuously on the random variable <math>V_n</math>. If the disorder has a continuous distribution (for instance Gaussian), the image of a given angle is not restricted to a finite set.
 
As a consequence, the sequence <math>\theta_n</math> is not confined to a finite set of directions. More precisely, there is no finite collection of directions that is invariant under all transfer matrices.
 
The randomness of <math>V_n</math> continuously reshuffles the direction and prevents the dynamics from being trapped into special directions. This ensures that the projective dynamics effectively explores the angular space.
 
This is what is meant by '''mixing of directions''' in the present context.
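The loss of memory of the initial direction can be illustrated by propagating many copies of the same initial angle through independent disorder realizations — a sketch, with arbitrary parameter values (<math>W=2</math>, <math>\epsilon=0</math>, step number, seed).

```python
# Sketch: angular dynamics theta -> arctan(1/(V - eps - tan theta)).
# Without disorder every walker would follow the same deterministic orbit;
# with disorder the directions spread. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(4)
W, eps, n_steps, walkers = 2.0, 0.0, 200, 1000
theta = np.full(walkers, 0.3)           # identical initial direction
for _ in range(n_steps):
    V = rng.uniform(-W / 2, W / 2, size=walkers)
    theta = np.arctan(1.0 / (V - eps - np.tan(theta)))

# A nonzero spread of final angles signals the reshuffling of directions.
print(np.std(theta))
```

In the clean case (<math>V=0</math> for all walkers) the spread would remain exactly zero, since every copy follows the same deterministic map.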
 
=== Conclusion ===
 
The transfer matrices of the one-dimensional Anderson model satisfy both conditions:
 
# they expand at least one direction with nonzero probability;
# they do not confine the angular dynamics to a finite set.
 
As a consequence,
<math display="block">
\lim_{n\to\infty} \frac{1}{n}\log \|\Pi_n\| = \gamma > 0,
</math>
so the Lyapunov exponent is positive for generic disorder; its value depends on <math>\epsilon</math> and on the distribution of the <math>V_i</math>. In the absence of disorder, <math>\gamma=0</math> for <math>\epsilon \in (-2,2)</math>.
 
---
 
== Localization length ==
 
The transfer-matrix recursion corresponds to fixing the wavefunction at one boundary and propagating it through the system.
 
Typical solutions grow exponentially
 
<math display="block">
|\psi_n|\sim e^{\gamma n}.
</math>
 
However a physical eigenstate must satisfy boundary conditions at both ends of the system. Matching two such solutions leads to exponentially localized eigenstates
 
<math display="block">
|\psi_n|\sim e^{-|n-n_0|/\xi_{\text{loc}}}.
</math>
 
The localization length is
 
<math display="block">
\xi_{\text{loc}}(\epsilon)=\frac{1}{\gamma(\epsilon)}.
</math>
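Numerically, <math>\gamma(\epsilon)</math> can be estimated by iterating renormalized transfer matrices, giving direct access to <math>\xi_{\text{loc}}=1/\gamma</math>. A sketch follows; the disorder strengths, the number of steps and the seed are illustrative choices.

```python
# Sketch: Lyapunov exponent from renormalized transfer-matrix products,
# and the localization length xi_loc = 1/gamma. Parameters are illustrative.
import math
import numpy as np

def lyapunov(W, eps=0.0, n_steps=100_000, seed=5):
    """Estimate gamma(eps) from the growth rate of ||T_n ... T_1 v||."""
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0                     # vector (psi_n, psi_{n-1})
    log_growth = 0.0
    for Vn in rng.uniform(-W / 2, W / 2, size=n_steps):
        a, b = (Vn - eps) * a - b, a    # one transfer-matrix step
        s = math.hypot(a, b)
        log_growth += math.log(s)       # accumulate growth, then renormalize
        a /= s
        b /= s
    return log_growth / n_steps

# Stronger disorder localizes more strongly: xi_loc decreases with W.
for W in (1.0, 3.0):
    g = lyapunov(W)
    print(W, g, 1.0 / g)
```

The renormalization at each step avoids floating-point overflow while preserving the accumulated logarithm of the growth.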


==== Consequences ====

Thus in one dimension arbitrarily weak disorder localizes all eigenstates. This result is consistent with the scaling theory of localization discussed earlier, which predicts that for <math>d\le2</math> disorder inevitably drives the system toward the insulating regime.

== Fluctuations ==

Together with the norm <math>\|\Pi_n\|</math>, the amplitude <math>|\psi_n|</math> also grows exponentially with <math>n</math>. More precisely,
<math display="block">
\ln |\psi_n|  \sim \gamma n + \gamma_2 \chi \sqrt{n},
</math>
with <math>\chi</math> a Gaussian number of zero mean and unit variance, so that the logarithm of the wavefunction performs a random walk with a positive drift. Quantities such as
<math display="block">
|\psi_n|,\quad \|\Pi_n\|,\quad G
</math>
show strong sample-to-sample fluctuations, while their logarithm is self-averaging.

Latest revision as of 13:36, 23 March 2026
