The numerical methods for finding (the largest) eigenvalues and (the largest) Lyapunov exponents (LEs) look similar.
The power method applies the matrix $B$ repeatedly to a vector $z$, and the growth (or shrinkage) rate of the vector approximates the largest eigenvalue: $\frac {u^T B^{m+1}z}{u^T B^{m}z} \approx \lambda_1$, where $u$ is a more or less arbitrarily chosen vector.
The largest LE is computed similarly, by finding the growth rate of a vector under repeated application of certain Jacobian matrices $D_t$.
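(To fix notation, here is a minimal power-method sketch in Python/NumPy; the matrix $B$ and the vectors $z$, $u$ are placeholders I made up, and $B$ is symmetrized only so that its eigenvalues are real.)

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
B = B + B.T                  # symmetrize: real eigenvalues (placeholder choice)
z = rng.standard_normal(5)   # starting vector z
u = rng.standard_normal(5)   # the "somehow arbitrarily chosen" probe vector u

for m in range(100):
    z_new = B @ z
    ratio = (u @ z_new) / (u @ z)      # = u^T B^{m+1} z_0 / (u^T B^m z_0)
    z = z_new / np.linalg.norm(z_new)  # rescale only to avoid overflow

print(ratio)                                  # -> the eigenvalue of largest magnitude
print(np.max(np.abs(np.linalg.eigvalsh(B))))  # magnitude check via a library routine
```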
Q1:
Is the QR method (for eigenvalues) similar to the power method?
i.e. does QR work as follows: when we use QR to find the eigenvalues of a matrix $B$, we let $Q_{s+1}=B Q_s$, where $Q_s=[ q_1 \dots q_n]$ and the $q_i$ are tangent vectors that form a basis of the tangent space;
then we compute $Q_m = B^m Q_0$ and decompose it as $Q_m = QR$ (where the columns of $Q$ are unit vectors); the diagonal entries of $R$ are then the growth rates of the tangent vectors under the transformation by $B$, and therefore the eigenvalues? $^{1}$
What confuses me is that the QR method seems to be more complicated than stated above, and involves Hessenberg matrices and Householder reflections; I am not sure what role these two play in QR.
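(For concreteness, here is a sketch of the power-method-like scheme I describe above; as far as I understand it is usually called orthogonal or simultaneous iteration, and the Hessenberg/Householder machinery belongs to the practical QR algorithm, which this sketch omits. The matrix is again a made-up symmetric placeholder, assumed to have eigenvalues of distinct magnitude.)

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
B = B + B.T                      # symmetric placeholder, so eigenvalues are real
Q = np.eye(5)                    # initial orthonormal basis Q_0

for s in range(500):
    Z = B @ Q                    # grow the whole basis at once: Z = B Q_s
    Q, R = np.linalg.qr(Z)       # re-orthonormalize; R records the growth rates

print(np.abs(np.diag(R)))        # -> |eigenvalues| of B, in decreasing order
print(np.sort(np.abs(np.linalg.eigvalsh(B)))[::-1])  # check against a library routine
```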
Q2:
If my understanding of QR in [1] is correct, then I am confused by the way QR is used to compute LEs.
Since the LEs are (the logarithms of) the eigenvalues of $\Lambda=\lim_{t\to\infty}(T_t^T T_t)^{1/2t}$, where $T_t = D_{t-1}\dots D_0$, it seems we should compute the eigenvalues of $D_{t-1},\dots, D_0$ separately and then multiply them; i.e. we should use QR to compute $Q_{m,t}=(D_t)^m Q_0$ for each $t$ (i.e. apply the same matrix $D_t$ to the tangents $Q_0$ repeatedly), then decompose $Q_{m,t}=Q_t R_t$, and the diagonal entries of $R_t$ will be the eigenvalues of $D_t$. We then multiply the eigenvalues to get the eigenvalues of $T_t^T T_t$, i.e. $(\prod_t R_{t,ii})^2$ for each $i \in \{1,\dots, n\}$.$^2$
But what we actually do is apply different matrices ($D_0, D_1, \dots, D_{t-1}$) to the tangents $Q_0$, i.e. $Q_{t}=(D_{t-1})^1 (D_{t-2})^1\dots (D_{0})^1 Q_0$. Why can we do this? Without applying each $D_t$ sufficiently many times, it seems we cannot get the eigenvalues of $D_t$; how, then, can we get the LEs (which in my eyes are products of eigenvalues of the $D_t$, as in [2])? $^3$
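(Here is a minimal sketch of the procedure I mean by "what we actually do": one Jacobian per step, followed by re-orthonormalization, accumulating the log-growth rates. The example system, the Hénon map with its classic parameters, is my own placeholder, not taken from the reference.)

```python
import numpy as np

a, b = 1.4, 0.3                       # classic Henon parameters (placeholder system)

def step(x, y):                       # the map itself
    return 1.0 - a * x**2 + y, b * x

def jac(x, y):                        # Jacobian D_t at the current point
    return np.array([[-2.0 * a * x, 1.0],
                     [b, 0.0]])

x, y = 0.1, 0.1
Q = np.eye(2)                         # orthonormal tangent basis Q_0
logs = np.zeros(2)
n_steps = 100000

for t in range(n_steps):
    D = jac(x, y)
    Q, R = np.linalg.qr(D @ Q)          # one Jacobian per step, then re-orthonormalize
    logs += np.log(np.abs(np.diag(R)))  # accumulate per-direction growth rates
    x, y = step(x, y)

print(logs / n_steps)                 # ~ [0.42, -1.62] for the Henon attractor
```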
Q3:
Since (whether we use [2] or [3] to compute the LEs) we use the $D_t$ ($D_0,D_1, \dots, D_{t-1}$) at somewhat arbitrarily chosen times $t$ (discrete times, while the actual dynamical system runs in continuous time), the choice of the $t$'s could possibly matter. Does the time step (the duration between $0, 1, \dots, t-2, t-1$) matter for the computed values of the LEs? If it does not (as should be the case), why not?
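(Writing the estimate with an explicit step size $\Delta t$, in my own notation with $N$ reorthonormalization steps and triangular factors $R^{(k)}$:
$$\lambda_i \approx \frac{1}{N\,\Delta t}\sum_{k=1}^{N}\ln\bigl|R^{(k)}_{ii}\bigr|;$$
since each $D_k \approx \exp(J_k\,\Delta t)$ should contribute a log-growth roughly proportional to $\Delta t$, the $\Delta t$ in the denominator seems to cancel that dependence, but I am not sure this is the right way to see it.)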
In short, the three methods look similar, but their relations seem as tangled as intertwined threads to me.
My main reference is §4.8, "The eigenvalue problem", in Conte & de Boor, *Elementary Numerical Analysis*.
Related questions: *What is integrating a variational equation?* and *How to understand the largest Lyapunov exponent?*
