
Without qualification, the term eigenvectors (of a matrix) refers to the column eigenvectors, and these can be computed directly with Eigenvectors[]. To get the row eigenvectors, one can invert the transpose of the matrix returned by Eigenvectors[] (equivalently, take the inverse of JordanDecomposition[][[1]]).

This approach is usually fast enough, but sometimes computing the inverse takes an enormous amount of time compared to just computing the column eigenvectors. This can happen when the column eigenvectors are partially symbolic or involve Root[].

Is there a better way to compute the row eigenvectors of a matrix? In particular, is there a way to compute the row eigenvectors as fast as the column eigenvectors?
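For concreteness, the inversion-based approach described above can be sketched as follows (the 2×2 matrix here is just a hypothetical example):

```mathematica
mat = {{1., 1.}, {0., 2.}};
rightVecs = Eigenvectors[mat];             (* rows are the column (right) eigenvectors *)
leftVecs = Inverse[Transpose[rightVecs]];  (* rows are the row (left) eigenvectors *)
(* each row v of leftVecs should satisfy v.mat == λ v *)
Chop[leftVecs.mat - DiagonalMatrix[Eigenvalues[mat]].leftVecs]
```

It is this Inverse[] call that can become the bottleneck for symbolic or Root[]-laden eigenvectors.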

J. M.'s missing motivation
Tyson Williams
  • Assuming by "row eigenvector" you mean the left eigenvector, you can calculate this by calculating the regular (right) eigenvector of Transpose[a]. – bill s Nov 12 '13 at 15:36
  • Just compute directly the eigenvectors of the transpose matrix. – Daniel Lichtblau Nov 12 '13 at 15:36
  • Or compute the SVD, which gives you both the left and right eigenvectors. – rm -rf Nov 12 '13 at 15:47
  • @rm-rf I'm not sure. For example, eigenvectors are not orthogonal for non-Hermitian matrices (Hermitian matrices are out of scope here, since their left and right eigenvector sets coincide), but the SVD always returns two orthogonal sets. For definiteness, one can compare the eigenvectors and SVD of {{1, 1}, {0, 2}}. – ybeltukov Nov 12 '13 at 18:08
  • It should be noted that "one can invert the transpose of the matrix returned by Eigenvectors[]" will fail if the matrix is defective. – J. M.'s missing motivation Mar 23 '18 at 14:31
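ybeltukov's caveat about the SVD can be checked directly on the matrix he suggests (a sketch, not part of the original thread):

```mathematica
m = {{1., 1.}, {0., 2.}};
Eigenvectors[m]  (* the rows are not orthogonal for this non-Hermitian matrix *)
{u, s, v} = SingularValueDecomposition[m];
(* u and v are always orthogonal, so their columns cannot coincide
   with the eigenvectors here *)
Chop[Transpose[u].u]  (* identity matrix *)
```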

2 Answers


To compute left eigenvectors ( = "row eigenvectors") you can use

Eigenvectors@Transpose[A]

See also Daniel's answer here.
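A quick sanity check with a small non-symmetric matrix (the matrix is a hypothetical example):

```mathematica
a = {{0., 1.}, {-2., -3.}};
{lvals, lvecs} = Eigensystem[Transpose[a]];
(* each row v of lvecs should satisfy v.a == λ v *)
Chop[lvecs.a - DiagonalMatrix[lvals].lvecs]  (* zero matrix *)
```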

ybeltukov
  • I don't think the left eigenvectors are the same as the transpose of the right eigenvectors. What you are saying is true only for Hermitian matrices. – Kartik Chhajed Jan 23 '21 at 04:47
  • @KartikChhajed Please note that Eigenvectors@Transpose[A] is Eigenvectors[Transpose[A]], i.e. eigenvectors of the transposed matrix, not the transpose of the right eigenvectors. – ybeltukov Jan 27 '21 at 10:20

Let me provide a wrapper routine for generating the full eigensystem (eigenvalues, and left and right eigenvectors) of a matrix. This uses the undocumented built-in LAPACK routines for doing so:

(* recombine the split real/imaginary parts LAPACK returns for a real input matrix
   into complex eigenvectors; s = ±1 selects the sign convention for left/right vectors *)
SetAttributes[makeVecs, HoldAll];
makeVecs[vals_, vecs_, s_, rQ_] := With[{n = Length[vals]}, 
    If[TrueQ[rQ], (* rQ is True when the input matrix was real *)
       Do[Switch[Sign[Im[vals[[k]]]], 0, Null, (* real eigenvalue: vector already real *)
                 1, vecs[[k]] += s I vecs[[k + 1]], (* first of a conjugate pair *)
                 -1, vecs[[k]] = Conjugate[vecs[[k - 1]]]], {k, n}]]]

Options[getEigensystem] = {Mode -> Automatic};

getEigensystem[mat_?SquareMatrixQ, opts : OptionsPattern[]] := 
   Module[{m = mat, chk, ei, ev, lm, lv, n, rm, rQ, rv}, n = Length[mat];
          Switch[OptionValue[Mode],
                 Right | Automatic, {lm, rm} = {"N", "V"},
                 Left, {lm, rm} = {"V", "N"},
                 All, {lm, rm} = {"V", "V"},
                 _, {lm, rm} = {"N", "V"}];
          LinearAlgebra`LAPACK`GEEV[lm, rm, m, ev, ei, lv, rv, chk, rQ];
          If[! TrueQ[chk], Message[getEigensystem::eivec0]; Return[$Failed, Module]];
          If[rQ, ev += I ei];
          Switch[OptionValue[Mode],
                 Right | Automatic, rv = ConjugateTranspose[rv];
                                    makeVecs[ev, rv, 1, rQ]; {ev, rv},
                 Left, lv = ConjugateTranspose[lv];
                       makeVecs[ev, lv, -1, rQ]; {ev, lv},
                 All, {lv, rv} = ConjugateTranspose /@ {lv, rv};
                      makeVecs[ev, rv, 1, rQ]; makeVecs[ev, lv, -1, rQ];
                      {ev, rv, lv},
                 _, rv = ConjugateTranspose[rv];
                    makeVecs[ev, rv, 1, rQ]; {ev, rv}]]

Test:

m = {{1., -2., 1., 9., 6.},
     {8., 5., 3., 7., 7.},
     {-9., -3., 9., 5., -2.},
     {-5., -4., 6., 8., -6.},
     {-3., 9., -2., 6., -3.}};

{vals, rvecs, lvecs} = getEigensystem[m, Mode -> All];

Norm[m.Transpose[rvecs] - Transpose[rvecs].DiagonalMatrix[vals]]
   4.16257*10^-14

Norm[lvecs.m - DiagonalMatrix[vals].lvecs]
   2.73754*10^-14
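As a side check (not part of the original answer), the left and right eigenvector sets returned for distinct eigenvalues should be biorthogonal, which can be verified with:

```mathematica
(* rows of lvecs pair off with rows of rvecs;
   off-diagonal dot products vanish up to rounding *)
Chop[lvecs.Transpose[rvecs]]  (* a diagonal matrix *)
```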
J. M.'s missing motivation
  • As a bonus, one can then compute the eigenvalue condition numbers as MapThread[1/(Normalize[#1].Normalize[#2]) &, {lvecs, rvecs}]. – J. M.'s missing motivation Nov 15 '19 at 01:00
  • Is there any resource available on the usage of these Lapack functions in Mathematica? Are the function signatures the same as you'd use them in Fortran (In that case one can lookup netlib/lapack)? – Galilean Jan 20 '22 at 12:56
  • @Galilean, unfortunately, these are undocumented, but I've found that their syntax more or less follows the FORTRAN originals, modulo dimension arguments. – J. M.'s missing motivation May 17 '22 at 18:31