
Bug introduced after 5.0, in or before 8.0, and persisting through 13.3.


In the following example, I notice that the wrong smallest 2 eigenvalues are returned when calculating from a sparse matrix. But the result is correct if we

  • calculate the smallest 3,4,... eigenvalues from the sparse matrix
  • calculate any number of smallest eigenvalues from the corresponding ordinary (dense) matrix

I have found many cases with this behavior. Why does this happen, and is there a remedy?

In the code below, the 104×104 matrix of the minimal example I found is imported from Pastebin because it exceeds the length limit of a post.

Import["https://pastebin.com/raw/PpDfY3EQ", "Package"];
mysparsemat = mymat;
mymat = Normal[mysparsemat];
m = 2;
Reverse@First[Eigensystem[mymat, -m]]
Reverse@First[Eigensystem[mysparsemat, -m]]
m = 4;
Reverse@First[Eigensystem[mymat, -m]]
Reverse@First[Eigensystem[mysparsemat, -m]]

The result of the above code is

{-0.712477 + 1.02863*10^-16 I, 0.712477 + 1.46577*10^-16 I}

{0.712477 - 1.44294*10^-11 I, 0.712656 - 2.12258*10^-11 I}

{-0.712477 + 1.02863*10^-16 I, 0.712477 + 1.46577*10^-16 I, -0.712656 - 1.05578*10^-16 I, 0.712656 + 6.49144*10^-16 I}

{-0.712477 - 5.10777*10^-10 I, 0.712477 - 5.44863*10^-12 I, 0.712656 + 3.10198*10^-11 I, -0.712656 + 3.64677*10^-10 I}

So the wrong 2nd smallest eigenvalue returned is actually the correct 3rd or 4th smallest eigenvalue. (The eigenvalues should be real and doubly degenerate in absolute value, as expected from the nature of the original problem.)

Update
As you might have noticed in the comments below, the first guess of blaming the Arnoldi algorithm itself does not hold up. (And MATLAB, which also uses ARPACK, gives the correct result.)


A bug report has been filed for this in the Wolfram community.
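Since Mathematica and MATLAB's eigs() both sit on top of ARPACK, a third cross-check is SciPy's wrapper for the same library. The sketch below uses a stand-in diagonal sparse matrix rather than the actual 104×104 matrix, which would first have to be exported from Mathematica (e.g. in Matrix Market format); shift-invert mode (sigma=0) is the robust way to ask ARPACK for the smallest-magnitude eigenvalues:

```python
# Cross-check via SciPy's ARPACK wrapper, the same library that
# Mathematica and MATLAB use under the hood. A stand-in sparse matrix
# with known eigenvalues 1..10 is used here; the 104x104 matrix from
# the question would have to be exported from Mathematica first.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Diagonal sparse matrix with well-separated eigenvalues 1, 2, ..., 10.
A = sp.diags(np.arange(1.0, 11.0)).tocsc()

# Shift-invert about 0 targets the eigenvalues closest to zero, which
# is far more reliable for ARPACK than which="SM".
vals = spla.eigs(A, k=2, sigma=0, return_eigenvectors=False)
print(sorted(vals.real))  # the two smallest-magnitude eigenvalues
```
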

xiaohuamao
  • Please do not use the [tag:bugs] tag until your observations have been confirmed to be a bug. – J. M.'s missing motivation Mar 30 '18 at 13:43
  • @J. M. Nice edit! Thank you so much! – xiaohuamao Mar 30 '18 at 13:55
  • 3
    I think the degeneracy is trapping the Arnoldi algorithm. Look at what happens for e.g. Eigenvalues[sm, -2, Method -> {"Arnoldi", "Criteria" -> "Magnitude", "StartingVector" -> SparseArray[1 -> 1, 104]}] and Eigenvalues[sm, -2, Method -> {"Arnoldi", "Criteria" -> "Magnitude", "StartingVector" -> SparseArray[52 -> 1, 104]}]. – J. M.'s missing motivation Mar 30 '18 at 14:06
  • @J.M. Thank you. I'm not familiar with the details of the algorithm. Playing with your 2 lines to calculate, say, the 10 smallest, it looks like, depending on the starting vector, the results of the 2 lines can be either the same or just complementary (the absolute-value degeneracy). So probably the guess is that the algorithm fails (manages) to find sufficient/proper starting vectors when calculating only 2 (more than 2) smallest eigenvalues. – xiaohuamao Mar 31 '18 at 01:03
  • 2
    As heretical as this will sound: if you have access to MATLAB, try using eigs() on your matrix. At the very least, it's a way to be sure that it's the algorithm itself that is going flat, and not Mathematica's specific implementation (even tho both are using ARPACK under the hood). – J. M.'s missing motivation Mar 31 '18 at 01:11
  • 7
    OK, I asked a friend with MATLAB to run this example. eigs() is perfectly capable of returning the two smallest eigenvalues (in magnitude). I don't know what's up with Mathematica now. – J. M.'s missing motivation Mar 31 '18 at 08:45

2 Answers


It appears that the proximity of the eigenvalues causes Eigensystem with the default parameters to be inaccurate. This can be fixed by increasing the Arnoldi basis size to 30:

Import["https://pastebin.com/raw/PpDfY3EQ", "Package"]; 
mysparsemat = mymat; 
mymat = Normal[mysparsemat]; 
m = 2; 
Reverse[First[Eigensystem[mymat, -m]]]
Reverse[First[Eigensystem[mysparsemat, -m, Method -> {"Arnoldi", "BasisSize" -> 30}]]]

which then gives the desired result

{-0.712477 + 1.02679*10^-16 I, 0.712477 + 2.53294*10^-16 I}

{-0.712477 + 3.34996*10^-16 I, 0.712477 + 1.11772*10^-15 I}
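For what it's worth, "BasisSize" corresponds to ARPACK's NCV parameter (the number of Arnoldi basis vectors), which SciPy exposes as the ncv keyword of its eigs wrapper. A sketch on a stand-in 104×104 diagonal matrix whose four smallest eigenvalues mimic the nearly degenerate ±0.712477 / ±0.712656 pairs from the question:

```python
# "BasisSize" in Mathematica's Arnoldi method maps to ARPACK's NCV,
# the ncv keyword in SciPy. Stand-in 104x104 matrix with clustered
# smallest-magnitude eigenvalues, mimicking the pairs in the question.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

diag = np.concatenate([[-0.712477, 0.712477, -0.712656, 0.712656],
                       np.linspace(1.0, 5.0, 100)])
A = sp.diags(diag).tocsc()

# A larger Krylov basis (ncv=30) gives ARPACK more room to separate
# the clustered eigenvalues, analogous to "BasisSize" -> 30 above.
vals = spla.eigs(A, k=2, sigma=0, ncv=30, return_eigenvectors=False)
print(sorted(vals.real))  # the +-0.712477 pair
```
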
Omrie

I found a similar issue in a non-numeric calculation:

sol = C[1] Cos[\[Lambda] x] + C[2] Sin[\[Lambda] x] + C[3] Cosh[\[Lambda] x] + C[4] Sinh[\[Lambda] x];
row1 = CoefficientArrays[sol /. {x -> 0}, {C[1], C[2], C[3], C[4]}][[2]];
row2 = CoefficientArrays[D[sol, x] /. {x -> 0}, {C[1], C[2], C[3], C[4]}][[2]];
row3 = CoefficientArrays[D[sol, x, x] /. {x -> l}, {C[1], C[2], C[3], C[4]}][[2]];
row4 = CoefficientArrays[D[sol, x, x, x] /. {x -> l}, {C[1], C[2], C[3], C[4]}][[2]];

sparsematrix = {row1, row2, row3, row4};
Det[sparsematrix]
normalmatrix = Normal[sparsematrix];
Det[normalmatrix] // FullSimplify

It gives 0 and 2 \[Lambda]^6 (1 + Cos[l \[Lambda]] Cosh[l \[Lambda]]), respectively. The first result is unexpected. It took me a while to figure out that Det[] behaves differently, and fails, when sparse vectors are collected into an ordinary list rather than a SparseArray matrix. It can be fixed with Det[SparseArray[sparsematrix]]. I consider this a bug: Det[] apparently understands that the matrix is 4×4 (leaving out one row in the above example makes Det[] complain about the matrix not being square), so one assumes the input is handled properly, but the result is simply wrong.

Rainer Glüge