7

Is there a good approximation for the largest (in magnitude) eigenvalue of the discrete Laplacian ($\nabla^2$) obtained from a nonuniform structured grid (like the one pictured)?

Of course, one can always use general methods such as Lanczos or power iteration, but I am interested in cheaper, possibly less accurate, ways.

I found exact expressions for the eigenvalues and eigenvectors of the Laplacian in the case of a uniform grid in "Finite Difference Methods for Ordinary and Partial Differential Equations", p. 21. Obviously these formulas can't be extended to the nonuniform grid case.
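For reference, the 1D version of that formula (for the standard 3-point stencil with spacing $h=1/(m+1)$, $m$ interior points, and homogeneous Dirichlet boundary conditions) is, if I copied it right,

$$\lambda_k = \frac{2}{h^2}\bigl(\cos(k\pi h)-1\bigr) = -\frac{4}{h^2}\sin^2\frac{k\pi h}{2}, \qquad k=1,\dots,m,$$

so the largest magnitude approaches $4/h^2$ as $h\to 0$; for the 5-point 2D stencil the eigenvalues are sums of the 1D ones, and the largest magnitude approaches $8/h^2$.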

I've tried the Gershgorin disk theorem. I downloaded this MATLAB script and created the 5-point 2D Laplacian for a uniform grid using the following lines:

m = 10;               % interior grid points per direction
h = 1/(m+1);          % uniform mesh spacing on the unit square
I = speye(m);
e = ones(m,1);
T = spdiags([e -4*e e],[-1 0 1],m,m);   % 1D second-difference block
S = spdiags([e e],[-1 1],m,m);          % coupling to neighboring grid lines
A = (kron(I,T) + kron(S,I)) / h^2;      % 5-point 2D Laplacian

The result looks like this:

Gershgorin Discs Plot for Laplacian on uniform grid

The disks bound the spectrum well enough.
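To quantify "well enough", here is a quick check in Python/NumPy (a sketch of the same kron construction, with the unit-square spacing as an assumption) comparing the Gershgorin bound with the exact largest-magnitude eigenvalue:

```python
import numpy as np

m = 10                     # interior grid points per direction
h = 1.0 / (m + 1)          # uniform mesh spacing on the unit square

# Same kron construction as the MATLAB snippet above, in dense NumPy.
T = np.diag(-4.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
S = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
A = (np.kron(np.eye(m), T) + np.kron(S, np.eye(m))) / h**2

# Gershgorin: every eigenvalue lies in some disk centered at A[i,i]
# with radius equal to the sum of |off-diagonal| entries in row i.
radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
bound = np.max(np.abs(np.diag(A)) + radii)      # equals 8/h^2 for this matrix

exact = np.max(np.abs(np.linalg.eigvalsh(A)))   # A is symmetric
print(bound, exact)
```

For this matrix the bound $8/h^2$ overestimates the true largest magnitude by only a few percent.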

Alexander
  • What about Gerschgorin disks? Should be very cheap. – shuhalo Mar 30 '12 at 10:55
  • @Martin That will be a poor estimate in the case of the Laplacian. – Alexander Mar 30 '12 at 13:00
  • on which domain? Even for uniform grids, the exact spectrum is known only for few domains. – Arnold Neumaier Mar 30 '12 at 13:27
  • @ArnoldNeumaier The domain is rectangular. – Alexander Mar 30 '12 at 13:36
  • @Alexander: If you have a rectangular domain, why do you need the unstructured grid? – Paul Mar 30 '12 at 14:26
  • @Paul: As far as I understand, a grid can be both structured and non-uniform (e.g. http://upload.wikimedia.org/wikipedia/commons/2/23/Rectilinear_grid.svg)? Maybe I'm wrong in terminology. – Alexander Mar 30 '12 at 14:34
  • @Alexander: You're right... it can be structured or unstructured. I'm just curious if there is an absolute need in your application for an unstructured grid, especially since the domain is ideal for structured grids. – Paul Mar 30 '12 at 14:44
  • Is this question really asking for the largest (highest energy) mode of the discrete Laplacian? This is a nearly meaningless quantity except for stability of stationary iterative methods. Or are you looking for the "fundamental mode" which is the smallest nonzero eigenvalue (the largest eigenvalue of the inverse Laplacian). If you are indeed looking for the largest eigenvalue, what is your intended purpose and how accurate do you want it to be? – Jed Brown Apr 01 '12 at 17:36
  • Does anybody know why this Laplacian matrix is diagonalisable and has only real spectrum? – nullas Jul 23 '12 at 12:00

3 Answers

3

For a rectangular domain and homogeneous boundary conditions, one knows the exact eigenfunctions $u_k(x)$ of the Laplacian.

Even if you have a nonuniform mesh with nodes $x_l$ ($l\in L$), you can discretize the eigenfunction for the wanted eigenvalue: evaluate it at your grid points to get a vector $v$ in your discrete representation, with components $v_l=u(x_l)$. Because the function being discretized is an eigenfunction of the continuous problem, $v$ is a good approximation of the corresponding eigenvector of the discrete matrix $A$.

Then calculate the corresponding Rayleigh quotient $v^TAv/v^Tv$. The RQ is an $O(\epsilon^2)$ approximation to an eigenvalue if $v$ is an $O(\epsilon)$ approximation to the corresponding eigenvector. Thus you should get a fairly good value for the corresponding eigenvalue of the discrete Laplacian, and the finer the grid, the better.

In other cases where exact eigenfunctions are known, one can do the same.
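In 1D this recipe looks like the sketch below (the node distribution, perturbation size, and mode number are illustrative assumptions; the nonuniform 3-point stencil is the standard one). Since the question asks for the largest-magnitude eigenvalue, we sample the most oscillatory eigenfunction the grid can represent:

```python
import numpy as np

# Interval (0,1), homogeneous Dirichlet BCs, a smoothly nonuniform grid.
n = 50                                   # interior nodes
t = np.linspace(0.0, 1.0, n + 2)
x = t + 0.03 * np.sin(2.0 * np.pi * t)   # perturbed but still monotone nodes
h = np.diff(x)                           # local mesh sizes

# Standard 3-point finite-difference Laplacian on a nonuniform grid.
A = np.zeros((n, n))
for i in range(n):
    hl, hr = h[i], h[i + 1]
    A[i, i] = -2.0 / (hl * hr)
    if i > 0:
        A[i, i - 1] = 2.0 / (hl * (hl + hr))
    if i < n - 1:
        A[i, i + 1] = 2.0 / (hr * (hl + hr))

# Sample the most oscillatory continuous eigenfunction sin(k*pi*x), k = n.
v = np.sin(n * np.pi * x[1:-1])

# Rayleigh quotient as the eigenvalue estimate.
rq = v @ A @ v / (v @ v)

# Compare with the true largest-magnitude eigenvalue of A.
lams = np.linalg.eigvals(A)
exact = lams[np.argmax(np.abs(lams))].real
print(rq, exact)
```

Note that for the highest mode on a nonuniform grid the estimate degrades somewhat, because the true extreme eigenvector localizes in the fine-mesh region rather than matching the continuous eigenfunction everywhere; it still gives the right order of magnitude at negligible cost.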

Arnold Neumaier
3

On a uniform mesh, the largest eigenvalues correspond to functions that are highly oscillatory. For oscillatory functions of the form $e^{i\vec k \cdot \vec x}$, the eigenvalues of the continuous Laplacian are $|\vec k|^2$. On uniform meshes, the highest wave vector representable on the mesh satisfies $k_i = \frac{2\pi}{\lambda_i}$, where $\lambda_i=2h_i$ is twice the mesh size in direction $i$. So we get an eigenvalue of $\pi^2\sum_{i=1}^d \frac{1}{h_i^2}$.

This construction does not carry over exactly to non-uniform meshes, but I would expect the largest eigenvalues to still be associated with oscillatory eigenfunctions localized around the area where the mesh is finest. If so, then for meshes like the one you show, the largest eigenvalue would be approximately $\pi^2\sum_{i=1}^d \frac{1}{h_{min,i}^2}$, where $h_{min,i}$ is the minimum mesh size in coordinate direction $i$.
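A quick numerical check of this estimate on a uniform grid (Python/NumPy sketch; the grid size is illustrative). One caveat to keep in mind: the estimate uses the continuous symbol $|\vec k|^2$, while the discrete 5-point stencil replaces $k_i^2$ by $\frac{4}{h_i^2}\sin^2(k_i h_i/2)$, so the computed largest eigenvalue of the matrix sits near $\sum_i 4/h_i^2$, roughly a factor $\pi^2/4 \approx 2.5$ below the estimate (which is consistent with the $h=1$, $d=2$ observation in the comments):

```python
import numpy as np

m = 30
h = 1.0 / (m + 1)          # uniform spacing on the unit square

# 5-point discrete Laplacian (same construction as in the question).
T = np.diag(-4.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
S = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
A = (np.kron(np.eye(m), T) + np.kron(S, np.eye(m))) / h**2

estimate = np.pi**2 * (1.0 / h**2 + 1.0 / h**2)   # pi^2 * sum_i 1/h_i^2, d = 2
actual = np.max(np.abs(np.linalg.eigvalsh(A)))
print(estimate, actual, estimate / actual)
```

So as a cheap upper estimate it is safe, just not tight for the finite-difference matrix itself.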

Wolfgang Bangerth
  • I don't get why for uniform case you use sum over $h_i$, what are $d$ and $i$ in this case? – Alexander Apr 01 '12 at 11:06
  • @Alexander: d=space dimension. The sum over $i=1...d$ comes from the fact that the Laplace operator is the sum of second derivatives in coordinate direction $i=1...d$. – Wolfgang Bangerth Apr 01 '12 at 12:19
  • Does this formula work only in the limit $h \to 0$? For $h=1$ and $d=2$ it gives $2\pi^2$ which is far from true. – Alexander Apr 01 '12 at 12:45
  • It is only approximately true, yes. I would expect that $h$ needs to be smaller than, say, 1/10th the diameter of the domain. How big is your domain? – Wolfgang Bangerth Apr 01 '12 at 21:48
0

If you are looking for a strictly numerical way to estimate the largest eigenvalue of a symmetric matrix, you can use Jacobi rotations. A nice treatment of the algorithm is given in Chap. 8 of Matrix Computations by Golub and Van Loan (for non-symmetric eigenvalue problems you basically have to resort to something like a power method, covered in Chap. 7 of the same book). Essentially the algorithm pushes the "energy" of the matrix onto the main diagonal, so the symmetric matrix converges to the diagonal matrix of its eigenvalues. Because the matrix is already diagonally dominant, the convergence of this method should be relatively fast, even though it causes fill-in that turns the pentadiagonal matrix into a dense one; the fill-in entries will be small.

The form of a Jacobi rotation makes it cheap to apply even to a dense matrix: it is an identity matrix modified in four entries, $c$ in two diagonal positions and $\pm s$ in two off-diagonal positions. You won't have to run the algorithm to full convergence if you just want a decent estimate of the largest eigenvalue. You could run a few sweeps until the off-diagonal entries are "small enough" for your purposes, then sweep through the main diagonal of the matrix and pick off the largest value.
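A minimal sketch of these sweeps in Python/NumPy (the grid size and sweep count are illustrative; for clarity each rotation is applied as a full matrix product, whereas a practical implementation would update only the two affected rows and columns):

```python
import numpy as np

def jacobi_sweeps(A, sweeps=10):
    """Cyclic Jacobi: zero each off-diagonal pair of a symmetric matrix
    in turn with a plane rotation (Golub & Van Loan, Chap. 8)."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-12:
                    continue
                # Rotation angle that annihilates A[p, q].
                tau = (A[q, q] - A[p, p]) / (2.0 * A[p, q])
                t = (1.0 if tau >= 0 else -1.0) / (abs(tau) + np.hypot(1.0, tau))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = t * c
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
    return A

# Small 5-point Laplacian on a uniform grid of the unit square.
m, h = 6, 1.0 / 7
T = np.diag(-4.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
S = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
A = (np.kron(np.eye(m), T) + np.kron(S, np.eye(m))) / h**2

D = jacobi_sweeps(A)
estimate = np.max(np.abs(np.diag(D)))            # largest diagonal entry
exact = np.max(np.abs(np.linalg.eigvalsh(A)))    # reference value
print(estimate, exact)
```

With fewer sweeps the diagonal entries are only approximate eigenvalues, but by Gershgorin the error is bounded by the remaining off-diagonal mass, so stopping early still gives a usable estimate.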

Andrew Winters