
I was trying to run test cases for CG and I need to generate:

  • symmetric positive definite matrices
  • of size > 10,000
  • FULL DENSE
  • Using only matrix indices and, if necessary, one vector (e.g. $A(i,j) = \dfrac{x(i) - x(j)}{i+j}$)

  • With condition number less than 1000.

I have tried:

  1. Generating random matrices using A = rand(N,N) and then forming A'*A to make the result symmetric positive definite. [This increases the condition number.]

  2. Using the vector approach as shown above, but I can't seem to find a function of (x, i, j) that guarantees both symmetry and positive definiteness.

After much experimentation, I came up with:

$a(i,j) = \dfrac{x(i)+x(j)}{(i-1)^2+(j-1)^2}$ if $i \neq j$

$a(i,i) = x(i)$ if $i = j$

But this stays positive definite only up to about 500x500 (a quick chol-based check is sketched after this list).

  3. xLATMR. [With all the grading and scaling, it's too difficult to understand, especially since I cannot understand the underlying linear algebra.]
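Here is a quick way to see where the formula above breaks down (my own illustrative sketch; the size n and the choice of x are arbitrary). chol reports failure as soon as the matrix stops being positive definite:

    % Build the attempted matrix and test positive definiteness with chol.
    n = 600;
    x = 1 + rand(n,1);                 % positive diagonal entries
    A = zeros(n);
    for i = 1:n
        for j = 1:n
            if i == j
                A(i,i) = x(i);
            else
                A(i,j) = (x(i) + x(j)) / ((i-1)^2 + (j-1)^2);
            end
        end
    end
    [~, p] = chol(A);                  % p == 0 iff A is positive definite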

Can someone give me a function in x (vector) and i,j (indices) which meets the above requirements?

Inquest
    Try adding a large number $\alpha$ (on the order of the condition number) to the diagonal entries of your matrix. This is equivalent to adding $\alpha$ to each of your eigenvalues and should improve the condition number. – Aron Ahmadia Mar 28 '12 at 14:10
  • @AronAhmadia, Works brilliantly! Thanks! However, what large number should I add? I tried N itself and it worked up to 5000x5000 (just finished the simulations); will using a + N*eye(N,N) ensure that it works for all sizes beyond 5000? Can you convert your comment into an answer? – Inquest Mar 28 '12 at 17:03

4 Answers


To get a dense positive definite matrix with condition number $c$ cheaply, pick a diagonal matrix $D$ whose diagonal consists of numbers from $[1,c]$ (these will be the eigenvalues), with $1$ and $c$ each chosen at least once, and pick a vector $u$. Then apply a similarity transformation, via a Householder transformation, to form the matrix $A:=(I-tuu^T)D(I-tuu^T)$, where $t=2/u^Tu$.

To form this matrix with $O(n^2)$ operations, compute $v:=Du$, $s:=t^2u^Tv/2$, $w:=tv-su$ in $O(n)$ operations and then $A$ as $A=D-uw^T-wu^T$. (If you choose $u$ as the all-one vector, the number of multiplications needed is only $O(n)$.)
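A minimal MATLAB/Octave sketch of this recipe (the size n, the target condition number c, and the particular random choices of $D$ and $u$ are my illustrative assumptions, not part of the answer):

    n = 10000;                          % matrix size
    c = 1000;                           % target condition number
    d = [1; c; 1 + (c-1)*rand(n-2,1)];  % eigenvalues in [1,c]; 1 and c occur
    u = randn(n,1);                     % Householder vector
    t = 2 / (u'*u);
    v = d .* u;                         % v = D*u
    s = t^2 * (u'*v) / 2;
    w = t*v - s*u;
    A = diag(d) - u*w' - w*u';          % A = (I - t*u*u')*D*(I - t*u*u')

Since the transformation is orthogonal, cond(A) equals $c$ up to roundoff.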

Note that the behavior of CG depends a lot on the eigenvalue distribution, which you can easily confirm by varying $D$.
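For instance, reusing the sketch above (again my illustration, not part of the answer), a spectrum with two tight clusters lets CG converge in a couple of iterations, while eigenvalues spread over $[1,c]$ cost many more:

    d = [ones(n/2,1); c*ones(n/2,1)];   % two clusters: CG converges in ~2 steps
    % d = linspace(1, c, n)';           % spread over [1,c]: many more steps
    v = d .* u;  s = t^2*(u'*v)/2;  w = t*v - s*u;
    A = diag(d) - u*w' - w*u';
    b = randn(n,1);
    [~, ~, ~, iters] = pcg(A, b, 1e-8, n);  % compare iters for the two spectra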

(Adding $\alpha$ to the diagonal of an arbitrary symmetric matrix $A$ needs $\alpha$ comparable to the largest eigenvalue in magnitude, which is hard to compute; so one would have to choose $\alpha=\|A\|$, but then the condition number will generally be very small, not a realistic test case for CG.)

Arnold Neumaier
  • This is perfect! – Inquest Mar 30 '12 at 16:16
  • How should I change the algorithm so that it produces matrices when I provide the eigenvalues? For instance, I want a non-symmetric matrix with eigenvalues $1,-1,2,-2,\dots,50,-50$. – Inquest Mar 31 '12 at 09:03
  • $D=\mathrm{Diag}(-50,\dots,-1,1,\dots,50)$ in the above recipe gives you these eigenvalues, but with this choice the matrix is now symmetric indefinite, and CG wouldn't solve such a problem. And in your comment you even ask for a nonsymmetric matrix, where CG also doesn't apply. Maybe it is best to pose a new question with precisely what you want. – Arnold Neumaier Mar 31 '12 at 16:42

Try adding a large number $\alpha$ (on the order of the norm of the matrix) to the diagonal entries of your matrix. This is equivalent to adding $\alpha$ to each of your eigenvalues and should improve the condition number by reducing the ratio between the largest and smallest eigenvalues.
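A quick MATLAB demonstration of the effect (the size N and the rand-based test matrix are my illustrative choices):

    N = 1000;                      % kept small so cond() is cheap to evaluate
    B = rand(N,N);
    A = B'*B;                      % symmetric positive definite, ill-conditioned
    alpha = norm(A);               % shift on the order of ||A||
    cond(A)                        % typically very large
    cond(A + alpha*eye(N))         % at most about 2 after the shift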

Aron Ahmadia

I'm not sure how you would do it with just one vector, but with two random vectors $x$ and $\theta$ of size $N$, you can produce a positive semi-definite matrix via $$ U=\prod_i R_i(\theta_i), \qquad A=U\,\mathrm{diag}(\mathrm{abs}(x))\,U^\ast, $$ where $R_i(\cdot)$ is a rotation in the plane of the axes $i$ and $(i+1) \bmod N$.

If you want to improve the condition number, you can add a fixed positive value to $x$ and rescale if need be.
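A minimal MATLAB sketch of this construction (the size, the random draws, and the shift of 1 for conditioning are my illustrative choices). Each rotation touches only two rows and two columns, so the whole build costs $O(N^2)$:

    N = 2000;
    x = randn(N,1);
    theta = 2*pi*rand(N,1);
    A = diag(abs(x) + 1);            % shift by 1 to keep the matrix definite
    for i = 1:N
        j = mod(i, N) + 1;           % axis i+1 (mod N)
        G = [cos(theta(i)) sin(theta(i)); -sin(theta(i)) cos(theta(i))];
        A([i j], :) = G * A([i j], :);    % rotate rows i and j
        A(:, [i j]) = A(:, [i j]) * G';   % rotate columns i and j
    end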

Deathbreath
  • I'm sorry, but my math isn't all that great yet. Does $U^*$ denote transpose? What does rotation in the plane mean? Also, storing 2 vectors will be a little expensive, but still, this looks like a very interesting way. – Inquest Mar 28 '12 at 17:10
  • $R_i(\theta_i)=\left(\begin{matrix} 1 & & & & & \\ & \ddots & & & & \\ & & \cos \theta_i & \sin \theta_i & & \\ & & -\sin \theta_i & \cos \theta_i & & \\ & & & & 1 & \\ & & & & & \ddots \end{matrix}\right)$ is the rotation matrix. $U^\ast$ denotes the complex conjugate transposed matrix (assuming all is real, it's just the transpose). – Deathbreath Mar 28 '12 at 17:41

An entirely different way to do it would be like this: consider a random vector $x$, then $A=xx^T$ is a rank-one matrix with $N-1$ eigenvalues equal to zero and one strictly positive eigenvalue equal to $\|x\|^2$ with eigenvector $x$. It's also symmetric.

To construct a dense SPD matrix, add together many such rank-one matrices. In other words, if you have $M$ vectors $x_i$ (e.g. random vectors), then $$ A = \sum_{i=1}^M x_i x_i^T $$ is SPD if $M\ge N$ and the vectors $x_i$ span all of $\mathbb{R}^N$ (if they do not, then $A$ is only positive semidefinite). You can verify whether the first $N$ vectors you draw from the random process are linearly independent using Gram–Schmidt successive orthogonalization, but you are also very likely to get an SPD matrix if you simply choose $M \gg N$ vectors.
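A short MATLAB sketch of this construction (N and M are my illustrative choices; note that, unlike the recipe in the accepted answer, this does not directly control the condition number):

    N = 1000;
    M = 2*N;                % M >> N makes full rank overwhelmingly likely
    X = randn(N, M);        % columns are the random vectors x_i
    A = X*X';               % A = sum over i of x_i*x_i', symmetric
    [~, p] = chol(A);       % p == 0 confirms positive definiteness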

Wolfgang Bangerth