Questions tagged [linear-algebra]

Questions on the algorithmic/computational aspects of linear algebra, including the solution of linear systems, least squares problems, eigenproblems, and other such matters.

1141 questions
21 votes, 1 answer

Diagonal update of a symmetric positive definite matrix

$A$ is an $n \times n$ symmetric positive definite (SPD) sparse matrix and $G$ is a sparse diagonal matrix. $n$ is large ($n > 10000$) and the number of nonzeros in $G$ is usually 100–1000. $A$ has been factorized in Cholesky form as…
luogyong
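A diagonal update $A + G$ with $G = \sum_i g_i e_i e_i^T$ (assuming every $g_i > 0$) can be folded into an existing Cholesky factor as a sequence of rank-one updates, one per nonzero of $G$. A dense $O(n^2)$-per-update sketch of the classical algorithm (the function names are illustrative; sparse codes such as CHOLMOD do the same while exploiting sparsity):

```python
import numpy as np

def chol_update(L, v):
    """Return lower-triangular L' with L' L'^T = L L^T + v v^T.
    Classical O(n^2) rank-one update of a Cholesky factor."""
    L, v = L.copy(), v.copy()
    n = len(v)
    for k in range(n):
        r = np.hypot(L[k, k], v[k])
        c, s = r / L[k, k], v[k] / L[k, k]
        L[k, k] = r
        # update the rest of column k, then the workspace vector
        L[k+1:, k] = (L[k+1:, k] + s * v[k+1:]) / c
        v[k+1:] = c * v[k+1:] - s * L[k+1:, k]
    return L

def chol_update_diag(L, g):
    """Fold in A + diag(g), g >= 0, one rank-one update per nonzero."""
    for i in np.flatnonzero(g):
        e = np.zeros(len(g))
        e[i] = np.sqrt(g[i])
        L = chol_update(L, e)
    return L
```

For negative $g_i$ the analogous downdate applies instead, but it can fail when $A + G$ loses positive definiteness.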
20 votes, 3 answers

Null-space of a rectangular dense matrix

Given a dense matrix $A \in \mathbb{R}^{m \times n}$ with $m \gg n$ and $\max(m) \approx 100000$, what is the best way to find its null-space basis within some tolerance $\epsilon$? Based on that basis, can I then say that certain columns are linearly dependent within…
Alexander
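The usual tool here is the SVD: right singular vectors whose singular values fall below a tolerance span the numerical null space (for $m \gg n$ one can first reduce to the $n \times n$ factor $R$ of a thin QR). A minimal sketch, with `nullspace` an illustrative helper name:

```python
import numpy as np

def nullspace(A, eps=1e-10):
    """Orthonormal basis for the numerical null space of A:
    right singular vectors whose singular values fall below
    eps relative to the largest singular value."""
    U, s, Vt = np.linalg.svd(A, full_matrices=True)
    tol = eps * (s[0] if s.size else 0.0)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T  # columns span the null space
```

A returned basis with $k$ columns says precisely that $k$ of the columns of $A$ are linearly dependent on the others, up to the chosen tolerance.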
17 votes, 2 answers

Stopping criteria for iterative linear solvers applied to nearly singular systems

Consider $Ax=b$ with $A$ nearly singular, meaning there is an eigenvalue $\lambda_0$ of $A$ that is very small. The usual stopping criterion of an iterative method is based on the residual $r_n := b - Ax_n$: the iterations can stop when…
Hui Zhang
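For reference, the standard criterion is a relative residual test; the catch with nearly singular $A$ is that the only general guarantee is $\|x - x_n\| \le \|r_n\| / |\lambda_0|$, so a small residual need not mean a small error. A minimal CG sketch with that test (illustrative, not taken from the answers):

```python
import numpy as np

def cg(A, b, rtol=1e-8, maxiter=1000):
    """Plain conjugate gradients with the usual relative-residual
    test ||b - A x_n|| <= rtol * ||b||.  For nearly singular A this
    can stop while the error is still as large as ||r_n|| / lambda_0."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    for _ in range(maxiter):
        if np.sqrt(rs) <= rtol * bnorm:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```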
11 votes, 2 answers

Eigenvalue decomposition of the sum: A (symmetric) + D (diagonal)

Suppose $A$ is a real symmetric matrix and its eigenvalue decomposition $V \Lambda V^T$ is given. It is easy to see what happens with the eigenvalues of the sum $A + cI$ where $c$ is a scalar constant (see this question). Can we draw any conclusion…
Ivan
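For a general diagonal $D$ nothing as clean as the scalar shift holds, but Weyl's inequalities at least bracket each eigenvalue: $\lambda_i(A) + \min_j d_j \le \lambda_i(A+D) \le \lambda_i(A) + \max_j d_j$. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                  # real symmetric
d = rng.uniform(0.0, 3.0, n)

lam_A = np.linalg.eigvalsh(A)               # ascending
lam_AD = np.linalg.eigvalsh(A + np.diag(d))

# Weyl: lam_i(A) + min(d) <= lam_i(A + D) <= lam_i(A) + max(d)
assert np.all(lam_AD >= lam_A + d.min() - 1e-10)
assert np.all(lam_AD <= lam_A + d.max() + 1e-10)
```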
11 votes, 1 answer

How to detect the multiplicity of eigenvalues?

Suppose $A$ is a general sparse matrix and I want to compute its eigenvalues, but I do not know how to detect their multiplicities. As far as I know, for the special case of finding polynomial roots by the companion-matrix method, we can…
Willowbrook
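Numerically, a multiple eigenvalue shows up as a tight cluster (a double root is typically resolved only to about the square root of machine precision), so in practice one counts multiplicities by grouping computed eigenvalues within a tolerance. A sketch using a companion-matrix example (the polynomial is illustrative):

```python
import numpy as np

# companion matrix of p(x) = (x-1)^2 (x-2) = x^3 - 4x^2 + 5x - 2
C = np.array([[0.0, 0.0,  2.0],
              [1.0, 0.0, -5.0],
              [0.0, 1.0,  4.0]])
lam = np.sort(np.linalg.eigvals(C).real)

# group eigenvalues within a tolerance and count each group's size
tol = 1e-4
groups = np.split(lam, np.where(np.diff(lam) > tol)[0] + 1)
mult = [len(g) for g in groups]   # multiplicities of the clusters
```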
10 votes, 2 answers

Solving a linear system with matrix arguments

We're all familiar with the many computational methods for solving the standard linear system $$Ax=b.$$ However, I'm curious whether there are any "standard" computational methods for solving a more general (finite-dimensional) linear system of the form…
icurays1
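One textbook route for a matrix-argument system such as the Sylvester equation $AX + XB = C$ is vectorization with Kronecker products: $(I_m \otimes A + B^T \otimes I_n)\,\mathrm{vec}(X) = \mathrm{vec}(C)$. This is $O(n^3 m^3)$ and only for illustration; the Bartels–Stewart algorithm (e.g. `scipy.linalg.solve_sylvester`) solves the same equation in $O(n^3 + m^3)$. A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

# vec(AX + XB) = (I_m ⊗ A + B^T ⊗ I_n) vec(X), with column-major vec
K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
X = np.linalg.solve(K, C.flatten(order="F")).reshape((n, m), order="F")

assert np.allclose(A @ X + X @ B, C)
```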
10 votes, 1 answer

How to find interior eigenvalues by Krylov subspace methods?

I am wondering how to find the eigenvalues of a sparse matrix in a given interval $[a, b]$ by an iterative method. To my understanding, Krylov subspace methods find the extreme eigenvalues more readily than the interior…
Willowbrook
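The standard trick is shift-invert: Lanczos/Arnoldi applied to $(A - \sigma I)^{-1}$ converges to the eigenvalues of $A$ nearest the shift $\sigma$, because those become the extreme eigenvalues of the inverted operator. A sketch with SciPy's ARPACK wrapper:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# 1-D Laplacian; its eigenvalues are 2 - 2 cos(k*pi/(n+1)), k = 1..n
n = 200
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")

# shift-invert: find the 5 eigenvalues of A nearest sigma = 2,
# deep in the interior of the spectrum (0, 4)
sigma = 2.0
vals = eigsh(A, k=5, sigma=sigma, return_eigenvectors=False)
```

Each solve with $(A - \sigma I)$ requires a factorization (or an inner iterative solve), which is the price of fast convergence to interior eigenvalues.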
9 votes, 1 answer

Nested dissection on regular grid

When solving sparse linear systems using direct factorization methods, the ordering strategy used significantly impacts the fill-in factor of non-zero elements in the factors. One such ordering strategy is nested dissection. I am wondering if it is…
Victor Liu
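On a regular grid, nested dissection can be written down directly: recursively order each half of the grid first and the separating row/column last. A small sketch producing an elimination order (illustrative only, not a fill-minimizing production ordering such as METIS):

```python
def nd_order(i0, i1, j0, j1):
    """Nested-dissection elimination order for the grid cells
    [i0, i1) x [j0, j1): each half first, the separator last."""
    if i1 - i0 <= 2 and j1 - j0 <= 2:
        return [(i, j) for i in range(i0, i1) for j in range(j0, j1)]
    if i1 - i0 >= j1 - j0:                  # split along rows
        m = (i0 + i1) // 2                  # separator row
        return (nd_order(i0, m, j0, j1)
                + nd_order(m + 1, i1, j0, j1)
                + [(m, j) for j in range(j0, j1)])
    m = (j0 + j1) // 2                      # separator column
    return (nd_order(i0, i1, j0, m)
            + nd_order(i0, i1, m + 1, j1)
            + [(i, m) for i in range(i0, i1)])

order = nd_order(0, 9, 0, 9)                # 9 x 9 grid
assert order[-9:] == [(4, j) for j in range(9)]  # top separator last
```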
9 votes, 1 answer

Rank structure in the Schur complement

I am doing research on the structure of Schur complements and have found an interesting phenomenon. Suppose that $A$ is a 5-point Laplacian. If I use nested dissection ordering and the multifrontal method to compute the LU factorization and then check the…
Willowbrook
9 votes, 4 answers

Generating Symmetric Positive Definite Matrices using indices

I was trying to run test cases for CG and I need to generate symmetric positive definite matrices of size > 10,000, fully dense, using only matrix indices and, if necessary, one vector (like $A(i,j) = \dfrac{x(i) - x(j)}{i+j}$), with condition number…
Inquest
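One index-only recipe (an assumption of mine, not taken from the answers): a Gaussian kernel matrix $A_{ij} = \exp(-((x_i - x_j)/\ell)^2)$ is symmetric positive semidefinite for any point set, and a small diagonal shift makes it safely positive definite, with the shift size also controlling the condition number:

```python
import numpy as np

n = 500  # the question asks for n > 10000; kept small here for speed
x = np.linspace(0.0, 1.0, n)

# Gaussian kernel matrices are symmetric positive semidefinite;
# a small diagonal "nugget" makes the matrix positive definite
# and bounds the condition number by roughly ||A|| / nugget
xi, xj = np.meshgrid(x, x, indexing="ij")
A = np.exp(-((xi - xj) / 0.1) ** 2) + 1e-6 * np.eye(n)
```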
8 votes, 1 answer

Schur's Complement and Inverse of Block Matrices

Assume that we are given a block matrix of the form $$ M = \left[ \begin{array}{cc} A & b \\ b^T & c \\ \end{array} \right] $$ where $b$ is a column vector and $c$ is a scalar. The Schur complement of $A$ in $M$ is given by: $$ s = c - b^T A^{-1}…
Mohammad Fawaz
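For this bordered form the block inverse is explicit: with $s = c - b^T A^{-1} b$, one has $M^{-1} = \begin{bmatrix} A^{-1} + A^{-1} b\, b^T A^{-1}/s & -A^{-1} b/s \\ -b^T A^{-1}/s & 1/s \end{bmatrix}$. A numerical check:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)        # SPD, so A^{-1} exists
b = rng.standard_normal((n, 1))
c = 10.0

M = np.block([[A, b], [b.T, np.array([[c]])]])

Ainv = np.linalg.inv(A)
s = (c - b.T @ Ainv @ b).item()    # Schur complement of A in M

Minv = np.block([
    [Ainv + (Ainv @ b @ b.T @ Ainv) / s, -Ainv @ b / s],
    [-b.T @ Ainv / s,                    np.array([[1.0 / s]])],
])

assert np.allclose(Minv, np.linalg.inv(M))
```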
8 votes, 2 answers

Gauss-Seidel, SOR in practice?

When I learned about SOR, it was mostly given as one of the first examples of iterative methods, and then later the iterative methods that I would end up using would be Krylov subspace methods. Are any of the iterative methods like Gauss-Seidel and…
Kirill
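In practice Gauss-Seidel and SOR mostly survive as multigrid smoothers and as simple preconditioners rather than as standalone solvers, but the method itself is only a few lines. A dense sketch (convergent e.g. for strictly diagonally dominant or SPD matrices):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, maxiter=1000):
    """Gauss-Seidel: sweep through the equations, using each
    updated entry immediately within the same sweep."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    bnorm = np.linalg.norm(b)
    for _ in range(maxiter):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        if np.linalg.norm(b - A @ x) <= tol * bnorm:
            break
    return x
```

SOR is the same sweep with the update relaxed by a factor $\omega \in (0, 2)$.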
7 votes, 1 answer

Eigenvalues of a $d\times d$ diagonal+rank1 matrix in $O(d)$ time?

Suppose $h$ is a vector of $d$ positive numbers adding up to 1. I'm looking for a $O(d)$ algorithm to estimate eigenvalues of the following diagonal + rank1 matrix: $$A=2\operatorname{diag}(h)-hh^T$$ Empirically it appears that $2h$ gives a good…
Yaroslav Bulatov
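This is the classical diagonal-plus-rank-one eigenproblem from divide-and-conquer: each eigenvalue is a root of the secular equation and, for a rank-one downdate, sits between consecutive entries of the sorted diagonal, which is consistent with $2h$ being a good $O(d)$ first estimate. A numerical check of the interlacing:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 10
h = rng.uniform(0.1, 1.0, d)
h /= h.sum()                       # positive entries summing to 1

A = 2 * np.diag(h) - np.outer(h, h)
lam = np.linalg.eigvalsh(A)        # ascending
dvals = np.sort(2 * h)

# rank-one downdate interlacing: lam_1 <= d_1 <= lam_2 <= ... <= d_d
assert np.all(lam <= dvals + 1e-12)
assert np.all(lam[1:] >= dvals[:-1] - 1e-12)
```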
7 votes, 1 answer

Which preconditioners make Richardson iteration convergent?

Suppose we solve an $m\times n$ full-rank system of equations $Ax=b$ by iterating the following for a small enough $\mu>0$ $$x=x+\mu B(b-Ax)$$ Is there a nice description of kinds of $B$ which make this iteration convergent? For instance, for $n=1$,…
Yaroslav Bulatov
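Writing $e_{k+1} = (I - \mu BA)\, e_k$ for the error, the iteration converges for every $b$ iff the spectral radius $\rho(I - \mu BA) < 1$; a convenient sufficient condition is that the spectrum of $BA$ lies in the open right half-plane, with $\mu$ small enough. $B = A^T$ always qualifies, since $BA = A^T A$ has positive eigenvalues. A square-case sketch (an illustration of the criterion, not a characterization of all admissible $B$):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)        # full rank
B = A.T                            # then B A = A^T A has positive spectrum
b = rng.standard_normal(n)

# for B = A^T any 0 < mu < 2 / lambda_max(A^T A) gives rho < 1
mu = 1.0 / np.linalg.norm(A, 2) ** 2
rho = np.max(np.abs(np.linalg.eigvals(np.eye(n) - mu * B @ A)))
assert rho < 1

x = np.zeros(n)
for _ in range(5000):
    x += mu * B @ (b - A @ x)      # preconditioned Richardson iteration
```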
7 votes, 0 answers

Choice between using Moore-Penrose inverse and G2 inverse

The Moore-Penrose inverse of an arbitrary matrix $X\in \mathbb{R}^{n \times p}$ is defined as the matrix $X^\dagger$ satisfying all of the Moore-Penrose conditions, namely \begin{align} (1) \;\;\;& XX^\dagger X=X \\ (2) \;\;\;& X^\dagger X X^\dagger =…
waic
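For comparison: a G2 (reflexive generalized) inverse needs only conditions (1) and (2) and is cheaper to obtain, while the Moore-Penrose inverse additionally requires $XX^\dagger$ and $X^\dagger X$ to be symmetric, and is what `numpy.linalg.pinv` computes. A quick check of all four conditions:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((6, 4))
Xd = np.linalg.pinv(X)

# the four Moore-Penrose conditions
assert np.allclose(X @ Xd @ X, X)          # (1)
assert np.allclose(Xd @ X @ Xd, Xd)        # (2)
assert np.allclose((X @ Xd).T, X @ Xd)     # (3) X Xd symmetric
assert np.allclose((Xd @ X).T, Xd @ X)     # (4) Xd X symmetric
```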