I have a time-series observation dataset that has been distorted. I want to recover the best possible approximation of the original signal. Disclaimer: I know only the basics of linear algebra, so please bear with me.

I have a good model of the distortion, represented by a square matrix. In theory, if I can find an inverse of the transformation, I can recover the original signal. However, the model of the distortion as a matrix is ill-conditioned (i.e. almost singular).

Is it possible to take this matrix model, and generate an invertible matrix that is an approximation of the original matrix?

Justin G

2 Answers


The usual way to approach inverting a matrix that is rank deficient is to use a generalized inverse.

This means that if your matrix equation is: $$ y = \mathbf{A}x $$ where $\mathbf{A}$ is an $n \times m$ matrix, then you can use, for example, the Moore-Penrose pseudo-inverse, $\mathbf{A}^{+}$: $$ \mathbf{A}^{+} = (\mathbf{A}^*\mathbf{A})^{-1} \mathbf{A}^* $$ if a left inverse is required (this form needs $\mathbf{A}^*\mathbf{A}$ to be invertible, i.e. $\mathbf{A}$ must have full column rank), or $$ \mathbf{A}^{+} = \mathbf{A}^*(\mathbf{A}\mathbf{A}^*)^{-1} $$ for the right inverse (which needs full row rank).

If $n = m$, then either can be used.

However, as per the link above, there are many different possibilities for selecting an inverse in the rank-deficient / non-square case because the system of equations is underdetermined.

As per @MBaz's comment, you can calculate this using the singular value decomposition of $\mathbf{A}$.
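For instance, a minimal NumPy sketch of the SVD route (the matrix `A`, the signal `y`, and the tolerance `rcond` here are illustrative placeholders, not values from your problem):

```python
import numpy as np

# Illustrative near-singular distortion matrix and distorted observation.
A = np.array([[1.0, 0.99],
              [0.99, 0.98]])
y = np.array([1.0, 1.0])

# Moore-Penrose pseudo-inverse via the SVD A = U S V^T:
# invert only the singular values above a tolerance, zero the rest.
U, s, Vt = np.linalg.svd(A)
rcond = 1e-10
s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)
A_pinv = Vt.T @ np.diag(s_inv) @ U.T

# Equivalently: A_pinv = np.linalg.pinv(A, rcond=rcond)
x_hat = A_pinv @ y   # minimum-norm least-squares estimate of the original signal
```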

Peter K.
  • Yes, I calculated the Moore-Penrose pseudo-inverse with Mathematica. However, the results were unsatisfactory, i.e. the pseudo-inverse was not very good at recovering the original signal. – Justin G Jun 27 '16 at 21:02

Simple Answer

A simple way to solve this is by using diagonal loading (see this answer for a related example). If your square matrix is $R$, then instead of inverting $R$, invert $R + \sigma I$, where $\sigma$ is a small value.
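A minimal sketch of this in NumPy (here `R`, `y`, and the loading value `sigma` are placeholders; $\sigma$ would be tuned to your noise level):

```python
import numpy as np

# Illustrative ill-conditioned square model and distorted observation.
R = np.array([[1.0, 0.999],
              [0.999, 1.0]])
y = np.array([2.0, 2.0])

sigma = 1e-3   # small loading term

# Invert R + sigma*I instead of R. Solving the linear system is
# numerically preferable to forming the explicit inverse.
x_hat = np.linalg.solve(R + sigma * np.eye(R.shape[0]), y)
```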

Why This Helps

An ill-conditioned matrix (as you probably know) has some near-zero eigenvalues. The condition number of a matrix is the ratio of its largest to its smallest singular value (for a symmetric matrix like $R$, the ratio of the largest to the smallest eigenvalue magnitude), so you can see why near-zero eigenvalues cause problems. Diagonal loading shifts every eigenvalue of $R$ up by $\sigma$, moving the smallest ones away from zero and improving the condition number of the matrix.
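A quick numerical check of this effect, using the same illustrative $R$ and $\sigma$ as above:

```python
import numpy as np

R = np.array([[1.0, 0.999],
              [0.999, 1.0]])
sigma = 1e-3

# Eigenvalues of R + sigma*I are those of R shifted up by sigma,
# so the largest-to-smallest ratio shrinks.
print(np.linalg.cond(R))                       # ~2000
print(np.linalg.cond(R + sigma * np.eye(2)))   # ~1000
```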

Gillespie
  • The problem with this method is that it introduces a basis-dependent bias which is hard to reason about. I wouldn't recommend it. – Jazzmaniac Feb 29 '24 at 15:09
  • It's not perfect, but if computation time is part of the equation, it is less demanding than SVD, for example. It is therefore used in some practical cases. – Gillespie Feb 29 '24 at 16:46