
A linear system of equations has the form $Ax = b$, where a matrix $A$ and a vector $b$ are given, and I wish to find a solution vector $x$. Suppose that the system $Ax = b$ has no feasible solution. Then I wish to find a solution vector $x$ such that $Ax$ is “as close as possible” to $b$. Of course, this notion is not clearly defined. Is there an LP formulation of this problem?

David Ketcheson
sara

4 Answers


I assume what you're looking for is some $x$ such that $$\big\|Ax-b\big\|_2$$ is minimal. If so, what you need is the pseudoinverse of $A$, which you can obtain from the singular value decomposition $U \Sigma V^\mathsf{T} = A$. The least-squares solution is then given by $$x = V\Sigma^+U^\mathsf{T}b,$$ where $\Sigma^+$ is obtained from $\Sigma$ by transposing it and inverting its non-zero (diagonal) entries.

In Matlab or Octave, you could do this as follows:

[U,S,V] = svd(A);
ind = abs(S) > eps*S(1,1);  % keep only singular values above a tolerance
S(ind) = 1./S(ind);         % invert them in place
x = V*S'*U'*b;              % x = V*Sigma^+*U'*b; S' handles non-square A
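For a small full-column-rank system, the same least-squares solution can be checked by hand via the normal equations $A^\mathsf{T}Ax = A^\mathsf{T}b$. A minimal sketch in plain Python, using exact rational arithmetic; the matrix $A$ and vector $b$ are made-up illustration data:

```python
from fractions import Fraction as F

# Made-up overdetermined system: fit a line c0 + c1*t to the points
# (t, y) = (0, 0), (1, 1), (2, 3); Ax = b has no exact solution.
A = [[F(1), F(0)],
     [F(1), F(1)],
     [F(1), F(2)]]
b = [F(0), F(1), F(3)]

# Normal equations: (A^T A) x = A^T b (valid when A has full column rank).
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 system exactly by Cramer's rule.
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x = [(Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det,
     (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det]

print(x)  # exact least-squares coefficients
```

Note that forming $A^\mathsf{T}A$ squares the condition number, which is why the SVD route above is preferred for ill-conditioned problems.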
Geoff Oxberry
Pedro
  • Just re-read the question and noticed that it's an LP problem. So I guess my suggestion isn't that useful after all... – Pedro Jan 24 '12 at 13:56

You would need a more formal description of "as close as possible". You could, for example, minimize the total "amount of violation":

$\min\; \mathbf{1}^\mathsf{T}(z_1 + z_2)$

$Ax \le b + z_1$

$Ax \ge b - z_2$

$z_1, z_2 \ge 0$

where $z_1$ and $z_2$ are vectors of slack variables, one entry per equation.
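As a sketch of how this becomes a standard-form LP $\min c^\mathsf{T}y$ s.t. $Gy \le h$ over the stacked variable $y = (x, z_1, z_2)$; the tiny $A$ and $b$ below are made-up illustration data and no solver is invoked:

```python
# Stack variables as y = (x, z1, z2); the LP reads
#   min  0^T x + 1^T z1 + 1^T z2
#   s.t.  A x - z1 <= b      (i.e.  A x <= b + z1)
#        -A x - z2 <= -b     (i.e.  A x >= b - z2)
#         z1, z2 >= 0        (handled as variable bounds)
A = [[1.0], [1.0]]          # made-up 2x1 infeasible system: x = 0 and x = 1
b = [0.0, 1.0]
m, n = len(A), len(A[0])

c = [0.0] * n + [1.0] * (2 * m)             # objective over (x, z1, z2)

G, h = [], []
for i in range(m):                           # rows for  A x - z1 <= b
    G.append(A[i] + [-1.0 if j == i else 0.0 for j in range(m)] + [0.0] * m)
    h.append(b[i])
for i in range(m):                           # rows for -A x - z2 <= -b
    G.append([-a for a in A[i]] + [0.0] * m
             + [-1.0 if j == i else 0.0 for j in range(m)])
    h.append(-b[i])

print(len(c), len(G), len(G[0]))  # n + 2m variables, 2m constraints
```

These arrays can then be handed to any LP solver that accepts inequality-form input.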

Dan
Martin

I should mention that you can approach this problem by using an iterative method to minimize $\|Ax-b\|^2$ such as the Landweber method or (faster and better) the method of conjugate gradients. This is probably faster than using the pseudoinverse (depending on implementation and many other details).
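A minimal sketch of the Landweber iteration $x \leftarrow x - \tau A^\mathsf{T}(Ax-b)$ in plain Python; the system and step size $\tau$ are made-up illustration values, and $\tau$ must satisfy $\tau < 2/\sigma_{\max}(A)^2$ for convergence:

```python
# Landweber iteration x <- x - tau * A^T (A x - b) for min ||Ax - b||^2.
# A, b, tau are made-up illustration values; tau must be below
# 2 / sigma_max(A)^2 for the iteration to converge.
A = [[1.0, 0.0],
     [1.0, 1.0],
     [1.0, 2.0]]
b = [0.0, 1.0, 3.0]
m, n = len(A), len(A[0])

tau = 0.1
x = [0.0] * n
for _ in range(2000):
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
         for i in range(m)]                                    # residual Ax - b
    g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
    x = [x[j] - tau * g[j] for j in range(n)]

print(x)  # approaches the least-squares solution of this system
```

Conjugate gradients applied to the normal equations reaches the same limit in far fewer iterations, which is why it is the better choice in practice.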

Dirk
  • I advise against using the Conjugate Gradient method in that case. You can use the Conjugate Residual method instead for these kinds of systems. CG will blow up as soon as the inapproximable part of the residual dominates the overall residual norm. – shuhalo Jan 25 '12 at 00:04

Pedro and Dirk's solutions are best if you want to minimize the $\ell_{2}$-norm of $Ax - b$. To solve it as an LP, you would minimize the $\ell_{1}$-norm of $Ax-b$, using a standard reformulation; I think this approach is what Martin was trying to express.

Take

$$\begin{alignat}{1} \min_{\mathbf{x}} & \quad\big\|Ax - b\big\|_{1}, \end{alignat}$$

and express it using sum notation:

$$\begin{alignat}{1} \min_{\mathbf{x}} & \quad\sum_{i}\Big|\sum_{j}a_{ij}x_{j} - b_{i}\Big|. \end{alignat}$$

Then reformulate:

$$\begin{alignat}{1} \min_{\mathbf{z}\geq\mathbf{0}, \mathbf{x}} & \quad\mathbf{1}^{T}\mathbf{z}, \\ \textrm{s.t.} & \quad z_{i} \geq \sum_{j}a_{ij}x_{j}-b_{i}, \quad \forall{i}, \\ & \quad z_{i} \geq b_{i} - \sum_{j}a_{ij}x_{j}, \quad \forall{i}. \end{alignat}$$
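The reformulation works because, for any fixed $x$, the constraints force $z_{i} \geq \big|\sum_{j}a_{ij}x_{j}-b_{i}\big|$, and minimization drives each $z_i$ down to that bound, so the LP objective equals the $\ell_1$ norm. A quick sanity check in plain Python on made-up data:

```python
# For fixed x, the constraints z_i >= +(Ax - b)_i and z_i >= -(Ax - b)_i
# are tightest at z_i = |(Ax - b)_i|, so sum(z) equals ||Ax - b||_1.
A = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]          # made-up data
b = [1.0, 0.0, 2.0]
x = [0.5, -1.0]           # any fixed candidate x

r = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) - b_i
     for row, b_i in zip(A, b)]              # residual Ax - b
z = [max(r_i, -r_i) for r_i in r]            # optimal slacks z_i = |r_i|

assert sum(z) == sum(abs(r_i) for r_i in r)  # LP objective == l1 norm
print(sum(z))
```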

Geoff Oxberry