From the documentation, we learn that:
LeastSquares[m, b], when b is a vector, is equivalent to ArgMin[Norm[m.x - b], x]. I am wondering whether, in terms of performance (say, convergence and speed), there is a reason to prefer one method over the other when we have to deal with large, sparse matrices and vectors and are solving the problem numerically. I know the question is a bit general, but any clue is more than welcome.
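For concreteness, here is a minimal sketch of the comparison I have in mind. The dimensions and density are arbitrary choices, and the Element[x, Vectors[n, Reals]] form assumes a version whose optimization framework supports vector variables (12.0 or later):

n = 1000;
(* random sparse matrix, roughly 5 nonzeros per row, plus the identity
   so the system has full rank and a unique least-squares solution *)
m = SparseArray[
     Thread[RandomInteger[{1, n}, {5 n, 2}] -> RandomReal[1, 5 n]],
     {n, n}] + SparseArray[{i_, i_} -> 1., {n, n}];
b = RandomReal[1, n];

(* direct least-squares solver *)
AbsoluteTiming[x1 = LeastSquares[m, b];]

(* the same problem posed as an optimization over a vector variable *)
AbsoluteTiming[x2 = ArgMin[Norm[m.x - b], Element[x, Vectors[n, Reals]]];]

Norm[x1 - x2]/Norm[x1]  (* should be small if both succeed *)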
UPDATE
Having decided to use LeastSquares, I have a further question: is LeastSquares parallelizable? For example, by making a compiled version of the function with
CompilationTarget -> "C", RuntimeAttributes -> {Listable}, Parallelization -> True
Should I expect a significant speed-up?
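As far as I can tell, LeastSquares is not among the functions Compile can translate, so a compiled Listable version would fall back to MainEvaluate on every call and gain nothing. If the real goal is many right-hand sides with the same m, here is a sketch of two alternatives (the sizes are arbitrary assumptions):

(* all right-hand sides in one call: b may itself be a matrix,
   so the solver handles the columns together *)
bs = RandomReal[1, {100, n}];   (* 100 right-hand sides, as rows *)
xs = Transpose[LeastSquares[m, Transpose[bs]]];

(* or distribute independent solves across subkernels *)
xsPar = ParallelMap[LeastSquares[m, #] &, bs];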
LeastSquares is very much preferred for large numerical matrices (both sparse and full). It's specifically optimized for that. – Sjoerd Smit Jun 20 '20 at 10:08

Is there a way to force LeastSquares to look for real solutions only? – Dario Rosa Jun 20 '20 at 11:59

LeastSquares is just solving the normal equations (I think that's the term), which is the more efficient version of multiplying by the pseudoinverse matrix. So there is no way to place a restriction on it. – Daniel Lichtblau Jun 20 '20 at 14:12

LeastSquares avoids the normal equations; those typically have a much larger condition number. More stable algorithms are based on QR-decomposition or SVD. – Henrik Schumacher Jun 20 '20 at 14:17
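A quick sketch of the conditioning point raised in the last two comments, using a Hilbert matrix as the classic ill-conditioned example (my own illustration of why the normal equations are risky, not a claim about how LeastSquares is implemented internally):

a = N[HilbertMatrix[{12, 8}]];   (* severely ill-conditioned *)
y = RandomReal[1, 12];

s = SingularValueList[a, Tolerance -> 0];
{Max[s]/Min[s], (Max[s]/Min[s])^2}  (* condition number of a, and of Transpose[a].a *)

(* normal equations: may emit an ill-conditioning warning *)
xNE = LinearSolve[Transpose[a].a, Transpose[a].y];
xLS = LeastSquares[a, y];
{Norm[a.xNE - y], Norm[a.xLS - y]}   (* residuals to compare *)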