I have been using the QR-based approach described in this link to find the null spaces of rectangular, and possibly sparse, matrices that arise from coupling conditions between different domains, such as interface compatibility conditions. I am also aware of this link, where others suggest efficient ways to compute the null spaces of dense matrices.

One of these compatibility matrices, call it Bm (as in the code below), is an m-by-n matrix with m < n.

Following the first link above, I could compute the null space of these rectangular matrices accurately in MATLAB.
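The idea, as I understand it from that link, is the column-permuted QR factorization

$$B_m E = Q \begin{bmatrix} R_1 & R_{12} \\ 0 & 0 \end{bmatrix},$$

which gives

$$B_m E \begin{bmatrix} X \\ I \end{bmatrix} = Q \begin{bmatrix} R_1 X + R_{12} \\ 0 \end{bmatrix} = 0 \qquad \text{for } X = -R_1^{-1} R_{12},$$

so the columns of $E \begin{bmatrix} X \\ I \end{bmatrix}$ form a basis for the right null space. In code: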

tol_rank = 1e-16;                            % absolute tolerance on the diagonal of R
[QBm, RBm, EBm] = qr(Bm);                    % column-permuted QR: Bm*EBm = QBm*RBm
rank_Bm = nnz(abs(diag(RBm)) > tol_rank);    % numerical rank estimate
Csz = size(RBm,2) - rank_Bm;                 % dimension of the null space
R1  = RBm(1:rank_Bm, 1:rank_Bm);             % leading triangular block
R12 = RBm(1:rank_Bm, rank_Bm+1:end);         % trailing columns (same rows as R1)
[LR1, UR1] = lu(R1);                         % factor R1 (already triangular, so cheap)
X = -(UR1 \ (LR1 \ R12));                    % solve R1*X = -R12
Lrm = sparse(EBm)*[X;                        % undo the column permutation
                   speye(Csz)];

where Lrm is the right null space of Bm. As a result, norm(Bm*Lrm) was on the order of round-off for the problems I have encountered so far. But now, after a different mathematical transformation, the null spaces calculated with the above code are not as accurate as before. For instance, norm(Bm*Lrm) used to be on the order of machine epsilon, around 1e-15, for the Bm matrices I was using previously; with the same code, norm(Bm*Lrm) is now of the order 1e-7. What could be the reason for this change in accuracy? Which other paths can I follow to increase the accuracy of the calculated null space?

Having said that, I have also tried the built-in SVD of MATLAB, and it also does not produce a null space that is accurate to round-off level. I also had a look at the code in this link and saw the same problem there. So it appears to me that the scaling of the Bm matrices has deteriorated badly, but I could not understand the reason.
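For reference, the SVD-based variant I tried looks roughly like this (a sketch, not exactly what I ran; the rank tolerance mirrors MATLAB's default in rank()):

[U, S, V] = svd(full(Bm));                       % dense SVD; svd() does not accept sparse input
s = diag(S);
rank_Bm = nnz(s > max(size(Bm))*eps(max(s)));    % numerical rank, tolerance as in rank()
Lrm = V(:, rank_Bm+1:end);                       % orthonormal basis of the right null space

This basis shows the same behaviour: norm(Bm*Lrm) is far from round-off on the new matrices.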

Umut Tabak
  • Could you please report relative residuals rather than absolute ones? Just to rule out that this is the cause of the problem. – Federico Poloni Dec 05 '23 at 19:26
  • @FedericoPoloni, thank you, but what exactly do you mean by relative residuals? – Umut Tabak Dec 06 '23 at 08:35
  • BTW, I also wrote a similar comment on this link where you replied :) https://mathoverflow.net/questions/253995/how-can-one-construct-a-sparse-null-space-basis-using-recursive-lu-decomposition/253997?noredirect=1#comment672418_253997 – Umut Tabak Dec 06 '23 at 08:36
  • A relative residual in this context is norm(Bm*Lrm) / norm(Bm) / norm(Lrm). That's what the theory guarantees to be small. – Federico Poloni Dec 06 '23 at 09:13
  • Thank you for the response, norm(full(Bm*Lrm)) / norm(full(Bm)) / norm(full(Lrm)) results in 4.006491111094881e-17 – Umut Tabak Dec 06 '23 at 09:27

1 Answer

You mention in a comment that the relative residual norm(Bm*Lrm) / norm(Bm) / norm(Lrm) is of the order of machine precision.

So everything is working as intended, it seems. Essentially, the computed residual is large because Bm and/or Lrm contain large elements. Floating point computation ensures small relative errors, not small absolute errors.
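To illustrate (a minimal sketch with a hypothetical random matrix; the 1e8 factor is chosen only to mimic the jump from 1e-15 to 1e-7 that you observed):

rng(0);
Bm  = randn(50, 80);                      % hypothetical well-scaled matrix, m < n
Lrm = null(Bm);                           % orthonormal null-space basis (computed via SVD)
norm(Bm*Lrm)                              % ~1e-14: absolute residual at round-off level

Bm2 = 1e8*Bm;                             % same null space, but large entries
norm(Bm2*Lrm)                             % ~1e-6: absolute residual inflated by the scaling
norm(Bm2*Lrm)/norm(Bm2)/norm(Lrm)         % ~1e-16: relative residual still at machine precision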

Federico Poloni
  • It seems to me that I am looking for the reason of the problem in the wrong place. Let me check once more and get back. – Umut Tabak Dec 06 '23 at 10:00