I have been using the QR-based approach described in this link to find the null space of rectangular, and possibly sparse, matrices that arise from coupling conditions between different domains, such as interface compatibility conditions. I am also aware of this link, where some other people suggest efficient ways to compute the null spaces of dense matrices.
One of these compatibility matrices, say Bm, is an m-by-n matrix with m < n.
Following the first link above, I could compute the null space of these rectangular matrices accurately with the following MATLAB code:
tol_rank = 1e-16;                         % absolute tolerance on the diagonal of R
[QBm, RBm, EBm] = qr(Bm);                 % column-pivoted QR: Bm*EBm = QBm*RBm
rank_Bm = nnz(abs(diag(RBm)) > tol_rank); % numerical rank estimate
Csz = size(RBm,2) - rank_Bm;              % dimension of the right null space
R1  = RBm(1:rank_Bm, 1:rank_Bm);          % leading triangular block
R12 = RBm(1:rank_Bm, rank_Bm+1:end);      % trailing block (same rows as R1)
[LR1, UR1] = lu(R1);
X = -(UR1 \ (LR1 \ R12));                 % solve R1*X = -R12
Lrm = sparse(EBm) * [X;
                     speye(Csz)];
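One thing worth noting about the code above: tol_rank = 1e-16 is an absolute threshold, so the rank decision depends on the overall scale of Bm and can misclassify diagonal entries when the matrix norm changes. A more common choice is a relative tolerance that scales with the largest diagonal entry of R and machine precision. A minimal sketch (the specific threshold formula is my assumption, in the spirit of MATLAB's rank() default, not part of the original code):

```matlab
% Relative rank tolerance: scale with the largest |R(i,i)| and eps.
% The factor max(size(Bm)) is a conventional safety margin, not a hard rule.
d = abs(diag(RBm));
tol_rank = max(size(Bm)) * eps(max(d));
rank_Bm = nnz(d > tol_rank);
```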
where Lrm is the right null space of Bm. As a result, norm(Bm*Lrm) was on the order of round-off for the problems I had encountered so far. But now, due to a different mathematical transformation, the null spaces calculated with the above code are not as accurate as before. For instance, norm(Bm*Lrm) used to be on the order of machine eps, around 1e-15, for the Bm matrices I was previously using; with the same code, the new matrices give norm(Bm*Lrm) on the order of 1e-7. What could be the reason for this change in accuracy in the calculation of the null space? Which other paths can I follow to increase the accuracy of the calculated null space?
Having said that, I have also tried MATLAB's built-in SVD, and it also does not produce a round-off-level null space. I also had a look at the code in this link; it shows the same problem. So it appears to me that the scaling of the Bm matrices has deteriorated badly, but I could not understand the reason.
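If bad row scaling is indeed the culprit, one cheap experiment is to equilibrate the rows of Bm before factorizing: a diagonal row scaling Dr*Bm has exactly the same right null space as Bm, so any null-space basis computed for the scaled matrix is still valid for the original. A minimal sketch (the choice of reciprocal row inf-norms as the scaling is my assumption):

```matlab
% Row equilibration: scale each row to unit inf-norm (assumes no zero rows).
% This does not change the right null space, but it can make the rank
% decision and the residual much better behaved when rows differ wildly
% in magnitude.
m  = size(Bm, 1);
dr = 1 ./ full(max(abs(Bm), [], 2));   % reciprocal row inf-norms
Bs = spdiags(dr, 0, m, m) * Bm;        % Dr * Bm, same null space as Bm
Lrm = null(full(Bs));                  % or the QR-based routine above
rel_res = norm(full(Bm)*Lrm) / (norm(full(Bm)) * norm(Lrm));
```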
norm(Bm*Lrm) / norm(Bm) / norm(Lrm). That's what the theory guarantees to be small. – Federico Poloni Dec 06 '23 at 09:13
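Following up on this comment: for a badly scaled Bm, the absolute residual norm(Bm*Lrm) is inflated by norm(Bm) even when the computed basis is as accurate as floating point allows, while the relative residual stays near eps. A small illustration (the example matrix is of my own choosing):

```matlab
% A rank-2 matrix with widely differing row scales; its right null
% space is spanned by [1; 1; 1].
Bm  = [1e8  0  -1e8;
       0    1  -1  ];
Lrm = null(Bm);                                  % orthonormal null-space basis
abs_res = norm(Bm*Lrm);                          % inflated by norm(Bm), about 1.4e8
rel_res = norm(Bm*Lrm) / norm(Bm) / norm(Lrm);   % this is the quantity theory bounds
```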