Update
I am leaving my old answer below for historical reference. However, as of version 11.2.0 (currently available on Wolfram Cloud and soon to be released as a desktop product), the low-level linear algebra functions have been documented; see
http://reference.wolfram.com/language/LowLevelLinearAlgebra/guide/BLASGuide.html
The comments by both Michael E2 and J. M. ♦ are already an excellent answer, so this is just my attempt at summarizing.
Undocumented means just what it says: there need not be reference pages, usage messages, or any other kind of documentation. There are many undocumented functions, and if you follow Mathematica Stack Exchange regularly, you will encounter them often. Using such functionality, however, is not without caveats.
Sometimes, functions (whether documented or undocumented) are written in top-level (Mathematica, or if you will, Wolfram Language) code, so one can inspect the actual implementation by spelunking. However, that is not the case for functions implemented in C as part of the kernel.
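As an illustration of such spelunking, here is a minimal sketch. It assumes `GeneralUtilities`PrintDefinitions` (shipped with recent versions of Mathematica); which functions actually carry top-level code varies from version to version, so the function chosen here is just an example:

```mathematica
(* A minimal spelunking sketch: display the top-level definition of a
   System` function, if it has one. PrintDefinitions lives in the
   GeneralUtilities` context. *)
Needs["GeneralUtilities`"]
PrintDefinitions[ArrayPlot] (* opens a formatted view of the implementation *)
```

For kernel functions implemented in C, this approach shows nothing useful, which is exactly the distinction made above.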
Particularly for the LinearAlgebra`BLAS` interface, the function signatures stay quite close to the well-established FORTRAN conventions (which is also what MKL adheres to; see the MKL guide for ?gemm), with a few unsurprising adjustments. For instance, consider
xGEMM( TRANSA, TRANSB, M, N, K, ALPHA, A, LDA, B, LDB, BETA, C, LDC )
and the corresponding syntax for LinearAlgebra`BLAS`GEMM which is
GEMM[ transa, transb, alpha, a, b, beta, c ]
where we can see the storage-related parameters such as dimensions and strides are omitted, since the kernel already knows how the matrices are laid out in memory. All other arguments are the same, and even come in the same order.
As a usage example,
a = {{1, 2}, {3, 4}}; b = {{5, 6}, {7, 8}}; c = b; (* c will be overwritten *)
LinearAlgebra`BLAS`GEMM["T", "N", -2, a, b, 1/2, c]; c
(* {{-(99/2), -57}, {-(145/2), -84}} *)
-2 Transpose[a].b + (1/2) b
(* {{-(99/2), -57}, {-(145/2), -84}} *)
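The same pattern carries over to other routines. As a sketch, here is LinearAlgebra`BLAS`GEMV, assuming it follows the analogous convention GEMV[trans, alpha, a, x, beta, y] (the storage arguments M, N, LDA, INCX, INCY of the FORTRAN xGEMV are again omitted, and y must be a symbol since it is overwritten):

```mathematica
(* Sketch: y = alpha Transpose-or-not[a].x + beta y, with y overwritten.
   Assumes GEMV[trans, alpha, a, x, beta, y], by analogy with GEMM. *)
a = {{1, 2}, {3, 4}}; x = {1, 1}; y = {1, 1}; (* y will be overwritten *)
LinearAlgebra`BLAS`GEMV["N", 2, a, x, 1, y]; y
(* {7, 15} *)

2 a.x + {1, 1} (* the same result, computed with Dot *)
(* {7, 15} *)
```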
Note that for machine-precision matrices, Dot will end up calling the corresponding optimized xgemm routine from MKL anyway, so I would not expect a big performance difference. For matrix multiplication, Dot is certainly more readable and easier to use than GEMM.
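To check this claim on your own machine, here is a rough timing sketch; the absolute numbers will depend on your hardware and MKL version, but the two timings should be in the same ballpark:

```mathematica
(* Rough timing sketch: Dot vs. a direct GEMM call on machine-precision
   matrices. Both paths should end up in MKL's dgemm. *)
n = 1000;
a = RandomReal[1, {n, n}]; b = RandomReal[1, {n, n}];
First@RepeatedTiming[a.b;]

c = ConstantArray[0., {n, n}]; (* c is overwritten by GEMM *)
First@RepeatedTiming[LinearAlgebra`BLAS`GEMM["N", "N", 1., a, b, 0., c];]
```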
On the topic of BLAS in Mathematica, I would also recommend the 2003 developer conference talk by Zbigniew Leyk, which has some further implementation details and examples.