LinearSolve[] actually computes a permuted Cholesky decomposition; that is, it performs the decomposition $\mathbf P^\top\mathbf A\mathbf P=\mathbf G^\top\mathbf G$, where $\mathbf P$ is a permutation matrix and $\mathbf G$ is upper triangular. To extract $\mathbf P$ and $\mathbf G$, we need to use some undocumented properties. Here's a demo:
mat = SparseArray[{Band[{2, 1}] -> -1., Band[{1, 1}] -> 2.,
Band[{1, 2}] -> -1.}, {5, 5}];
ls = LinearSolve[mat, Method -> "Cholesky"];
g = ls["getU"]; (* upper triangular factor *)
perm = ls["getPermutations"][[1]]; (* permutation vector *)
p = SparseArray[MapIndexed[Append[#2, #1] -> 1 &, perm]]; (* permutation matrix *)
p.Transpose[g].g.Transpose[p] == mat (* check! *)
True
Here's a classical example of why permutation matrices are a must in sparse Cholesky decompositions.
Consider the following upper arrowhead matrix:
arr = SparseArray[{{1, j_} | {j_, 1} /; j != 1 -> -1., Band[{1, 1}] -> 3.}, {5, 5}];
ArrayPlot[arr]

Watch what happens after performing a Cholesky decomposition:
ArrayPlot[CholeskyDecomposition[arr]]

Boom, fill-in. Imagine if this had been a $100\,000\times 100\,000$ upper arrowhead matrix!
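One way to see the fill-in numerically is to count explicit nonzeros before and after factoring. (The nnz helper below is just for this demo; Chop discards entries that are mere round-off.)

nnz[m_] := Length[SparseArray[Chop[m]]["NonzeroValues"]];
{nnz[arr], nnz[CholeskyDecomposition[arr]]}
{13, 15}

For this $5\times 5$ example, the factor's upper triangle is completely full: every structural zero above the diagonal has filled in.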
If, however, we permute arr to a lower arrowhead matrix, like so:
exc = Reverse[IdentityMatrix[5]];
la = exc.arr.Transpose[exc];
ArrayPlot[CholeskyDecomposition[la]]

What a difference a permutation makes!
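Since exc is symmetric and its own inverse, we can undo the permutation and confirm that the factor of la really does reproduce arr, up to floating-point round-off:

gl = CholeskyDecomposition[la];
Max[Abs[exc.Transpose[gl].gl.exc - arr]] (* should be zero up to machine precision *)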
For matrices with more complicated sparsity patterns, it is hard to predict in advance whether an unpermuted Cholesky factor will suffer disastrous fill-in. Thus, all standard sparse Cholesky routines perform some sort of fill-reducing permutation; as with any automatic routine of this sort, the permutation chosen may not be optimal, but it is usually good enough.
For reference, here's how LinearSolve[] does on an upper arrowhead:
lsar = LinearSolve[arr, Method -> "Cholesky"];
g = lsar["getU"];
ArrayPlot[g]
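As in the first demo, we can pull out the permutation LinearSolve[] chose and check both the reconstruction and the sparsity of the factor. (getU may return a packed array rather than a SparseArray, so we wrap it in SparseArray[] just in case.)

perm = lsar["getPermutations"][[1]];
p = SparseArray[MapIndexed[Append[#2, #1] -> 1 &, perm]];
Max[Abs[p.Transpose[g].g.Transpose[p] - arr]] (* ~ 0, up to round-off *)
Length[SparseArray[g]["NonzeroValues"]] (* far sparser than the unpermuted factor *)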

If you look at ls["Properties"], you can find many other useful (undocumented) properties. – xyz Nov 07 '15 at 15:13