What you have is a MultinormalDistribution. The quadratic and linear forms in the exponential can be rewritten in the form $-\frac12(\vec{x}-\vec{\mu})^\top\Sigma^{-1}(\vec{x}-\vec{\mu})$, where $\vec{\mu}$ is the mean and $\Sigma$ is the covariance matrix; see the documentation.
With this, you can do integrals of the type given in the question by invoking Expectation, as in this example:
Expectation[
 x^2 y^3, {x, y} \[Distributed]
  MultinormalDistribution[{μ1, μ2},
   {{σ1^2, ρ σ1 σ2},
    {ρ σ1 σ2, σ2^2}}]]
The result is:
$\mu_1^2 \mu_2^3+\mu_2^3 \sigma_1^2+6 \mu_1 \mu_2^2 \rho\, \sigma_1 \sigma_2+3 \mu_1^2 \mu_2 \sigma_2^2+3 \mu_2 \sigma_1^2 \sigma_2^2+6 \mu_2 \rho^2 \sigma_1^2 \sigma_2^2+6 \mu_1 \rho\, \sigma_1 \sigma_2^3$
Edit
Regarding the normalization prefactor mentioned in Sjoerd's comment, you can use the fact that for any dimension $n$
$\int\exp\left(-\frac{1}{2}\vec{z}^\top \Sigma^{-1}\vec{z}\right)\mathrm{d}^n z = (2\pi)^{n/2}\sqrt{\det(\Sigma)}$
Hopefully these hints will be enough for you to fill in the missing linear-algebra steps to make the connection to your given matrix.
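As a sanity check (a minimal sketch reusing the illustrative 2×2 covariance matrix from above), Mathematica can confirm this formula for $n=2$:

Σ = {{σ1^2, ρ σ1 σ2}, {ρ σ1 σ2, σ2^2}};
Integrate[Exp[-1/2 {x, y}.Inverse[Σ].{x, y}],
 {x, -Infinity, Infinity}, {y, -Infinity, Infinity},
 Assumptions -> σ1 > 0 && σ2 > 0 && -1 < ρ < 1]

This should return 2 π σ1 σ2 Sqrt[1 - ρ^2], which is exactly $(2\pi)^{2/2}\sqrt{\det(\Sigma)}$.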
Edit 2
In response to the comment by chris, for polynomials as prefactors one can also use the slightly simpler but equivalent form
Moment[
 MultinormalDistribution[{μ1, μ2},
  {{σ1^2, ρ σ1 σ2},
   {ρ σ1 σ2, σ2^2}}],
 {2, 3}]
This is the same example as above, with the powers of x and y appearing in the second argument. See the documentation for Moment.
The difference between Moment and Expectation is that Moment is restricted to the expectation values of polynomials.
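As a quick consistency check, the two agree on the example above (this should return True):

dist = MultinormalDistribution[{μ1, μ2},
   {{σ1^2, ρ σ1 σ2}, {ρ σ1 σ2, σ2^2}}];
Simplify[Moment[dist, {2, 3}] ==
  Expectation[x^2 y^3, {x, y} \[Distributed] dist]]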
Edit 3
Before going on with the symbolic manipulations that I assume are desired here, let me also point out that you can do your integrals quite straightforwardly if your integrand contains no symbolic parameters: then you just need a numerical integral, replacing Integrate with NIntegrate.
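For instance (a minimal sketch; the integrand and the parameter values here are purely illustrative):

NIntegrate[
 x^2 y^3 Exp[-(x - 1)^2 - (y - 2)^2 - (x - y)^2],
 {x, -Infinity, Infinity}, {y, -Infinity, Infinity}]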
But now back to the symbolic part:
A follow-up question arose: how does one complete the square in the exponential to get to the standard form of the multinormal distribution, starting from a form like this:
$$\exp(\,\vec{x}^\top A\vec{x}+\vec{v}^\top\vec{x})$$
The matrix $A$ in the exponential is symmetric, $A^\top=A$, and negative definite (otherwise the integral would diverge). Therefore $A$ is invertible, and the inverse is symmetric,
$$\left(A^{-1}\right)^\top=A^{-1}$$
With this, you can verify
$$\left(\vec{x}+\frac{1}{2}A^{-1}\vec{v}\right)^\top A\left(\vec{x}+\frac{1}{2}A^{-1}\vec{v}\right)=\vec{x}^\top A\vec{x}+\vec{v}^\top\vec{x}+\frac{1}{4}\vec{v}^\top A^{-1}\vec{v} $$
by directly multiplying out the factors on the left. Therefore,
$$\exp(\vec{x}^\top A\vec{x}+\vec{v}^\top \vec{x})=\exp(\left(\vec{x}-\vec{\mu}\right)^\top A\left(\vec{x}-\vec{\mu}\right)-\frac{1}{4}\vec{v}^\top A^{-1}\vec{v})$$
where
$$\vec{\mu}\equiv-\frac{1}{2}A^{-1}\vec{v} $$
Compare this to the standard form of the Gaussian integral, and you see that in the notation of Mathematica's documentation
$$A \equiv -\frac{1}{2} \Sigma^{-1}$$
and our integral differs from the standard Gaussian one by a factor
$$\exp(-\frac{1}{4}\vec{v}^\top A^{-1}\vec{v})$$
Now we have all the pieces we need, except that you still have to calculate the inverse matrix $A^{-1} = -2\Sigma$, using

Inverse[mat]

if I go back to your original notation, where the matrix $A$ is called mat.
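Here is a minimal symbolic sketch of these steps in two dimensions (the entries of a and v are illustrative symbols, with a assumed symmetric and invertible):

a = {{a11, a12}, {a12, a22}}; v = {v1, v2}; xx = {x1, x2};
μ = -1/2 Inverse[a].v;  (* completing the square *)
Simplify[(xx - μ).a.(xx - μ) - 1/4 v.Inverse[a].v - (xx.a.xx + v.xx)]

The output 0 confirms the completed-square identity above.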
Edit 4:
In view of the other answers, I put together the above steps in a module so that my approach can be compared more easily to the alternatives. The result is quite compact and is not significantly slower than the fastest alternative (by @ybeltukov):
gaussMoment[fPre_, fExp_, vars_] :=
 Module[{coeff, dist, ai, μ, norm},
  (* constant, linear and quadratic coefficients of the exponent *)
  coeff = CoefficientArrays[fExp, vars, "Symmetric" -> True];
  ai = Inverse[2 coeff[[3]]]; (* (2A)^-1, equal to -Σ *)
  μ = -ai.coeff[[2]]; (* the mean, -A^-1.v/2 *)
  dist = MultinormalDistribution[μ, -ai]; (* covariance Σ = -(2A)^-1 *)
  (* 1/PDF at the mean gives the normalization (2π)^(n/2) Sqrt[Det[Σ]] *)
  norm = 1/PDF[dist, vars] /. Thread[vars -> μ];
  Simplify[
   norm Exp[1/2 coeff[[2]].μ + coeff[[1]]] Distribute@
     Expectation[fPre, vars \[Distributed] dist]]]
The normalization factor can be easily obtained from the PDF. I used the same approach as @ybeltukov to extract the matrix $A$ from the exponent, except that I added a factor of $2$ at that stage to prevent that factor from popping up twice at later points.
Here are some tests:
RepeatedTiming[
gaussMoment[(x^2 + x^4 + x^6), -(x - 1)^2, {x}]]
$$\left\{0.0023,\frac{223 \sqrt{\pi }}{8}\right\}$$
RepeatedTiming[
gaussMoment[(x^2 + x y) , -(x - a1)^2 - (y - a2)^2 - (x - y)^2, {x, y}]]
$$\left\{0.0014,\ \frac{\pi \left(4 a_1^2+6 a_1 a_2+2 a_2^2+3\right) e^{-\frac{1}{3}(a_1-a_2)^2}}{6 \sqrt{3}}\right\}$$
This is about four orders of magnitude faster than doing the integrals using plain Integrate.
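For reference, this is the plain Integrate version of the second test above (an illustrative comparison; the assumptions are needed for convergence):

Integrate[(x^2 + x y) Exp[-(x - a1)^2 - (y - a2)^2 - (x - y)^2],
 {x, -Infinity, Infinity}, {y, -Infinity, Infinity},
 Assumptions -> Element[{a1, a2}, Reals]]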
One other big advantage (in addition to its simplicity and speed) is that it can deal with non-polynomial prefactors. Here is an example (it takes longer to run, but the other methods cannot do it at all):
gaussMoment[Sin[2 Pi x] Cos[Pi x],
 -(x - a1)^2 - (y - a2)^2 - (x - y)^2, {x, y}]
$$\frac{i \pi\, e^{-a_1^2-\frac{a_2^2}{2}} \left(e^{\frac{1}{6}(2 a_1+a_2-i \pi)^2}-e^{\frac{1}{6}(2 a_1+a_2+i \pi)^2}+e^{\frac{1}{6}(2 a_1+a_2-3 i \pi)^2}-e^{\frac{1}{6}(2 a_1+a_2+3 i \pi)^2}\right)}{4 \sqrt{3}}$$
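Although the result contains $i\pi$ terms, it is real. As a sanity check (with illustrative parameter values a1 = 1, a2 = 2), it can be compared against a direct numerical integration; the two numbers should agree up to numerical noise:

Chop@N[gaussMoment[Sin[2 Pi x] Cos[Pi x],
    -(x - a1)^2 - (y - a2)^2 - (x - y)^2, {x, y}] /. {a1 -> 1, a2 -> 2}]
NIntegrate[
 Sin[2 Pi x] Cos[Pi x] Exp[-(x - 1)^2 - (y - 2)^2 - (x - y)^2],
 {x, -Infinity, Infinity}, {y, -Infinity, Infinity}]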