We can amplify and combine @Silvia's and @Andrew Moylan's points: the integrand is a polynomial, and the Gauss-Kronrod rule performs better than the default multidimensional rule.
We can use the Cartesian product of any interpolatory rule, such as "ClenshawCurtisRule" or "GaussKronrodRule", provided the order is high enough that both the integration rule and the embedded error rule integrate the polynomial exactly. According to the docs, with a setting of "Points" -> n, the error rule for "GaussKronrodRule" is exact on polynomials of degree up to 2n - 1, and "ClenshawCurtisRule" is exact on polynomials of degree up to n (more precisely, up to degree n or n - 1, according as n is odd or even, respectively). The polynomial f[s, t] has degree {6, 6}, so both the s and t integrals can use the same setting for "Points".
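For reference, the minimal "Points" settings implied by these degree requirements, for a degree-6 integrand, work out as follows (these are the same values used in the examples that follow):

deg = 6;
deg + 1               (* "ClenshawCurtisRule": n = 7 is odd, exact up to degree n = 7 >= 6 *)
(* 7 *)
Ceiling[(deg + 1)/2]  (* "GaussKronrodRule": error rule exact up to degree 2 n - 1 = 7 >= 6 *)
(* 4 *)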
The principal issue, already pointed out by Andrew Moylan, is that zero cannot be approximated to any relative precision, so you have to set a finite AccuracyGoal -> acc, which indicates that a numerical result closer to zero than 10^-acc will be acceptable. More precisely, it means that the difference between the integration rule and the embedded error rule must be less than 10^-acc + i0 * 10^-prec, where i0 is the result of the integration rule and prec is the PrecisionGoal.
With all that in mind and sufficient WorkingPrecision, any level of accuracy is obtainable (within practical limits). For instance, to get closer than 10^-100:
Block[{acc = 100, deg = 6, res},
(res = NIntegrate[function[s, t], {t, 0, 1/2}, {s, 0, 1/2},
Method -> {"ClenshawCurtisRule", "Points" -> deg + 1},
MaxRecursion -> 0, AccuracyGoal -> acc,
WorkingPrecision -> acc]) // AbsoluteTiming // Print;
Chop[res, 10^-acc]
]
(*
{0.00845, -6.4369978618316750815140837884620002197320415488373585727...3*10^-114}
0
*)
Block[{acc = 100, deg = 6, res},
(res = NIntegrate[function[s, t], {t, 0, 1/2}, {s, 0, 1/2},
Method -> {"GaussKronrodRule",
"Points" -> Ceiling[(deg + 1)/2]}, MaxRecursion -> 0,
AccuracyGoal -> acc, WorkingPrecision -> acc]) //
AbsoluteTiming // Print;
Chop[res, 10^-acc]
]
(*
{0.006585, 3.0164732651049401161481904591662713102211101606252755851...8*10^-113}
0
*)
To use MachinePrecision, one has to account for rounding error and shave a few digits off the AccuracyGoal.
NIntegrate[function[s, t], {t, 0, 1/2}, {s, 0, 1/2},
Method -> {"ClenshawCurtisRule", "Points" -> 7}, MaxRecursion -> 0,
AccuracyGoal -> MachinePrecision - 2.7,
WorkingPrecision -> MachinePrecision] // AbsoluteTiming
(* {0.003191, -1.55431*10^-14} *)
NIntegrate[function[s, t], {t, 0, 1/2}, {s, 0, 1/2},
Method -> {"GaussKronrodRule", "Points" -> 4}, MaxRecursion -> 0,
AccuracyGoal -> MachinePrecision - 2.7,
WorkingPrecision -> MachinePrecision] // AbsoluteTiming
(* {0.003304, -2.45137*10^-13} *)
One can also use "NewtonCotesRule" and "LobattoKronrodRule". Note also that MaxRecursion -> 0 means NIntegrate applies the rule just once, without recursive subdivision. @leibs wondered whether classical Gauss quadrature could be done this way; it can, via "GaussBerntsenEspelidRule".
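For instance, a sketch along the lines of the examples above: since an n-point Gaussian rule is exact up to degree 2n - 1, "Points" -> 4 should cover the degree-6 integrand.

NIntegrate[function[s, t], {t, 0, 1/2}, {s, 0, 1/2},
 Method -> {"GaussBerntsenEspelidRule", "Points" -> 4},
 MaxRecursion -> 0, AccuracyGoal -> MachinePrecision - 2.7,
 WorkingPrecision -> MachinePrecision]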
Comments:

– leibs (Jan 27 '14): NIntegrate[function[s, t], {t, 0, 1/2}, {s, 0, 1/2}, Method -> "DoubleExponential"]
– andre314 (Jan 27 '14): With AccuracyGoal -> 5 it is 1000 times faster and there are no warning messages.
– Silvia (Jan 27 '14): Integrate … very efficiently.