
Stirling's Approximation is given by

$$n! \sim \sqrt{2\pi n} \left(\frac{n}{e}\right)^n$$

From a substantial improvement of the Stirling formula (due to Mortici), we have an elegant approximation given by

$$n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}+\frac{1}{12 e n}\right)^n$$

We can write a simple function that returns Stirling's approximation, the improved version, and the actual value:

  nf[n_] := {Sqrt[2. Pi n] (n/E)^n, Sqrt[2. Pi n] (n/E + 1/(12 E n))^n, 1. n!}

My question: how can we find the percent error of each approximation relative to the actual value and make it part of the function specification?

Maybe even plot the three variants.

Moo
  • Exp@Normal@Series[LogGamma[n + 1], {n, Infinity, 1}] does a pretty good job, only 2-4 times worse than the paper's formula for $100\le n \le 2500$. – Michael E2 Jun 14 '23 at 19:37
  • @MichaelE2: That is precisely the next place I was going - to compare these to improving series based on Stirling's Series and also on Ramanujan's formula. – Moo Jun 14 '23 at 19:39
  • I'm surprised the author didn't compare their formula to Normal@Series[Gamma[n + 1], {n, Infinity, 1}] (Stirling series), which seems fairer than comparing it to the zero-order approximation, especially since theirs is still much better. – Michael E2 Jun 14 '23 at 19:47
  • Likely, because that approach didn't align with their goal of saying "look at how much better this is!" ;-) – Moo Jun 14 '23 at 20:06
  • But they based their formula on the Stirling series...and if they're going to add a term to Stirling's formula, they should compare their change with an extra term from the standard series. Perhaps I'm sensitive because I've been using the 2-term Stirling series since I was taught calculus in the early 1980s. I expect a 2011 "significant improvement" to be over the approximation in my toolbag, which, in fact, it turns out to be (!). That fact clarifies its significance. After the first time through, it wasn't clear whether theirs was better than the usual series. It should have been. – Michael E2 Jun 14 '23 at 20:26
  • As soon as you put machine-precision numbers into your formula, like 2. or 1., your result will always be given as a machine-precision number. This can be an obstacle when looking at very large numbers, asymptotes, etc. as you're trying to do. – Roman Jun 14 '23 at 20:28
  • And FYI an even better approximation is $\Gamma(x) \approx e^{-x}\sqrt{\frac{2\pi}{x}}\left(x\sqrt{\frac{1}{810x^6} + x\sinh\left(\frac{1}{x}\right)}\right)^x$. WL form: Exp[-x] Sqrt[2π/x] (x Sqrt[1/(810 x^6) + x Sinh[1/x]])^x – Greg Hurst Jun 15 '23 at 11:36
  • @GregHurst: Excellent! May I ask the source? Did you derive it? – Moo Jun 15 '23 at 12:14
  • I don't remember at this point. But back when I worked on W|A, it came from user feedback. We ended up placing it in a W|A pod, here for example: https://www.wolframalpha.com/input?i=Gamma%28x%29 – Greg Hurst Jun 16 '23 at 19:08
  • Oh, looks like the wiki link you posted has sources for this formula in this section: https://en.wikipedia.org/wiki/Stirling%27s_approximation#Versions_suitable_for_calculators – Greg Hurst Jun 16 '23 at 19:09

3 Answers

n1[n_] = Sqrt[2 π n] (n/E)^n;
n2[n_] = Sqrt[2 π n] (n/E + 1/(12 E n))^n;

Series[n1[n] / n!, {n, ∞, 1}]
(*    1 - 1/(12 n) + O[1/n]^2    *)

Series[n2[n] / n!, {n, ∞, 4}]

(*    1 - 1/(1440 n^3) + O[1/n]^5    *)

We see that $n_2$ approximates $n!$ with a relative error that decreases as $n^{-3}$, whereas the error of $n_1$ decreases as $n^{-1}$ (the inverse of the well-known Stirling series $1+\frac{1}{12n}+\frac{1}{288n^2}+\ldots$).
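As a sanity check, the reciprocal expansion recovers exactly those first terms of the classical Stirling series:

```mathematica
(* the reciprocal series reproduces the classical Stirling series *)
Series[n!/n1[n], {n, ∞, 2}]
(*    1 + 1/(12 n) + 1/(288 n^2) + O[1/n]^3    *)
```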

Ramanujan's formula is better:

n3[n_] = Sqrt[π] (n/E)^n (8 n^3 + 4 n^2 + n + 1/30)^(1/6);
Series[n3[n] / n!, {n, ∞, 4}]
(*    1 + 11/(11520 n^4) + O[1/n]^5    *)

@GregHurst's suggestion is better still:

n4[n_] = Exp[-x] Sqrt[2π/x] (x Sqrt[1/(810 x^6) + x Sinh[1/x]])^x /. x -> n + 1;
Series[n4[n] / n!, {n, ∞, 7}]
(*    1 + 163/(340200 n^7) + O[1/n]^8    *)
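For a concrete sense of scale, one can evaluate the four approximations at, say, $n=100$ with arbitrary precision (a quick check; the exact digits are not asserted here):

```mathematica
(* relative errors of the four approximations at n = 100 *)
With[{n = 100}, N[{n1[n], n2[n], n3[n], n4[n]}/n! - 1, 5]]
```

The magnitudes come out around $10^{-4}$, $10^{-10}$, $10^{-12}$, and $10^{-19}$ respectively, consistent with the leading error terms of the four series above.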
Roman

I offer another way to compare two approximations of n!. We compare the Mortici formula (from the paper) to the two-term Stirling series (which the paper fails to do):

mortici = Sqrt[2 \[Pi] n] (n/E + 1/(12 E n))^n;
stirling2 = Sqrt[2 \[Pi] n] (n/E)^n (1 + 1/(12 n));
factorial = n!;

(* takes a few seconds *)
Series[(mortici - factorial)/(stirling2 - factorial) // Simplify,
   {n, Infinity, 10}] // Normal // FullSimplify[#, n > 0] &;
Series[%, {n, Infinity, 2}]

(* 1/(5 n) + 77/(450 n^2) + O[1/n]^3 *)

{1/(5 n), (mortici - factorial)/(stirling2 - factorial)} /.
   {{n -> 100.}, {n -> 2500.`30}} // N

(*  columns: 1/(5 n), actual error ratio
    {{0.002, 0.00201741}, {0.00008, 0.0000800274}}  *)

Thus the Mortici formula has an absolute error approximately 1/(5n) times the error of the two-term Stirling formula.


Numerics note: It may be better to use logarithms and LogGamma, since the numbers can get very large. However, arbitrary-precision numbers seem to work well enough up to n = 2500. In any case, here's the set-up for a logarithmic approach:

logmortici = mortici // Log // FullSimplify[#, n > 0] &;
logstirling2 = stirling2 // Log // FullSimplify[#, n > 0] &;
logfactorial = LogGamma[n + 1];
(* rel/abs errors *)
relmortici = Internal`Expm1[logmortici - logfactorial];
absmortici = relmortici factorial;
relstirling2 = Internal`Expm1[logstirling2 - logfactorial];
absstirling2 = relstirling2 factorial;

The function Internal`Expm1[x] computes Exp[x]-1 accurately when Exp[x] is near 1 (x near zero).
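A small illustration of why this matters (a sketch): for tiny x, the naive Exp[x] - 1 loses most of its digits to cancellation at machine precision, while Internal`Expm1 does not.

```mathematica
(* catastrophic cancellation in Exp[x] - 1 for tiny x *)
x0 = 1.*^-12;
{Exp[x0] - 1, Internal`Expm1[x0]}
(* the first carries only a few correct digits; the second is accurate to machine precision *)
```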

Note also that Exp[2500.] overflows machine precision and is automatically promoted to an arbitrary-precision number with $MachinePrecision digits (approximately 15.95). However, for the absolute error mortici - factorial at n = 2500., we need at least 19 digits to get even a one-digit result.

Michael E2
  • Thanks @MichaelE2 (+1). This is exactly the sort of comparison I wanted to pursue, as it makes a good example of many issues encountered in numerical analysis. – Moo Jun 15 '23 at 01:19
  • @MichaelE2 This is wonderful. Is there a resource on the internal function and on your comments about Exp[2500.]? Just wanting to increase my understanding. – ubpdqn Jun 18 '23 at 11:44
  • @ubpdqn To avoid Exp[x] under/overflow, the limits are Log@{$MinMachineNumber, $MaxMachineNumber}, around -708. and 709. In V11.3, the behavior for machine underflow changed (see, for instance, https://mathematica.stackexchange.com/q/197758). Before, numbers were changed to arbitrary precision; afterwards, gradual underflow to zero through subnormal numbers became the behavior. Machine overflow has always resulted in changing the numbers to arbitrary precision. See the tutorial https://reference.wolfram.com/language/tutorial/Numbers.html#22593. – Michael E2 Jun 18 '23 at 13:51

The relative errors can be appended to the output like this:

nf[n_] := {t1 = Sqrt[2. Pi n] (n/E)^n, 
  t2 = Sqrt[2. Pi n] (n/E + 1/(12 E n))^n, 
  t3 = 1. n!, (t1 - t3)/t3, (t2 - t3)/t3
}

and plotted like:

DiscretePlot[nf[i][[-2 ;; -2]], {i, 1, 20}, PlotRange -> All]
DiscretePlot[nf[i][[-1 ;; -1]], {i, 1, 20}]
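To compare the two error curves on a common scale, one could also plot their magnitudes logarithmically (a sketch, assuming a Mathematica version in which DiscretePlot accepts ScalingFunctions):

```mathematica
(* both relative errors on a log scale; Abs because the errors are signed *)
DiscretePlot[Abs@nf[i][[-2 ;; -1]], {i, 1, 20},
 ScalingFunctions -> "Log", PlotRange -> All]
```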

[plots of the relative errors produced by the two DiscretePlot calls above]

Daniel Huber
  • That is a good way to do it. Thanks – Moo Jun 14 '23 at 18:47
  • 3
    I'd recommend wrapping the code for nf in a Module and localizing the $t_n$ variables, so they won't leak into the Global` context. (+1) – MarcoB Jun 14 '23 at 19:19
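Following @MarcoB's comment, a version that localizes the temporaries might look like this (a sketch):

```mathematica
(* t1, t2, t3 stay local and don't leak into the Global` context *)
nf[n_] := Module[{t1, t2, t3},
  t1 = Sqrt[2. Pi n] (n/E)^n;
  t2 = Sqrt[2. Pi n] (n/E + 1/(12 E n))^n;
  t3 = 1. n!;
  {t1, t2, t3, (t1 - t3)/t3, (t2 - t3)/t3}
]
```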