11

I mean $$\int_0^{2\pi}\frac{\sin(10050\,t)\,\sin(10251\,t)\,\cos(2022\,t)}{\sin(50\,t)\,\sin(51\,t)}\,dt.$$

Here is my unsuccessful attempt:

NIntegrate[Cos[2022*t]*Sin[10050*t]*Sin[10251*t]/Sin[50*t]/Sin[51*t], {t, 0,2*Pi}, AccuracyGoal -> 5, PrecisionGoal -> 5]

NIntegrate::ncvb: NIntegrate failed to converge to prescribed accuracy after 9
recursive bisections in t near {t} = {3.15386}. NIntegrate obtained
11.171230707439639 and 8.934855726568145 for the integral and error estimates.

I could not find an answer in the documentation (see the "Finite Region Oscillatory Integration" section).

user64494

  • Try TrigReduce on the integrand. – Andreas Sep 27 '22 at 08:26
  • I am interested rather in its numeric calculation (the documentation presents examples of this type). – user64494 Sep 27 '22 at 10:54
  • Maybe MinRecursion -> 8, Method -> {"GaussKronrodRule", "Points" -> 21}. – Goofy Sep 27 '22 at 11:30
  • @Goofy: Many thanks. This is it. Can you present your comment as an answer, elaborating on it? In particular, how did you arrive at these options? – user64494 Sep 27 '22 at 11:51
  • It should be noted that NIntegrate[Cos[2022*t]*Sin[10050*t]*Sin[10251*t]/Sin[50*t]/Sin[51*t], {t, 0, 2*Pi}, Method -> {"LevinRule", "Kernel" -> Cos[2022*t]}, MaxRecursion -> 8] results in 8.75152, which is far from the right value. – user64494 Sep 27 '22 at 13:01
  • As shown in this answer, brute force with NIntegrate works, i.e. no symbolic processing, no singularity handler, large min and max recursions. (Also, NDSolve, shmendy-solve.) – Anton Antonov Sep 29 '22 at 00:31

6 Answers

7

Note: As the OP has pointed out in a comment, a similar calculation is available here on Math.SE.

The OP's integral can be written as $$ \int_0^{2\pi} f(50t)\, f(51t)\cos(2022 t)\, dt $$ with the auxiliary function

f[x_] := Sin[201*x]/Sin[x];

The division can be carried out explicitly, because (X^201-Y^201)/(X-Y) is a polynomial. The answer is $$ f(x) = e^{-200 ix} + e^{-198 ix} + \ldots + e^{-2ix} + 1 + e^{2ix} + \ldots + e^{198 ix} + e^{200 ix} $$ or as Mathematica code:

falternative[x_]:=Sum[Exp[I*k*x],{k,-200,200,2}];

To check this, use

f[x]-falternative[x]//FullSimplify
(* 0 *)

The OP's integrand is therefore

integrand = falternative[50*t]*falternative[51*t]*Cos[2022*t]//TrigToExp//Expand;

This returns a linear combination of terms of the form $e^{i \omega t}$ with integer $\omega$, and the coefficient of the $\omega = 0$ term is $3$. Since $\int_0^{2\pi} e^{i\omega t}\, dt$ vanishes for every nonzero integer $\omega$ and equals $2\pi$ for $\omega = 0$, the value of the integral is $$ \int_0^{2\pi} f(50t)\, f(51t) \cos(2022 t)\, dt = 3 \cdot 2\pi = 6\pi $$
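As an independent cross-check (the helper falt and the variable z below are my own notation, not part of the answer), one can treat E^(I t) as a formal variable z; the integrand then becomes a Laurent polynomial in z whose constant term is exactly the ω = 0 coefficient:

```mathematica
(* f(m t) with z standing in for E^(I t); Total over an explicit
   list of powers keeps the sum as a polynomial instead of letting
   Sum produce a closed geometric form *)
falt[z_, m_] := Total[z^(m*Range[-200, 200, 2])];

(* Cos[2022 t] becomes (z^2022 + z^-2022)/2 *)
poly = Expand[falt[z, 50]*falt[z, 51]*(z^2022 + z^(-2022))/2];

Coefficient[poly, z, 0]      (* constant Fourier coefficient: 3 *)
2*Pi*Coefficient[poly, z, 0] (* the integral: 6 Pi *)
```

The expansion has roughly 80,000 terms, so this takes a moment, but it avoids floating point entirely.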

user293787

  • +1. Thank you for your work. I knew the exact answer from math.stackexchange.com/questions/4508185, and that was a reason to ask. I am interested rather in its numeric calculation (the documentation presents examples of this type). – user64494 Sep 27 '22 at 10:53
  • Really interesting question and answer. EEs sometimes call sin(x)/x the "sinc" function, and there are variants based on pi, etc. – JosephDoggie Sep 27 '22 at 19:26
7

Here's a fast way:

NIntegrate[
  Cos[2022*t]*Sin[10050*t]*Sin[10251*t]/Sin[50*t]/Sin[51*t], {t, 0, 
   2*Pi}, MinRecursion -> 8, 
  Method -> {"GaussKronrodRule", "Points" -> 21}] // AbsoluteTiming

(* {0.224591, 18.849555920879993} *)

The integrand is analytic, and Gauss rules converge quickly on analytic functions, so a high-order rule should help. The integrand has about 44K zeros on [0, 2*Pi]. The 21-point Gauss-Kronrod rule has polynomial degree 3*21 + 2 = 65, and the interval is divided into 2^8 subintervals, so the rule can resolve at most about 2^8*65, or almost 17K, zeros. Since the result is nevertheless 5.999999999790308*Pi, something is working well despite the apparent undersampling; aliasing is a possible explanation for the unexpected accuracy, though I cannot prove it.
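Nothing in the answer says 21 points is special; a quick (hypothetical, not from the answer) way to probe how the accuracy depends on the rule order is to sweep the "Points" option and compare against the known value 6 Pi:

```mathematica
f[t_] := Cos[2022*t]*Sin[10050*t]*Sin[10251*t]/Sin[50*t]/Sin[51*t];
(* Quiet suppresses possible ncvb warnings from the lower-order rules *)
Table[{n, Quiet@NIntegrate[f[t], {t, 0, 2*Pi}, MinRecursion -> 8,
      Method -> {"GaussKronrodRule", "Points" -> n}] - 6*Pi},
  {n, {11, 21, 31}}]
```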

SolveValues[
  Cos[2022*t]*Sin[10050*t]*Sin[10251*t] == 0 && 0 <= t <= 2*Pi, 
  t] // Length

(* 44245 *)

NIntegrate[Cos[2022*t] Sin[10050 t] Sin[10251 t]/Sin[50*t]/Sin[51*t],
  Evaluate@Flatten@{t,
    SolveValues[Cos[2022*t] Sin[10050 t] Sin[10251 t] == 0 && 0 <= t <= 2*Pi, t]},
  MinRecursion -> 0, Method -> "GaussKronrodRule"] // AbsoluteTiming

(* {19.1047, 18.849555921538766} *)

The result is 6.000000000000003 * Pi, which is more accurate but takes much longer.

Goofy

  • +1. I accept your answer, but the others are also good and deep. – user64494 Sep 29 '22 at 05:28
  • It would be interesting to know whether this would be as effective if the big peak were at an unknown location, rather than at the start and end. – mikado Sep 29 '22 at 16:30
  • Perhaps you saw my answer where I used Reduce to find the zeroes. I did not check whether some of the other solutions from the output of Reduce that had auxiliary constants were distinct from the zeroes I used. I corrected my answer and checked that we found the same zeroes after sorting. Reduce is about 7 times faster for finding the zeroes but tedious to use to obtain numerical values. – userrandrand Sep 29 '22 at 17:26
  • In my new answer I also checked that using Gauss-Legendre points directly, without the overhead of NIntegrate in each sub-interval defined by the zeroes, has similar speed to the answer you provided. – userrandrand Sep 29 '22 at 17:28
  • @mikado there is a peak in the middle but Plot misses it if the number of points is not increased from the default. For example Plot[Cos[2022*t]*Sin[10050*t]*Sin[10251*t]/Sin[50*t]/Sin[51*t], {t, 0, 2*Pi}, PlotRange -> All, PlotPoints -> 4000]. – userrandrand Sep 29 '22 at 18:16
  • @mikado changing the upper bound to RandomReal[{1, 4}]*Pi/2, MinRecursion to 10 and MaxRecursion to 20 (for example), using Do, after evaluating the integral 100 times I found an average of 0.1 seconds per integral with no error using the method above. – userrandrand Sep 29 '22 at 18:26
  • @mikado This puts the peak in a random place and there seems to be no change in behavior: offset = RandomReal[{0, 2 Pi}]; NIntegrate[Cos[2022*t]*Sin[10050*t]*Sin[10251*t]/Sin[50*t]/Sin[51*t], {t, 0 + offset, 2*Pi + offset}, MinRecursion -> 8, Method -> {"GaussKronrodRule", "Points" -> 21}] // AbsoluteTiming. As I mentioned, my guess is that the key feature here is aliasing (integrating the major components of some spectral decomposition), not sampling, accidentally or otherwise, the major intervals of support or peaks. But I can't prove it. – Goofy Sep 30 '22 at 12:49
4

With patience, Monte Carlo methods can be effective:

NIntegrate[
 Cos[2022*t]*Sin[10050*t]*Sin[10251*t]/Sin[50*t]/Sin[51*t], {t, 0, 
  2*Pi}, AccuracyGoal -> 5, PrecisionGoal -> 5, MaxRecursion -> 10^6, 
 MaxPoints -> 10^8, Method -> "AdaptiveQuasiMonteCarlo"]
(* 18.8496 *)

% - 6 π
(* 2.48731*10^-8 *)

mikado

  • This is good, but there is no need for Monte Carlo -- just use "GlobalAdaptive" without symbolic pre-processing and without singularity handling. – Anton Antonov Sep 28 '22 at 23:55
  • +1. Unexpected accuracy of the Monte Carlo method. – user64494 Sep 29 '22 at 05:30
  • @AntonAntonov there are reasons not to use Monte Carlo as the last method you try for a difficult integral, but it is often a very good first choice. – mikado Sep 29 '22 at 18:25
  • I have never heard that rule of thumb for 1D integrals. :) (Only for dimensions larger than, say, 5...) – Anton Antonov Sep 29 '22 at 18:34
  • A few questions: 1. Do you have experience using Monte Carlo integration algorithms in non-WL systems? 2. If yes, have you seen other adaptive Monte Carlo algorithms? 3. Have you used -- and are you interested in -- importance sampling integration algorithms? – Anton Antonov Sep 29 '22 at 18:37
  • @AntonAntonov in principle, I am interested in those things. A major interest is in estimating the effectiveness of electronic systems (and optimising their design). In practice, average performance might be interesting (and computed by an integral). In practice, focusing on worst-case behaviour avoids the need to make lots of questionable judgements about the relative probability of various conditions. – mikado Sep 29 '22 at 19:46
  • This discussion is interesting. I admit that I always ignored Monte Carlo, as I never had to work with high-dimensional integrals or complicated geometries, but indeed, as the error term depends on a crude measure of the function, Monte Carlo seems like the most reliable choice. It would be interesting to use importance sampling weighted by the amplitude of the function, perhaps using the answer here or including a custom rule for NIntegrate. – userrandrand Sep 30 '22 at 21:24
4

The issue with this integrand seems to be its highly oscillatory nature, which makes it difficult to find the right step spacing. Below I give two methods for dealing with the small step sizes.

The integrand:

g[t_] = Cos[2022*t]*Sin[10050*t]*Sin[10251*t]/Sin[50*t]/Sin[51*t]
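As userrandrand notes in a comment elsewhere on this page, the oscillations and the narrow endpoint peaks only show up in a plot if the sampling density is raised well above the default:

```mathematica
(* PlotPoints -> 4000 forces sampling dense enough to reveal the narrow peaks *)
Plot[g[t], {t, 0, 2 Pi}, PlotRange -> All, PlotPoints -> 4000]
```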

First method: integration using NDSolve

One possibility is to use the adaptive methods of NDSolve to find the right step sizes:

Edit: As recommended in the comments, I changed the starting point from 10^(-12) to $MachineEpsilon/2.

AbsoluteTiming[
NDSolveValue[{v'[t] == g[t], v[$MachineEpsilon/2] == 0}, 
v[2 Pi], {t, $MachineEpsilon/2, 2 Pi}]/(6*Pi) - 1]

output: (*{0.391597, -2.92904*10^-6}*)

If we need more precision we can adjust the options of NDSolve, for example :

AbsoluteTiming[
NDSolveValue[{v'[t] == g[t], v[$MachineEpsilon/2] == 0}, 
v[2 Pi], {t, $MachineEpsilon/2, 2 Pi}, PrecisionGoal -> 10, 
AccuracyGoal -> 10, Method -> "ExplicitRungeKutta"]/(6*Pi) - 1] (* relative error *)

output: (*{0.438849, 1.79297*10^-10}*)

Second method: summing over sub-intervals specified by the zeroes of the function.

A more manual approach is to partition the interval at the zeroes of the integrand. Specifically, with x[r] the r-th zero, we consider the partition Interval[0, 2*Pi] = IntervalUnion[Interval[x[0], x[1]], Interval[x[1], x[2]], ...] (this is just a representation, not part of the code).

Find the zeroes:

Using Solve or SolveValues takes a lot of time, so Reduce is used instead; it is roughly seven times faster in this case:

a = Reduce[Cos[2022*t]*Sin[10050*t]*Sin[10251*t] == 0 && 0 <= t <= 2*Pi, t];

However, the output is a mixture of equalities and inequalities involving auxiliary constants. The code below extracts numerical zeroes from the output of Reduce:

zeroes = (List @@ a[[;; -5, 2]])~Join~
  DeleteDuplicates[Apply[Join, (#[[3, 2]] /. C[1] -> Range[0, #[[2, 5]]] &) /@ a[[-4 ;; -1]] ]]

Next, we use Gauss-Legendre points in each sub-interval. The most (computer-)efficient way might be to use

NIntegrate`GaussKronrodRuleData[points, precision]

as used in the documentation on integration rules, and then rescale the points to each sub-interval. However, for brevity and convenience, we will use GaussianQuadratureWeights from the NumericalDifferentialEquationAnalysis` package:

Needs["NumericalDifferentialEquationAnalysis`"]

points = (GaussianQuadratureWeights[21, ##] &) @@@ Partition[Sort@zeroes, 2, 1];

pointsf = SortBy[Join @@ points, First];

Total[pointsf[[All, 2]]g[pointsf[[All, 1]]]]/(6Pi) - 1 // AbsoluteTiming (* relative error *)

output: (* {0.151416, 8.65974*10^-13} *)

One should also account for the time needed to obtain the zeroes, but if the integral has to be evaluated for a range of frequencies, the symbolic Reduce step is a one-time cost.

userrandrand

  • Saw your answer now after posting a similar one. You could use v[$MachineEpsilon/2] == 0 instead, and this should produce a better result; no need to use a magic number like 10^(-12). – Nasser Sep 28 '22 at 23:40
  • +1. Very interesting and nonstandard. – user64494 Sep 29 '22 at 05:33
  • @user64494 I might have seen that method applied on stack exchange in the past but I do not remember well. I have used that method in the past and I am still a bit surprised how well it works. Quadrature methods have exponential convergence on analytical functions with a finite radius of convergence. The trapezoidal rule has exponential convergence on periodic functions with a finite radius of convergence and double exponential methods are fairly efficient in handling endpoint singularities. When no reasonably fast exponential convergence is possible, perhaps it is best to use NDSolve. – userrandrand Sep 29 '22 at 12:24
4

Stripping "GlobalAdaptive" of the "extras" produces correct results quickly:

AbsoluteTiming[
 res1 =
  NIntegrate[
   Cos[2022*t]*Sin[10050*t]*Sin[10251*t]/Sin[50*t]/Sin[51*t], {t, 0, 2*Pi}, PrecisionGoal -> 6, 
   Method -> {"GlobalAdaptive", MinRecursion -> 6, 
     MaxRecursion -> 1000, "MaxErrorIncreases" -> 10^6, 
     "SingularityDepth" -> Infinity, "SymbolicProcessing" -> False}]
 ]

(* {0.630764, 18.8496} *)

6 Pi - res1

(* 2.63789*10^-11 *)

AbsoluteTiming[
 res2 =
  NIntegrate[
   Cos[2022*t]*Sin[10050*t]*Sin[10251*t]/Sin[50*t]/Sin[51*t], {t, 0, 2*Pi}, PrecisionGoal -> 12, WorkingPrecision -> 30, 
   Method -> {"GlobalAdaptive", MinRecursion -> 6, 
     MaxRecursion -> 1000, "MaxErrorIncreases" -> 10^6, 
     "SingularityDepth" -> Infinity, "SymbolicProcessing" -> False}]
 ]

(* {7.55792, 18.8495559215387594307749777268} *)

6 Pi - res2

(* 8.825728*10^-22 *)

Let us verify using an alternative method, "LocalAdaptive", and alternative accuracy and precision goals (but still without symbolic pre-processing):

AbsoluteTiming[
 res3 = NIntegrate[
   Cos[2022*t]*Sin[10050*t]*Sin[10251*t]/Sin[50*t]/Sin[51*t], {t, 0, 2*Pi}, AccuracyGoal -> 12, PrecisionGoal -> 12, 
   MaxRecursion -> 10^6, MaxPoints -> 10^8, 
   Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}]]

(* {0.877531, 18.8496} *)

6 Pi - res3

(* -4.57681*10^-9 *)


Anton Antonov
E. Chan-López
  • +1 for your extremal options. – user64494 Sep 29 '22 at 05:41
  • @AntonAntonov I tested the methods proposed so far, and after replacing the upper bound 2 Pi by 10 Pi, this method seems to be the fastest and most precise. – userrandrand Sep 29 '22 at 19:25
  • @userrandrand Thanks. You can post another answer with a table that has the timings. – Anton Antonov Sep 29 '22 at 19:28
  • @AntonAntonov sorry, it seems that I just had to increase the number of points rather than bisections in the method provided by Goofy. I wrote an answer with the timings but then deleted it, as they are not enlightening other than the case of NDSolve, which fails starting at 8*Pi for an unknown reason. – userrandrand Sep 29 '22 at 23:06
3

It's kind of a pain, but you can break up the integral yourself.

integrand = (Cos[2022*t]*Sin[10050*t]*Sin[10251*t])/(Sin[50*t]*Sin[51*t])

To determine where one might break it up, we can plot pieces of the integrand.

Plot[integrand, {t, 0, .0005}, PlotRange -> All]

[plot of the integrand for 0 <= t <= 0.0005]

and the end has the same behavior.

Plot[integrand, {t, 2 \[Pi] - .0005, 2 \[Pi]}, PlotRange -> All]

[plot of the integrand for 2 \[Pi] - .0005 <= t <= 2 \[Pi]]

Those two plots show a substantial chunk of the integral value.

I broke NIntegrate up as follows:

First the two sections I plotted,

int1 = NIntegrate[integrand, {t, 0, .0005}]
(*5.671497870331018*)

int2 = NIntegrate[integrand, {t, 2 \[Pi] - .0005, 2 \[Pi]}]
(*5.671497870331018*)

The next two segments are 1) the end of the first segment to 0.1 Pi and 2) 1.9 Pi to the start of the last segment.

int3 = NIntegrate[integrand, {t, .0005, .1 \[Pi]}]
(*-0.8981984314029957*)

int4 = NIntegrate[integrand, {t, 1.9 \[Pi], 2 \[Pi] - .0005}]
(*-0.8981984314029957*)

and the final integral goes from 0.1 Pi to 1.9 Pi in 0.1 Pi increments.

int5 = Total@ Table[NIntegrate[integrand, {t, .1 \[Pi] i, .1 \[Pi] i + .1 \[Pi]}], {i, 1, 18}]
(*9.30295704371735*)

Add them all.

int = int1 + int2 + int3 + int4 + int5
(*18.84955592150306*)

Compare to the known value

6 \[Pi] // N
(*18.84955592153876*)

Pretty close. None of the sub-integrations produced failed-to-converge errors, so this method evidently works, though I have no idea why NIntegrate cannot do this subdivision automatically.
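The manual breakup above can be automated. Here is a sketch (my own variant, not from the answer; the sliver width eps and piece count are arbitrary choices): keep the two endpoint slivers and split the remainder into equal pieces.

```mathematica
integrand = (Cos[2022*t]*Sin[10050*t]*Sin[10251*t])/(Sin[50*t]*Sin[51*t]);
eps = 0.0005; pieces = 18;  (* roughly the same granularity as above *)
total = NIntegrate[integrand, {t, 0, eps}] +
   NIntegrate[integrand, {t, 2*Pi - eps, 2*Pi}] +
   Total@Table[
     NIntegrate[integrand, {t, eps + (2*Pi - 2*eps) (i - 1)/pieces,
       eps + (2*Pi - 2*eps) i/pieces}], {i, pieces}]
```

The total should again land close to 6 π ≈ 18.8496.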

Bill Watts