
I am trying to integrate a two-dimensional function purely numerically. In one coordinate the integrand can oscillate rapidly, while in the other coordinate it does not behave in any special way. Since Mathematica only offers special integration routines for oscillatory integrands in 1D, I figured it would be a good idea to nest two NIntegrates. As a minimal example, consider the function f[x, y, a], integrated over a square with edge length max:

f[x_, y_, a_] := Exp[-y^2] Sin[a x]
(* inner integral over the oscillatory coordinate x, outer integral over y *)
fint[a_, max_] :=
 NIntegrate[
  NIntegrate[f[x, y, a], {x, 0, max}, Method -> "LocalAdaptive"],
  {y, 0, max}]

For my calculation it is important to leave "SymbolicProcessing" enabled, since it speeds up the calculation by a factor of roughly 100 (for comparison, a variant with the symbolic preprocessing switched off is sketched after the code below). Now, if I want to integrate f[x, y, a] for different values of a, the following code does the trick:

max0 := 10
alist := N@Subdivide[1, 1000, 50]
result := Table[fint[a, max0], {a, alist}] 
result
WaitAll[result]
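For completeness, this is roughly what the variant without the symbolic preprocessing would look like (fintNoSym is just a name I made up here; in my real calculation this avoids the memory build-up but is far too slow to be useful):

(* variant with symbolic preprocessing switched off; avoids the memory
   build-up for me, but the integration becomes roughly 100 times slower *)
fintNoSym[a_, max_] :=
 NIntegrate[
  NIntegrate[f[x, y, a], {x, 0, max},
   Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}],
  {y, 0, max}]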

In my case a single integration can take anywhere from several minutes up to an hour. Over time the memory fills up until the kernel crashes because there is no free memory left. As far as I understand, Mathematica does not automatically clear the cache of previous (finished) calculations, since it expects to use the results again later. Apparently, the best solution to this issue would be to make Mathematica close the subkernel after each evaluation in Table and start a fresh subkernel for the next one. However, I am not very experienced with Mathematica and I don't know how to do this.
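Roughly, what I imagine is something along the lines of the following sketch (untested; resultFresh is just a name I made up, and I am not sure whether LaunchKernels, DistributeDefinitions, ParallelEvaluate and CloseKernels are the right combination for this):

(* sketch: launch a fresh subkernel for each parameter value, evaluate the
   integral there, then close the subkernel so that whatever memory it has
   accumulated is released together with it; assumes no other subkernels
   are running *)
resultFresh = Table[
   Module[{res},
    LaunchKernels[1];                          (* start one fresh subkernel *)
    DistributeDefinitions[f, fint];            (* make f and fint known there *)
    res = First @ With[{aval = a, mx = max0},  (* inject numeric values, since *)
       ParallelEvaluate[fint[aval, mx]]];      (* ParallelEvaluate holds its argument *)
    CloseKernels[];                            (* quit the subkernel, freeing its memory *)
    res],
   {a, alist}];

Launching a kernel takes a few seconds, but that should be negligible compared to integrations that run for minutes or hours.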

I would be very grateful if somebody could help me and show me how it is done for my particular problem.

Edit 1: I think the problem is still not entirely clear, so let me elaborate. If you perform an integration with NIntegrate and leave "SymbolicProcessing" on, NIntegrate first tries some symbolic "magic" to speed up the integration. To my knowledge, this symbolic processing costs a lot of memory. Now, if you use Table to calculate the integral for different parameters, all the intermediate results of the symbolic processing etc. are kept in memory and never deleted. Thus, if the computation takes a long time (and hence needs a lot of memory), you eventually run out of memory. A small check that shows this is sketched below.
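Just to illustrate what I mean (memlog is only an illustrative name, and the numbers will of course differ from machine to machine): repeatedly evaluating the same integral in one kernel and recording MemoryInUse[] after each run shows the memory growing, e.g.

(* record the main-kernel memory after each of a few repeated integrations;
   for me the reported value keeps growing from run to run even though no
   new results are being stored *)
memlog = Table[fint[100., max0]; MemoryInUse[], {5}]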

Phenoxim
  • The code above will also lead to a memory issue, but only slowly. You could for example rerun result and WaitAll[result] a couple of times and check the memory using MemoryInUse[]. You will see that the memory slowly fills up. I tried using ClearSystemCache and $HistoryLength without success. As far as I understand, the only way to fully clear the memory is to close the kernel and launch a new one. Maybe this is incorrect? – Phenoxim Dec 18 '22 at 09:35
  • "Exit[]" will close the kernel. – Daniel Huber Dec 18 '22 at 09:53
  • Would you be able to elaborate on the cases in which the integration takes so much time? Your given example evaluates in about 1 s on my machine, so I don't see the problem. "13.1.0 for Microsoft Windows (64-bit) (June 16, 2022)" – rowsi Dec 18 '22 at 10:05
  • Can you confirm that this already happens on single core (without ParallelSubmit...) and does not happen when Method -> "LocalAdaptive" is removed? – user293787 Dec 18 '22 at 10:15
  • @rowsi The given example is a very simplified version of what I am trying to integrate. I am trying to integrate a Fourier transform of a relatively complicated function over a large region of space. Unfortunately, I cannot provide you with my integrand. I think the problem is that after each evaluation Mathematica keeps the data it has generated for later use, slowly filling the memory. – Phenoxim Dec 18 '22 at 10:16
  • @user293787 Yes. I tried using Table and I set Method-> {"LocalAdaptive","SymbolicProcessing"->0}. Then there are no memory issues but the code is very slow. – Phenoxim Dec 18 '22 at 10:19
  • @user293787 Thanks for the tip. I erased the parallelization stuff. – Phenoxim Dec 18 '22 at 10:38
  • To see whether NIntegrate perhaps does leak memory: if I use the code from here, Block[{$HistoryLength = 0}, ClearSystemCache[]; Take[Table[before = MemoryInUse[]; NIntegrate[E^-Abs[y - RandomReal[]], {y, -.5, .5}, Method -> "LocalAdaptive"]; ClearSystemCache[]; MemoryInUse[] - before, {1000}], -20]] (I only added Method -> "LocalAdaptive"), there does seem to be a memory leak. – user293787 Dec 18 '22 at 10:44
  • @user293787 I think what happens is that $HistoryLength=0 and ClearSystemCache[] do not fully clear the memory. I read somewhere else (unfortunately I forgot where) that only closing the subkernel after each evaluation and starting a new subkernel for the next evaluation clears the memory fully. – Phenoxim Dec 18 '22 at 10:57
  • Maybe try this technique, setting ClearEvaluationQueueOnKernelQuit to False: https://mathematica.stackexchange.com/a/249350/363 – Chris Degnen Dec 18 '22 at 11:38
  • @ChrisDegnen Thank you. This already goes in the right direction. Now I need to figure out how to close the kernel after one evaluation is finished and how to start a new one for the next evaluation. – Phenoxim Dec 18 '22 at 11:58
  • There is also this method, which runs Mathematica from a batch file. The batch file opens one kernel, which opens and runs a notebook in another kernel. It runs the batch even if a notebook hangs. – Chris Degnen Dec 18 '22 at 14:22

0 Answers