
In a nutshell

I have a large system of ODEs. When I integrate the system from x = 0 to x = dx, NDSolve returns the result happily; it takes about a minute, and MaxMemoryUsed tells me that about 100 MB was used to perform the computation. Since I have 8 GB of RAM, I ask NDSolve to integrate the system from x = 0 to x = 2 * dx, and it just crashes! I would expect the memory required by NDSolve to increase linearly with integration time; am I wrong?

Minimal Working Example

Background

I have an extremely large first-order ODE system (59582 equations) of the form $y'(x) = f[y(x)]$, where $y(x)$ is an array of reals with Dimensions {2, 31, 31, 31}. Despite its size, a compiled version of $f$ seems to compute the rhs pretty quickly, so I decided to give NDSolve a try. Because NDSolve evaluates the system symbolically, I create a CompiledFunction for $f$ and wrap it in another expression to protect it from symbolic evaluation and to pass some extra data to the CompiledFunction:

ClearAll[$rhs, rhs];
$rhs = Compile[
   {
    {d1, _Integer},
    {d2, _Integer},
    {d3, _Integer},
    {state, _Real, 4}
   },
   Table[
    -(i + j + k + l)*state[[i, j, k, l]],
    {i, 1, 2},
    {j, 1, d1},
    {k, 1, d2},
    {l, 1, d3}
   ],
   CompilationTarget -> "C"
];
rhs[{d1_, d2_, d3_}][y_ /; ArrayQ[y, 4, MachineNumberQ]] := $rhs[d1, d2, d3, y];
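As a quick sanity check (a minimal sketch; testState is a name I introduce here, not part of the original code): the test y_ /; ArrayQ[y, 4, MachineNumberQ] makes the wrapper fire only for numeric rank-4 arrays, so it evaluates for machine-real input but stays inert during NDSolve's symbolic preprocessing:

testState = RandomReal[1, {2, 31, 31, 31}];
Dimensions[rhs[{31, 31, 31}][testState]]
(* {2, 31, 31, 31} *)
rhs[{31, 31, 31}][y[x]]
(* returns unevaluated, because y[x] is not a rank-4 machine-real array *)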

The actual rhs is, of course, different from the one given above. I have chosen this one as an example that should not introduce any numerical problems (a linear system with negative real eigenvalues; what could be easier?).
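For reference, the toy system above decouples completely: each component satisfies $y_{ijkl}'(x) = -(i+j+k+l)\,y_{ijkl}(x)$, so the exact solution is $y_{ijkl}(x) = y_{ijkl}(0)\,e^{-(i+j+k+l)x}$, and the fastest mode decays at rate $2+31+31+31 = 95$. Here is a minimal helper implementing this, useful for spot-checking NDSolve's output (the name exactSolution is mine, not from the original post):

exactSolution[y0_ /; ArrayQ[y0, 4, MachineNumberQ], x_?NumericQ] := Table[
   Exp[-(i + j + k + l) x] y0[[i, j, k, l]],
   {i, Dimensions[y0][[1]]}, {j, Dimensions[y0][[2]]},
   {k, Dimensions[y0][[3]]}, {l, Dimensions[y0][[4]]}
  ];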

All the rest is straightforward:

Here is the code for NDSolve:

ClearAll[solve];
solve[rhs_, y0_, xmax_, opts : OptionsPattern[]] := NDSolve[
   {
    y'[x] == rhs[y[x]],
    y[0] == y0
   },
   y,
  {x, 0, xmax},
  FilterRules[{opts}, Options[NDSolve]]
];

Here is the code for the initial value:

BlockRandom[
 dims = {31, 31, 31};
 init = RandomReal[1, Prepend[dims, 2]];
 init = init/Sqrt[Total[init^2, Infinity]];,
 RandomSeeding -> 3
]

And here is the value of the integration time that leads to the crash on my system:

dx = 0.7;

I believe that this value depends on the system Mathematica is running on. By the way, mine is

$Version
(*"11.2.0 for Microsoft Windows (64-bit) (September 11, 2017)"*)

The Problem

For xmax up to dx, NDSolve works normally, and the memory it consumes increases roughly linearly with the integration time:

mems = MaxMemoryUsed[
   solve[
     rhs[dims],
     init,
     #
   ]
] & /@ Range[0., dx, dx/10]
(*6385992, 52857744, 64484696, 71772376, 62706016, 66520768, 70335840, 80827824, 82735200, 89411304, 96087344*)

So it takes about 100 MB to perform the computation for xmax = dx. Therefore, I wouldn't expect any problems for xmax = 2 * dx, but NDSolve just crashes the kernel for this value of xmax! Is it a bug, or do the resources required by NDSolve increase faster than linearly?

  • Why do you use a standard solver to solve such a large number of first-order equations? Use the explicit Euler method as the basis. According to my observations, NDSolve cannot solve such problems in general. It can break even on a hundred equations. – Alex Trounev Sep 20 '18 at 17:49
  • If the solution begins growing very rapidly, NDSolve will use progressively smaller time steps to maintain accuracy. So memory requirements can easily grow much faster than linearly, depending on the equations. – bbgodfrey Sep 21 '18 at 01:27
  • Just a side note: you can add RuntimeOptions -> {"EvaluateSymbolically" -> False}, or simply modify the definition to Compile[……]//Last given you've compiled to C, to avoid symbolic evaluation. – xzczd Sep 21 '18 at 07:22
  • In general, do not use NDSolve for this task. NDSolve is a black box. You never know how it will behave on such large problems with $10^4$ to $10^5$ equations. Write your own numerical solution program. – Alex Trounev Sep 21 '18 at 15:44
  • @AlexTrounev, it is exactly the viewpoint I am fighting against. I really believe that a usual physicist, who is not a professional mathematician, will achieve much better results by understanding deeply how to use professional tools like Mathematica properly rather than wasting time on childish C++ programs. But thank you very much! Changing Method to an explicit one (Runge-Kutta of any order) solved the problem; see the sketch after these comments. Now I *have* managed to make Mathematica solve a system of 60 thousand ODEs for me smoothly in less than a minute :) Should you write an answer, I will accept it. – Anton.Sakovich Sep 21 '18 at 15:54
  • Congratulations, but you do not know what you will get next time, or what surprise it will bring. Every program has its limitations. And it's good if you know these limitations in the case of NDSolve. – Alex Trounev Sep 21 '18 at 16:03
  • @AlexTrounev, the point is that I do not claim that Mathematica can solve any problem. What I do claim is that any problem that cannot be solved by Mathematica must be so complex that it also cannot be solved by the simple C++ programs one might expect from an average physicist. – Anton.Sakovich Sep 21 '18 at 16:07
  • You can write the code in the Wolfram Language without using C++. – Alex Trounev Sep 21 '18 at 16:21
  • @AlexTrounev, if you are interested in the discussion, we can move to chat so as not to flood the comments. I am afraid I have already violated some SE rule about that. – Anton.Sakovich Sep 21 '18 at 16:23
  • Write up how you solved the problem as an answer to your question, not in the comments; or edit your question. – Alex Trounev Sep 21 '18 at 16:28
  • I do not agree that NDSolve should not be able to solve such large problems. But I do agree that for such problems, you should not blindly use automatically selected methods. As Alex says, I would try to control the method directly and make sure that the step size isn't made unreasonably small. Here's how to control methods: http://reference.wolfram.com/language/tutorial/NDSolveOverview.html – Szabolcs Sep 23 '18 at 08:35
  • Actually, such big problems would be the perfect use case for a high-level solver like NDSolve: the bulk of the computation time should be spent on calculating the RHS anyway. That part (which is usually simple) can be programmed in C if needed. That is what is effectively being done here (with Compile). The complicated part is implementing the solver method, and that's exactly where it makes sense to rely on an existing implementation. If you get serious about this and decide to go all-C++, you'd likely still end up using a library that handles the method ... so why not NDSolve then? – Szabolcs Sep 23 '18 at 09:02
  • Perhaps this is also an issue: https://mathematica.stackexchange.com/a/181165/4999 – Michael E2 Sep 23 '18 at 15:17
  • @MichaelE2 I tried that, doesn't seem to be related. – xzczd Sep 24 '18 at 08:19
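For completeness, here is a minimal sketch of the change mentioned in the comment above, passed through the solve wrapper defined in the question (the specific difference order is my assumption; also note that xzczd's answer below reports an ndstf warning when an explicit Runge-Kutta method is used alone for xmax == 2 dx, and restricts MaxStepSize as well):

sol = solve[rhs[dims], init, 2 dx,
   Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4}];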

1 Answer


Using Method -> "ExplicitRungeKutta" alone causes an NDSolve::ndstf warning at about t == 0.52 for xmax == 2 dx, but it works well for xmax == dx on your sample. This observation leads me to the following solution:

MaxStepSize -> 0.01 resolves the problem, though I'm not quite sure about the reason:

test = solve[rhs[dims], init, 2 dx, MaxStepSize -> 0.01]; // AbsoluteTiming
(* {4.577509, Null} *)
MaxMemoryUsed[]/1024^2. MB
(* 223.226 MB*)
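One way to probe the reason (a sketch of my own, not part of the original answer; steps is a name I introduce, and StepMonitor is a standard NDSolve option that the solve wrapper passes through) is to record the step positions the default method takes for xmax == dx, where the run still succeeds, and see how small the steps already get:

steps = Reap[
     solve[rhs[dims], init, dx, StepMonitor :> Sow[x]]
    ][[2, 1]];
{Length[steps], Min[Differences[steps]]}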
xzczd
  • Thank you for your feedback. The maximal coefficient in the solution's exponent is 2 + 31*3 = 95, which corresponds to a characteristic damping time of about 0.01, so I would take a step ten times smaller than that (see the sketch below). NDSolve then still solves the system without a crash, but takes more than 1 GB of RAM. So I suppose @Szabolcs is right and the reason why NDSolve crashes the kernel is an uncontrolled decrease in step size. I don't have time to edit my question now; I will add all my conclusions sometime. – Anton.Sakovich Sep 24 '18 at 19:30
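A minimal sketch of the check described in the comment above (the exact call is my reconstruction; the comment only says the step should be roughly ten times smaller than the 1/95 damping time, which gives MaxStepSize -> 0.001):

(* the fastest mode decays at rate 2 + 3*31 == 95, i.e. with characteristic time 1/95, about 0.0105 *)
(* a ten times smaller step avoids the crash but, per the comment above, needs more than 1 GB of RAM *)
MaxMemoryUsed[solve[rhs[dims], init, 2 dx, MaxStepSize -> 0.001]]/1024^2. MB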