
I am attempting to find the (unstable) periodic orbits of a chaotic 2D map using the method described in this paper. The details aren't really relevant, but the method basically comes down to finding the equilibria in a set of ODEs derived from the original map. This can be a slow process -- a typical attractor of the class of maps I'm looking at will require finding equilibria of a $p$-times nested map to get all the period $p$ points. For $p = 3$ this can be well over 1000 points.

After much fiddling about, I have managed to get the time for this down to about five minutes (for period-3 points). But I'm hoping to analyse many attractors in this way, and possibly to look beyond period 3, so it would be nice to have it faster, if at all possible.

I have looked at various other answers on Stack Exchange, most notably this one and this one (because I'm only interested in the final state, not the InterpolatingFunction part). But I can't find anything that gives me any further improvement.

Here is what I have so far. In what follows $f$ is the mapping being studied, $m$ is the set of matrices required to stabilize the orbits, $p$ is the period of the unstable orbits to be found, and $v$ is the derived vector field whose equilibria correspond to the $p$-periodic points of $f$.

f[{x_, y_}] = {
   0.5 - 2.11803 Sin[0.238317 - 0.0526868 x^3 + x^2 (0.718304 + 0.273453 y) + x (-0.236563 + (0.0619632 - 0.435944 y) y) + y (-0.716707 + (-0.303632 + 0.215178 y) y)],
   0.5 - 2.11803 Sin[0.238317 + 0.216896 x^3 + x^2 (-0.306057 - 0.439426 y) + y (-0.230838 + (0.720234 - 0.0531077 y) y) + x (-0.722432 + (0.0624582 + 0.275637 y) y)]}
m = {{{1, 0}, {0, 1}}, {{-1, 0}, {0, 1}}, {{1, 0}, {0, -1}}, {{0, -1}, {-1, 0}}, {{0, 1}, {1, 0}}};
v[k_, p_, {x_, y_}] := m[[k]].(Nest[f, {x, y}, p] - {x, y})
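
As a quick sanity check (not part of the timing problem), an equilibrium of v is just a fixed point of the $p$-fold iterate of $f$, so for period 2 a returned point should behave like the following (the point below is a made-up stand-in, not an actual result):

(* sanity check with an arbitrary sample point; for a genuine period-2
   point both quantities below would be numerically zero *)
pt = {0.3, -0.1};          (* stand-in for a point returned by the search *)
Norm[Nest[f, pt, 2] - pt]  (* distance from being a fixed point of f applied twice *)
v[1, 2, pt]                (* the k = 1 vector field, zero at an equilibrium *)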

This is a stripped down version of what I've been using to find the equilibria. Hopefully the parameters are self-explanatory... (numics specifies the square root of the number of initial conditions, which are spread uniformly across the relevant region of phase space. There are more equilibria for higher values of $p$, so more initial conditions are required to find them... not that there is ever a guarantee that you've found them all.)

\[Epsilon] = 10.^-8; tmax = 100; p = 2; numics =  7 2^(p - 1);
initcons = Flatten[Table[{x0, y0}, {x0, -2, 3, 5/(numics - 1)}, {y0, -2, 3, 5/(numics - 1)}], 1];
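
(Just for orientation, the grid itself can be inspected directly; p = 2 here as above.)

Length[initcons]                      (* numics^2 = 14^2 = 196 starting points *)
ListPlot[initcons, AspectRatio -> 1]  (* uniform grid over [-2, 3] x [-2, 3] *)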

equilibria = ParallelTable[
  With[{g = Simplify[v[k, p, #] &, Trig -> False]}, 
    First@Last@Reap[
      NDSolve[{{x'[t], y'[t]} == g[{x[t], y[t]}], {x[0], y[0]} == #,
        WhenEvent[
         Norm[{x'[t], y'[t]}] <= \[Epsilon] || 
          x[t] <= -3 || x[t] >= 4 || y[t] <= -3 || y[t] >= 4, 
         (* sow the point only when the flow has stalled, not when the
            trajectory has left the region of interest *)
         If[Norm[{x'[t], y'[t]}] <= \[Epsilon], 
           Sow[{x[t], y[t]}]]; "StopIntegration", 
         "LocationMethod" -> "StepEnd"
        ]}, {x, y}, {t, 0, tmax}
      ] & /@ initcons
    ]
  ], 
{k, 5}];

I'm hoping there might be speed gains to be made either in the execution of NDSolve and WhenEvent, or in the way the vector field is specified in $g$. I was inspired to post this question when I discovered, to my surprise, that changing my original g = Simplify[v[k, p, #], Trig -> False] & to g = Simplify[v[k, p, #] &, Trig -> False] gave me a time reduction of about 30%. It wasn't (and still isn't) at all obvious to me why that should help so much, so I'm hoping there might be other possibilities along those lines.
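
For reference, here is a crude way to probe that difference outside NDSolve (gPerCall and gUpFront are just placeholder names, and direct-call timings need not match what happens inside NDSolve):

gPerCall = Simplify[v[1, 2, #], Trig -> False] &;  (* Simplify re-runs on every call *)
gUpFront = Simplify[v[1, 2, #] &, Trig -> False];  (* the form that turned out faster in the full run *)
First@RepeatedTiming[gPerCall[{0.1, 0.2}]]
First@RepeatedTiming[gUpFront[{0.1, 0.2}]]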

aardvark2012

1 Answer


Replacing the second block of code in the question by

ϵ = 10.^-8; tmax = 100; p = 2; numics =  7 2^(p - 1);
initcons = Flatten[Table[{x0, y0}, {x0, -2, 3, 5/(numics - 1)}, {y0, -2, 3, 
    5/(numics - 1)}], 1];
nestf = Simplify[Nest[f, {x[t], y[t]}, p] - {x[t], y[t]}, Trig -> False];

equilibria = ParallelTable[
    First@Last@Reap[NDSolve[{{x'[t], y'[t]} == m[[k]].nestf, {x[0], y[0]} == #,
        WhenEvent[Norm[{x'[t], y'[t]}] < ϵ, {Sow[{x[t], y[t]}], "StopIntegration"}],
        WhenEvent[(x[t] < -3 || x[t] > 4 || y[t] < -3 || y[t] > 4), "StopIntegration"]},
        {}, {t, 0, tmax}, Method -> "Adams"] & /@ initcons], {k, 5}];

decreases run-time by almost 25%. More than half of the improvement results from using Method -> "Adams", with the rest from removing evaluation of v from within ParallelTable and from instructing NDSolve not to construct interpolation functions (by replacing {x, y} with {}).
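
If you want to see how much of that comes from the integration method alone, a stripped-down sweep can be timed with each method (timeSweep is merely an illustrative helper, not part of the code above):

timeSweep[method_] := First@AbsoluteTiming[ParallelTable[
     NDSolve[{{x'[t], y'[t]} == m[[k]].nestf, {x[0], y[0]} == #,
        WhenEvent[Norm[{x'[t], y'[t]}] < ϵ, "StopIntegration"],
        WhenEvent[x[t] < -3 || x[t] > 4 || y[t] < -3 || y[t] > 4, "StopIntegration"]},
       {}, {t, 0, tmax}, Method -> method] & /@ initcons, {k, 5}];]

timeSweep[Automatic]  (* NDSolve's default method selection *)
timeSweep["Adams"]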

Note that using five values of k in ParallelTable is inefficient on a 4-processor computer. Restructuring the loops within ParallelTable might gain an additional 20% or so in run-time, if all processors can be fully utilized.

Addendum

The suggestion immediately above can be implemented with

equilibria = Quiet@Table[ParallelMap[
    Quiet@Last[Reap[NDSolve[{{x'[t], y'[t]} == m[[k]].nestf, {x[0], y[0]} == #,
    WhenEvent[Norm[{x'[t], y'[t]}] < ϵ, {Sow[{x[t], y[t]}], "StopIntegration"}],
    WhenEvent[x[t] < -3 || x[t] > 4 || y[t] < -3 || y[t] > 4, "StopIntegration"]},
    {}, {t, 0, tmax}, Method -> "Adams"]]][[1, 1]] &, 
    initcons, Method -> "CoarsestGrained"], {k, 5}];

In all, the reduction in run-time is over 30%.
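
Whichever variant you use, many initial conditions converge to the same equilibrium and some runs find nothing, so a little post-processing is needed. A minimal sketch (tol, periodicPoints, and the tolerances are arbitrary placeholders):

tol = 10.^-4;  (* merging tolerance, chosen arbitrarily *)
(* keep only successfully sown {x, y} pairs, then merge near-duplicates *)
pts = DeleteDuplicates[Cases[Flatten[equilibria, 1], {_Real, _Real}],
   Norm[#1 - #2] < tol &];
(* retain points that are numerically fixed under p applications of f *)
periodicPoints = Select[pts, Norm[Nest[f, #, p] - #] < 10.^-6 &];
Length[periodicPoints]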

bbgodfrey
  • Fantastic. That's exactly what I was looking for. I get about a 65% time reduction, and a lot to learn from your answer. – aardvark2012 May 07 '17 at 08:52