
I'm using some iterative arithmetic to calculate wave propagation with the help of NestList. I have to use a small step size in the iteration to guarantee accuracy, which leads to too much data (e.g., 10^6 points) for further plotting, and the result consumes too much memory. Is there a clever way to cope with this? I'd rather not use loops.

This is a simplified example:

NestList[Sin, 1, 10^6]

One idea of mine is to use Sow and Reap, but I have no idea how to integrate them with NestList or Nest.

Any ideas?

yulinlinyu

4 Answers


Due to insistent public demand:

If, in a sequence of iterates $\{x,f(x),f(f(x)),\dots\}$, one only needs every $k$-th iterate (say, for $k=3$, one wants $\{x,f(f(f(x))),f(f(f(f(f(f(x)))))),\dots\}$), then one can cleverly combine Nest[] and NestList[] like so:

NestList[Nest[f, #, k] &, start, n]

which yields a list containing the zeroth, $k$-th, $2k$-th, ... $nk$-th iterates.
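Applied to the Sin iteration from the question, this keeps only every 1000th iterate (the step size 1000 is an arbitrary choice here), so the plotting data shrinks from 10^6 + 1 points to 1001:

NestList[Nest[Sin, #, 1000] &, 1., 1000] // Length
(* ==> 1001 *)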

J. M.'s missing motivation
Fold[f[#1] &, x, Range[#]] & /@ Range[0, 9, 3]
(* or *)
Nest[f, x, #] & /@ Range[0, 9, 3]
(* both give: *)
{x, f[f[f[x]]], f[f[f[f[f[f[x]]]]]], f[f[f[f[f[f[f[f[f[x]]]]]]]]]}

EDIT: a variation on the second method:

nestSkip[f_, x_, stepsize_Integer, numsteps_Integer] := 
   Nest[f, x, stepsize #] & /@ Range[0, numsteps]
(* examples: *)
nestSkip[g, y, 2, 2]
(* ==>  {y, g[g[y]], g[g[g[g[y]]]]} *)
nestSkip[# + 5 &, 2, 3, 3]
(* ==> {2, 17, 32, 47}  *)
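For what it's worth, a quick symbolic check that nestSkip agrees with the NestList[Nest[...]] combination from the accepted answer:

nestSkip[f, x, 3, 3] === NestList[Nest[f, #, 3] &, x, 3]
(* ==> True *)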
kglr

J. M.'s method is elegant, but in some cases it is not optimal, because the inner Nest[f, #, k] & does not evaluate the same way as an explicit series of function calls. In advantageous cases, expanding the inner operation in advance can yield very large performance gains.

jmNest[f_, k_, n_, start_] :=
  NestList[Nest[f, #, k] &, start, n]

wNest[f_, k_, n_, start_] :=
  NestList[Evaluate @ Nest[f, #, k] &, start, n]

A favorable test case:

jmNest[1 + # &, 200, 1*^5, 10`] // RepeatedTiming // First
wNest[1 + # &, 200, 1*^5, 10`]  // RepeatedTiming // First
0.2407

0.00191

Here we get more than two orders of magnitude improvement, and it is easy to understand why: 200 applications of 1 + # & reduce to the single function 200 + # &.
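The reduction can be seen directly; a small sketch with five steps instead of 200:

Nest[1 + # &, x, 5]
(* ==> 5 + x *)
Evaluate @ Nest[1 + # &, #, 5] &
(* ==> 5 + #1 & *)

so each outer NestList step performs a single addition rather than five (or 200) nested function calls.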

In a less trivial case such as three applications of Sin, which does not reduce to a simpler formula, we still have an improvement:

jmNest[Sin, 3, 1*^6, 10`] // RepeatedTiming // First
wNest[Sin, 3, 1*^6, 10`]  // RepeatedTiming // First
0.08481

0.05829

There are of course pathological cases as well where the symbolic expansion is far uglier than the sequential application:

jmNest[3 + # + Sqrt[# + 7] &, 15, 30, 10`] // RepeatedTiming // First
wNest[3 + # + Sqrt[# + 7] &, 15, 30, 10`]  // RepeatedTiming // First
0.000460

0.703

I cannot think of a simple way to intelligently select between the original and pre-expansion methods that does not introduce significant overhead. There is enough potential difference in speed that perhaps a ParallelTry is worth that overhead, if unused cores are available.
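A sketch of that idea, assuming jmNest and wNest as defined above (the name autoNest is made up here): ParallelTry runs both methods on separate kernels and returns whichever finishes first:

(* may require DistributeDefinitions[jmNest, wNest] so the
   parallel kernels know both definitions *)
autoNest[f_, k_, n_, start_] :=
 ParallelTry[#[f, k, n, start] &, {jmNest, wNest}]

The slower evaluation is abandoned, at the cost of occupying a second kernel plus the usual parallelization overhead.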

Mr.Wizard

Certainly not as elegant as J. M.'s excellent solution, but it does use Sow and Reap as the OP requested, and it avoids repeated recalculation of intermediate results.

Clear[selectiveNestList];
selectiveNestList[function_, step_, iterations_, initialValue_] :=
 Reap[
    Nest[
      With[{s = function[First@#]},
        If[Mod[Last@#, step] === 0, Sow[s]];
        {s, Last@# + 1}] &,
      {initialValue, 0}, iterations]
   ][[2, All, All, 1]] // First

And in use:

selectiveNestList[Cos, 2, 9, 4]

{4, Cos[Cos[4]], Cos[Cos[Cos[Cos[4]]]], Cos[Cos[Cos[Cos[Cos[Cos[4]]]]]], Cos[Cos[Cos[Cos[Cos[Cos[Cos[Cos[4]]]]]]]]}

image_doctor