
I have a list of data of the form {{{x, y, z}, f}, ...}, from which I construct a 3D interpolating function using Interpolation[]. From the gradient of this scalar field I construct a vector field, and I have verified that it looks correct by plotting it.

I wish to find the 'attractors' of this vector field.

By 'attractor' I mean the points at which a particle would end up as $t \rightarrow \infty$ by following the field, i.e. it flows down the gradient field to these points. I think attractor is the right name for this; please let me know if I am using the wrong word. I have also seen them referred to as $\omega$-limits.
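To be concrete, the flow I have in mind is the gradient flow $\dot{\mathbf{x}}(t) = -\nabla \phi(\mathbf{x}(t))$ on the interpolated scalar field $\phi$. Any limit point of such a trajectory is a zero of $\nabla \phi$; among the zeros, the attractors are the local minima of $\phi$, the sources ($\alpha$-limits) are the local maxima, and saddle points are neither.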

I would like to use Mathematica to find all of these points, and here I begin to struggle.
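(dat itself is not shown in this post; for anyone wanting to run the code, here is a hypothetical stand-in with the same column layout, coordinates in columns 1-3 and the scalar value in column 7, giving a field with two minima at $(\pm 1, 0, 0)$:)

(* Hypothetical stand-in for the real data set *)
dat = Flatten[
    Table[{x, y, z, 0, 0, 0, (x^2 - 1)^2 + y^2 + z^2},
        {x, -4, 4, 0.5}, {y, -4, 4, 0.5}, {z, -4, 4, 0.5}], 2];

With data of that shape, my initial attempt was: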

(* Build {point, value} pairs; the scalar value sits in column 7 of dat *)
interp = Table[
    {{dat[[i, 1]], dat[[i, 2]], dat[[i, 3]]}, 1/(1 + dat[[i, 7]])},
    {i, 1, Length[dat]}
]

intf = Interpolation[interp]

(* Gradient of the interpolant; Evaluate computes the derivative once,
   at definition time *)
intfd[x_, y_, z_] := Evaluate[D[intf[x, y, z], {{x, y, z}}]]

(* Search for zero-gradient points from 20 differently seeded runs *)
Table[
    NMinimize[
        {Norm[intfd[x, y, z]], -4 <= x <= 4, -4 <= y <= 4, -4 <= z <= 4},
        {{x, -5, 5}, {y, -5, 5}, {z, -5, 5}},
        Method -> {"NelderMead", "RandomSeed" -> i}
    ],
    {i, 1, 20}
]

The idea was to find the points of zero gradient. I'm aware this will also find the $\alpha$-limits (the sources of the field, from which a particle flows as $t \rightarrow -\infty$) and saddle points, and I'm not even 100% sure that Norm[V[x, y, z]] == 0 on its own characterises either kind of limit.

Also, this is a crude stochastic method that simply finds as many points as it can. Although I can intuit where the points should be ahead of time and so verify I have found them all, I would like a general solution.

Can anyone suggest a better method for finding the attractors of the field?

  • Perhaps you could use ContourPlot and plot the zero-contours of the norm of the vector field to start with. Again, this will find points other than the "attractors", but it's a start, maybe. Of course, your functions are functions of three variables? Then this might not work. – march Sep 14 '15 at 16:49
  • Is there anything in this Q&A that might be of help? Basins of attraction – MarcoB Sep 14 '15 at 16:49
  • Wouldn't these points just be the local minima of the function? Why not just use FindMinimum on intf? – Michael Seifert Sep 15 '15 at 15:59
  • Finding the maxima of the function intf turned out to be the most stable way to do this, thanks Michael for the suggestion; a sketch of that approach follows below. – JamesFurness Sep 25 '15 at 12:25
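For reference, a minimal sketch of the approach from that last comment: run FindMaximum on intf itself from a coarse grid of seeds, then merge duplicates (the grid spacing and merge radius here are guesses, not values from the original post):

(* Seed FindMaximum on the interpolant from a coarse grid of starting points *)
maxima = Flatten[
    Table[
        FindMaximum[
            {intf[x, y, z], -4 <= x <= 4, -4 <= y <= 4, -4 <= z <= 4},
            {{x, i}, {y, j}, {z, k}}],
        {i, -4, 4, 2}, {j, -4, 4, 2}, {k, -4, 4, 2}], 2];

(* Merge runs that converged to numerically the same maximum *)
points = Union[{x, y, z} /. maxima[[All, 2]],
    SameTest -> (Norm[#1 - #2] < 0.1 &)];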

1 Answer


With a fresh head I have constructed a somewhat satisfactory solution to this problem.

Rather than searching stochastically from random seeds, we can use the "DifferentialEvolution" NMinimize method and start from a grid of points covering the space. Each start should then minimize downhill to one of the possible attractors.

Continuing from my previous code for the gradient field:

(* One "DifferentialEvolution" search per cell of a coarse grid of boxes *)
min = ParallelTable[
    NMinimize[{
            Norm[intfd[x, y, z]],
            -4 <= x <= 4, -4 <= y <= 4, -4 <= z <= 4
        },
        {
            {x, i - 0.5, i + 0.5},
            {y, j - 0.5, j + 0.5},
            {z, k - 0.5, k + 0.5}
        },
        Method -> "DifferentialEvolution"],
    {k, -4, 4, 2}, {j, -4, 4, 2}, {i, -4, 4, 2}];

(* Keep the searches that found a zero of the gradient. The pattern must be
   written inline: stored in a variable, Condition holds it unevaluated and
   nothing matches. Exact res == 0 may also be too strict for machine
   numbers; something like res <= 10^-6 is a safer test. *)
zeros = Cases[Flatten[min, 2], {res_, coord_} /; res == 0];

We can verify these solutions using:

(* Overlay the located zeros as green spheres on the gradient field *)
Show[
  Join[
    {VectorPlot3D[intfd[x, y, z], {x, -5, 5}, {y, -5, 5}, {z, -5, 5}]},
    Table[
      Graphics3D[{Green, Sphere[{x, y, z} /. zeros[[i, 2]], 0.25]}],
      {i, 1, Length[zeros]}]
  ]
]

(Image: attractors found by the grid-of-boxes method)

Clearly this is not ideal: around each attractor there is a basin where the gradient is very nearly zero, so the located points are clustered rather than identical. A finer grid of data may help with this.
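One way to collapse each cluster to a single representative point (a sketch; the 0.5 merge radius is an arbitrary choice, not from the original run):

(* Extract the coordinates, then average points closer than the radius *)
pts = {x, y, z} /. zeros[[All, 2]];
attractors = Mean /@ Gather[pts, Norm[#1 - #2] < 0.5 &];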

I also expect some more attractors to exist between the ones found, since the 1-unit-wide start boxes sit on a grid of spacing 2. Again, finer data and a finer mesh of minimizer starting points may help here.

I'm still open to more refined suggestions for this problem if anyone has any!

EDIT:

I've refined the method a bit to a point where I am happy with the result.

Instead of running NMinimize on the norm of the gradient, I now use a recursive gradient-walk function that steps along the gradient until the norm falls below a tolerance. This is faster, and it has the advantage that the $\alpha$-limits can also be found by supplying a step multiplier of -1. The function is:

recurWalk[vec_, sPos_, mult_, tol_, maxIt_] :=
 Module[{g = vec @@ sPos},
  If[Norm[g] <= tol || maxIt <= 1 || Max[Abs[sPos]] > 5,

   (* Converged, out of iterations, or walked outside the [-5, 5]^3 box:
      return {norm, {x -> ..., y -> ..., z -> ...}} *)
   {Norm[g], Thread[{x, y, z} -> sPos]},

   (* Otherwise take one step along the field and recurse;
      mult = -1 walks the field backwards to find the α-limits *)
   recurWalk[vec, sPos + mult*g, mult, tol, maxIt - 1]
  ]
 ]

The successes are filtered from the failures in the same way as before.
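Concretely, a sketch of how the walker might be launched (the seed spacing, tolerance, and iteration cap here are illustrative guesses, not the values from my actual run):

(* Launch a walk from every seed and keep the ones that converged *)
seeds = Flatten[
    Table[{i, j, k}, {i, -4, 4, 1}, {j, -4, 4, 1}, {k, -4, 4, 1}], 2];
walks = recurWalk[intfd, #, 1, 10^-3, 500] & /@ seeds;
found = Cases[walks, {res_, _} /; res <= 10^-3];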

This appears to be much faster and I can now produce:

(Image: results of the recursive gradient walk)

Comments and further improvements are still welcome!
