
In the 4D Hénon-Heiles system, it is well known that for certain parameters the motion is quasi-periodic and lies on a 2D invariant torus. I am wondering how we can plot this actual torus (embedded in 3D) by somehow projecting all four components to some 3D space, to see whether a torus-shaped object pops out.

I have seen people visualize this 2D torus simply by choosing 2 or 3 out of the 4 variables and plotting. An example of the former is here (look for the quasi-periodic case). An example of the latter is:

[figure: 3D plot of the torus using three of the four variables]

where the author says (in the highlighted text) that the visualized torus is not the "whole torus of motion", since it is plotted using only 3 of the system's 4 variables.

I have also seen machine-learning-oriented "nonlinear dimensionality reduction" or "manifold learning" methods such as ISOMAP. However, I do not think these are the relevant tools: they depend on parameter choices such as the number of neighbors to consider, and some of them are even stochastic, so the output changes every time you run them. The 2D torus I am after is a fundamental, concrete property of the system, and its recovery shouldn't be stochastic or depend on a choice of parameters.

This seems like a straightforward thing to do, but so far I have been stuck on even finding an example where the complete Hénon-Heiles 2D torus is visualized (which must involve somehow projecting to a 3D space using all 4 variables).
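For concreteness, here is a minimal sketch of my current setup, assuming the standard Hamiltonian $H = \frac{1}{2}(p_x^2 + p_y^2) + \frac{1}{2}(x^2 + y^2) + x^2 y - \frac{1}{3}y^3$. The initial condition is a guess aimed at energy $E \approx 1/12$, where most orbits are regular, and it ends with the naive "3 out of 4" plot I want to improve on:

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Henon-Heiles equations of motion for
#   H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3
def rhs(t, s):
    x, y, px, py = s
    return [px, py, -x - 2*x*y, -y - x**2 + y**2]

# Assumption: initial condition chosen so that E ~ 1/12, where most orbits
# are quasi-periodic; it may need tuning to land on a nice torus.
s0 = [0.0, 0.1, 0.3966, 0.0]
t_eval = np.arange(0.0, 3000.0, 0.05)   # uniform sampling, dt = 0.05
sol = solve_ivp(rhs, (0.0, 3000.0), s0, t_eval=t_eval, rtol=1e-10, atol=1e-12)
x, y, px, py = sol.y

# The naive "3 out of 4 variables" plot that I want to improve on:
ax = plt.figure().add_subplot(projection='3d')
ax.plot(x, y, px, lw=0.2)
ax.set_xlabel('x'); ax.set_ylabel('y'); ax.set_zlabel('px')
plt.show()
```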

Axel Wang
  • Perhaps projecting onto the first three left singular vectors/principal components may be informative? This is basically a linear subcase of what a lot of machine learning algorithms do, but it can help with visualization in many cases. – whpowell96 Dec 29 '23 at 15:54
  • Why would you say this is a "linear subcase"? Even for a closed curve on a 2D torus, say the trefoil knot, the geometry isn't linear. – Axel Wang Dec 30 '23 at 00:56
  • Also, I am not sure that any of the machine-learning dimensionality reduction methods actually recovers the low-dimensional attractor when the high-dimensional dataset has one. They project the high-dimensional data to a lower-dimensional space while respecting certain features of the dataset, but I have not seen evidence that they preserve the shape of the attractor. – Axel Wang Dec 30 '23 at 01:02
  • I think I misunderstood your question. It seems that you are interested in an embedding of this attractor into $\mathbb{R}^3$, as opposed to some "extrinsic" way of reducing the dimensionality for visualization. Perhaps a time-delay embedding could be appropriate? It is known that for some attracting dynamical systems, an exact embedding can be obtained without using all of the state variables, or even from just one of them. – whpowell96 Dec 31 '23 at 01:49
  • Hmm, I think that's what I am looking for. I overlooked it because I thought it was mostly a method for analyzing data. (Minimal sketches of the suggested PCA projection and of a delay embedding follow below.) – Axel Wang Dec 31 '23 at 07:03
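Following the comments above, here is a minimal sketch of the suggested PCA projection, assuming the arrays x, y, px, py from the integration sketched in the question. It does use all four variables, but the map is still a linear projection, which is the reservation raised above:

```python
# Project the full 4D trajectory onto its first three principal components.
# Assumes x, y, px, py from the integration sketched in the question.
X = np.column_stack([x, y, px, py])
X = X - X.mean(axis=0)               # center the point cloud
U, S, Vt = np.linalg.svd(X, full_matrices=False)
proj = X @ Vt[:3].T                  # coordinates along the top three principal axes

ax = plt.figure().add_subplot(projection='3d')
ax.plot(proj[:, 0], proj[:, 1], proj[:, 2], lw=0.2)
plt.show()
```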
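And here is a minimal sketch of the time-delay embedding, again assuming the uniformly sampled x from the question's sketch. The delay below is an illustrative guess (a common heuristic is the first minimum of the mutual information); note that for a 2-torus, Takens-type theorems only guarantee an embedding with up to $2 \cdot 2 + 1 = 5$ delay coordinates, although 3 often works in practice:

```python
# Takens-style delay embedding of the single coordinate x(t).
# Assumes the uniformly sampled x (dt = 0.05) from the question's sketch;
# tau = 30 samples (~ a quarter of the linear period 2*pi) is a guess.
def delay_embed(signal, tau, dim=3):
    n = len(signal) - (dim - 1) * tau
    return np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])

emb = delay_embed(x, tau=30)
ax = plt.figure().add_subplot(projection='3d')
ax.plot(emb[:, 0], emb[:, 1], emb[:, 2], lw=0.2)
plt.show()
```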

0 Answers