
According to the documentation on the pseudospectral "DifferenceOrder", it says:

[screenshot of the documentation]

Following the discussion here:

I found that the messy behavior always occurs on the artificial boundary in the $\omega$-direction ($u(t,\theta,\omega_{cutoff})=0$, because I want $\omega$ to be unbounded). Perhaps this is the so-called Runge phenomenon? In principle, we should not use the pseudospectral difference order for all directions. However, it is not clear how to specify them separately.

Here is the code:

a = 1;
T = 1;
ωb = -15; ωt = 15;
A = 8;
γ = .1;
kT = 0.1;
φ = 0;
mol[n_Integer, o_: "Pseudospectral"] := {"MethodOfLines", 
  "SpatialDiscretization" -> {"TensorProductGrid", "MaxPoints" -> n, 
    "MinPoints" -> n, "DifferenceOrder" -> o}}

With[{u = u[t,θ, ω]}, 
 eq = D[u, t] == -D[ω u,θ] - D[-A Sin[3θ] u, ω] + γ (1 + Sin[3θ])  kT  D[
       u, {ω, 2}] + γ  (1 + Sin[3θ]) D[ω u, ω];
ic = u == E^(-((ω^2 +θ^2)/(2 a^2))) 1/(2 π a) /. t -> 0];
ufun = NDSolveValue[{eq, ic, u[t, -π, ω] == u[t, π, ω], 
     u[t,θ, ωb] == 0, u[t,θ, ωt] == 0}, u, {t, 0, T}, {θ, -π, π}, {ω, ωb, ωt}, 
    Method -> mol[61], MaxSteps -> Infinity]; // AbsoluteTiming
plots = Table[
    Plot3D[Abs[ufun[t,θ, ω]], {θ, -π, π}, {ω, ωb, ωt}, AxesLabel -> Automatic, 
     PlotPoints -> 30, BoxRatios -> {Pi, ωb, 1}, 
     ColorFunction -> "LakeColors", PlotRange -> All], {t, 0, T, 
     T/10}]; // AbsoluteTiming
ListAnimate[plots] 

$t=0$

[plot]

$t=0.8$

[plot]

$t=0.9$

[plot]

One can clearly see that the large deviation occurs only in the $\omega$-direction, which is consistent with the description above (neither periodic nor Chebyshev).

Is it possible to have something like:

"DifferenceOrder" ->{"Pseudospectral", Automatic}

The above simply doesn't work.

Update: Finally, I figured out that the problem is just due to convection domination. The problem depends on the ratio of the convection term to the diffusion term. Artificial diffusion or denser grid points are necessary.
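To make the convection domination concrete: a common diagnostic is the cell Péclet number $|c|\,h/(2D)$; values far above 1 mean centered or spectral differencing will oscillate unless artificial diffusion is added or the grid is refined. A rough estimate for the grid above (a Python sketch for illustration only; the coefficient bounds are read off from the PDE in the code above):

```python
# Cell Peclet number estimate for the omega direction of the PDE above.
# Coefficients taken from the question's code; this is only a back-of-the-
# envelope check, not part of the original solution.
A, gamma, kT = 8.0, 0.1, 0.1           # drive amplitude, damping, temperature
omega_b, omega_t, n = -15.0, 15.0, 61  # omega grid from the question

h = (omega_t - omega_b) / (n - 1)      # grid spacing in omega
c_max = A                              # max convection speed, |A Sin[3 theta]| <= A
D_max = gamma * 2.0 * kT               # max diffusion, gamma (1 + Sin[3 theta]) kT <= 2 gamma kT

peclet = c_max * h / (2.0 * D_max)     # cell Peclet number, about 100 here
print(peclet)
```

A value of order 100 is deep in the convection-dominated regime, which is consistent with the oscillations disappearing only after adding diffusion or grid points.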

Update (8/25): After using the implicit Runge-Kutta scheme, the solution is much more stable. Now another problem is convergence.

What I expect is something similar to the following smooth behavior:

[plot of the expected smooth solution]

But so far no method seems able to achieve this, or is there one?

[plot]

Bob Lin
  • Perturbations increase at the borders of the region. It is necessary to experiment with the parameters, to determine what affects stability. I indicated several possible solutions, but there are others. And, of course, you can use other methods of solution. – Alex Trounev Aug 22 '18 at 17:14
  • Your discussion includes, "large deviation occurs only in y-direction", but y is not one of the independent variables in your code. Also, the sentence, "In principle, we should not should pseudospectral difference-order for all direction." is garbled. For better responses by readers, please correct these and any other issues in the question. – bbgodfrey Aug 22 '18 at 18:08
  • @bbgodfrey Thanks for pointing out the typos. – Bob Lin Aug 22 '18 at 18:13
  • Thanks for the suggestion! Could you please write an answer? Then I can upvote you. – Bob Lin Aug 22 '18 at 19:06
  • 1
    Your understanding of the material is wrong. Notice the following sentence: However, there're two instances where this is not the case. Pseudospectral in Mathematica is implemented following these 2 instances. – xzczd Aug 23 '18 at 05:50
  • Perhaps these two cases are with extreme Runge's phenomenon but Pseudospectral still has been applied to both directions? – Bob Lin Aug 23 '18 at 06:02
  • How can I check the DifferenceOrder? Is there any command? – Bob Lin Aug 23 '18 at 06:04
  • 1
    Once again, your understanding of the material is wrong. The meaning of that paragraph can be summarized as: 1. Pseudospectral derivative may cause Runge phenomenon; 2. But the implementation for Pseudospectral in NDSolve is free from Runge phenomenon. – xzczd Aug 23 '18 at 06:18
  • Okay, I see the paragraph in below, many thanks. But now I don't know why the deviation only in $\omega$-direction. Perhaps due to the boundary condition? – Bob Lin Aug 23 '18 at 07:13
  • That's possible. Naive Dirichlet b.c. approximating b.c. at infinity can cause trouble, here is an example. Also, notice if periodic b.c. is used instead in $\omega$ direction, the oscillation no longer exists. – xzczd Aug 23 '18 at 11:11
  • @xzczd Exactly, I changed the DBC to PBC, and then I got something different, but still not what I expect. I'm just wondering if this problem is intrinsic to the PDE or due to the grid. Sorry for posting so many similar questions, but I still cannot solve this kind of problem entirely. – Bob Lin Aug 23 '18 at 12:01
  • @xzczd Do you have a better suggestion for the ABC? I tried the NBC and the solution in your link, but it still doesn't work. – Bob Lin Aug 23 '18 at 12:46
  • 1
    I didn't post the link as a possible solution… as far as I can tell, there's no universal technique for approximating b.c. at infinity, and I don't expect artificial b.c. designed for completely different PDE to work for your problem. Efficient artificial b.c. for certain PDE usually needs to be specially designed, and sadly that's beyond my reach. – xzczd Aug 23 '18 at 12:55
  • Could I say that, as long as the domain of the PDE is unbounded, the solution is always either problematic or in need of some special treatment? – Bob Lin Aug 23 '18 at 13:05
  • Not always; a rough approximation for the b.c. at infinity may work quite well, here's an example. My personal experience is, when the solution involves wave-like behavior, treatment of the b.c. at infinity usually becomes troublesome. – xzczd Aug 24 '18 at 02:19
  • Your recent update indicates that you are seeking a steady-state solution, satisfied by the PDE with D[u[t, θ, ω], t] == 0. One solution is, of course, u == 0. If there are more static solutions, there may be many more, and the initial value calculation may yield a mix of them. By the way, the choice of boundary conditions remains an issue. – bbgodfrey Aug 27 '18 at 04:56
  • I fail to reproduce the result in update 8.27, are you solving the problem mentioned at the beginning? Have you made other changes? Also, notice you're still using method of lines (method of lines is the only available method for solving dynamic PDE in NDSolve at the moment), only with a special setting for the step size. – xzczd Aug 27 '18 at 15:25
  • @xzczd Sorry, as you said, the parameters are different, but the main story is still the same. About your comment on the method of lines: if that is the case, then I would say adaptive step size does not always guarantee stability. – Bob Lin Aug 27 '18 at 15:32
  • @boblin As I mentioned before, the ODE solver is very robust and doesn't need any manual adjustment as long as the spatial discretization is proper. Your solution needs about 16 seconds on my laptop, while the following setting takes 14 seconds: Method -> {"MethodOfLines", "SpatialDiscretization" -> {"TensorProductGrid", "MaxPoints" -> {41, 83}, "MinPoints" -> {41, 83}, "DifferenceOrder" -> 4}} What's more, if the fix function is added, only about 9 seconds is taken, so once again, the time integration is trivial. – xzczd Aug 27 '18 at 15:54
  • @xzczd How to know which kinds of discretization will be stable? – Bob Lin Aug 27 '18 at 17:24
  • Sorry, I realize that the coefficient is very different. So actually the problem is depending on the function and scales of the coefficients. – Bob Lin Aug 27 '18 at 18:02

2 Answers


The computation in the question appears to suffer from a Courant instability. To illustrate, repeat the computation with higher plotting resolution and slightly simpler code.

a = 1; T = 1; n = {61, 61};
ωb = -15; ωt = 15;
A = 8; γ = 1/10; kT = 1/10;
eq = D[u[t, θ, ω], t] == -D[ω u[t, θ, ω], θ] - D[-A Sin[3 θ] u[t, θ, ω], ω] 
  + γ (1 + Sin[3 θ]) kT  D[u[t, θ, ω], {ω, 2}] + γ (1 + Sin[3 θ]) D[ω u[t, θ, ω], ω];
ic = u[t, θ, ω] == E^(-((ω^2 + θ^2)/(2 a^2))) 1/(2 π a) /. t -> 0;
ufun = NDSolveValue[{eq, ic, u[t, -π, ω] == u[t, π, ω], u[t, θ, ωb] == 0, 
  u[t, θ, ωt] == 0}, u, {t, 0, T}, {θ, -π, π}, {ω, ωb, ωt},  
  Method -> {"MethodOfLines", "SpatialDiscretization" -> {"TensorProductGrid", 
  "MaxPoints" -> n, "MinPoints" -> n, "DifferenceOrder" -> "Pseudospectral"}}]; 
Plot3D[Abs[ufun[.9, θ, ω]], {θ, -π, π}, {ω, ωb, ωt}, PlotPoints -> 2 n, 
  PlotRange -> All, BoxRatios -> {Pi, ωb, 1}, ColorFunction -> "LakeColors", 
  ImageSize -> Large, AxesLabel -> {θ, ω, u}, LabelStyle -> {Black, Bold, Medium}, 
  Mesh -> None]

[plot of the solution at t = 0.9]

The higher plotting resolution displays significant fine structure in the numerical behavior near ω == ωt. A second plot focusing on that region makes the fine structure even more apparent.

Plot3D[Abs[ufun[T, θ, ω]], {θ, -π, π}, {ω, 12, ωt}, PlotPoints -> {122, 40}, 
  PlotRange -> All, ImageSize -> Large, AxesLabel -> {θ, ω, u}, 
  LabelStyle -> {Black, Bold, Medium}, Mesh -> None]

[close-up plot near ω == ωt]

Spatial oscillations with wavelengths on the order of the cell size are the hallmark of the Courant instability. Reducing the number of grid points in ω from 61 to 59 to 57 steadily reduces the instability growth rate, and at 55 the instability disappears. Repeating the computation above with T = 10; n = {61, 55} shows no sign of the Courant instability.
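How such grid-scale oscillations arise can be reproduced in a stripped-down setting. The Python sketch below (a toy 1-D advection equation, not the PDE from the question) advances the same initial pulse with centered differences and with first-order upwind differences, both under forward Euler at Courant number 0.5: the centered variant grows without bound, while the upwind variant stays bounded.

```python
import numpy as np

# Toy 1-D advection u_t + c u_x = 0 with periodic b.c., illustrating how
# grid-scale oscillations grow; not the question's Fokker-Planck-type PDE.
c, nx, nt = 1.0, 64, 400
x = np.linspace(0.0, 2 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx / c                          # Courant number 0.5

u_cen = np.exp(-10 * (x - np.pi) ** 2)     # centered differences
u_up = u_cen.copy()                        # first-order upwind
for _ in range(nt):
    u_cen = u_cen - c * dt * (np.roll(u_cen, -1) - np.roll(u_cen, 1)) / (2 * dx)
    u_up = u_up - c * dt * (u_up - np.roll(u_up, 1)) / dx

print(np.max(np.abs(u_cen)))   # grows far beyond the initial amplitude 1
print(np.max(np.abs(u_up)))    # stays bounded by 1 (at the price of smearing)
```

The unstable mode appears first at wavelengths of a few cells, exactly the pattern visible in the plots above.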

[plot with T = 10 and n = {61, 55}]

There are, however, two obvious issues. First, waves have reached the boundaries in ω and appear to be reflecting. (The PDE is approximately advective at large Abs[ω].) Second, spatial resolution may no longer be sufficient to accurately represent the short wavelengths evident in the plot. The runtime for this computation was of order four minutes, and doubling the resolution would require over a half-hour of calculation, as well as some experimentation to find the optimal ratio of grid points in the two spatial dimensions. For completeness, here is a plot of the latter solution at T == 5, where the spatial resolution and boundary reflection problems are not yet significant.

[plot of the solution at T == 5]
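As for the boundary reflections: a common mitigation is a sponge (absorbing) layer, i.e. adding a damping term -σ(ω) u with σ ramping up smoothly near the artificial boundary. The Python sketch below demonstrates the idea on a toy 1-D advection equation with a Dirichlet wall; it is an illustration under stated assumptions, not a drop-in fix for the question's PDE.

```python
import numpy as np

# Toy model: u_t + c u_x = -sigma(x) u on [0, 1], u(0) = u(1) = 0, centered
# differences + RK4. Without the sponge the skew-symmetric operator conserves
# the l2 norm, so the pulse reflects off the wall; with the sponge it is absorbed.
c, nx = 1.0, 201
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx / c

sigma = np.where(x > 0.7, 80.0 * ((x - 0.7) / 0.3) ** 3, 0.0)  # smooth cubic ramp

def rhs(u, sig):
    du = np.zeros_like(u)
    du[1:-1] = -c * (u[2:] - u[:-2]) / (2 * dx) - sig[1:-1] * u[1:-1]
    return du  # boundary values stay pinned at zero

def run(sig, nsteps=500):
    u = np.exp(-((x - 0.3) / 0.05) ** 2)       # pulse moving toward x = 1
    for _ in range(nsteps):
        k1 = rhs(u, sig)
        k2 = rhs(u + 0.5 * dt * k1, sig)
        k3 = rhs(u + 0.5 * dt * k2, sig)
        k4 = rhs(u + dt * k3, sig)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return np.sqrt(np.sum(u ** 2) * dx)        # l2 norm of what is left

norm_wall = run(np.zeros_like(x))   # no sponge: energy reflects and stays
norm_sponge = run(sigma)            # sponge: energy leaves through the layer
print(norm_wall, norm_sponge)
```

Because the damping term causes no reflection in the continuum limit (characteristics simply decay inside the layer), a smoothly ramped σ absorbs the pulse with only a small numerical reflection.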

bbgodfrey
  • Thanks. I agree with all you said, since I actually did the same thing trying to optimize the grid points and boundary. However, this is really frustrating, since it takes a lot of simulation time to obtain one experimental result that still implies nothing useful. What surprises me is that the same thing can be done in Julia, by solving the discretized matrix equation, in about a minute. Do you think playing with the time-integration method (explicit, implicit, Runge-Kutta) would help? Again, many thanks for your engagement with this tricky problem. – Bob Lin Aug 24 '18 at 19:56
  • Now we know this problem is really nontrivial; I hope Mathematica can solve this problem one day. – Bob Lin Aug 24 '18 at 20:30
  • 1
    @BobLin Mathematica can solve the problem today by the method you described in your comment above. Also, determining the largest eigenvalue of the matrix then would give the Courant limit. Expanding the PDE as a Fourier series in θ, decomposing it into a coupled system of ODEs, might also be interesting. Obtaining the optimal boundary condition in ω is tricky but might be feasible, starting from the Fourier decomposition method I just mentioned and then estimating analytically the boundary condition that minimizes reflections. In all, this is a very interesting problem. – bbgodfrey Aug 24 '18 at 21:49
  • @BobLin As I asked before, what difference scheme is used in that Julia code? – xzczd Aug 25 '18 at 02:14
  • I asked my friend. He said the method is not implemented in Mathematica but I will check what the specific method it is. – Bob Lin Aug 25 '18 at 07:17
  • @xzczd I just checked the source code. He uses Krylov approximation methods. The details of the method can be found here . – Bob Lin Aug 25 '18 at 16:58
  • @BobLin This is the ODE solver, right? What difference scheme is used for spatial dimension? – xzczd Aug 26 '18 at 02:21
  • Simple forward and centered difference schemes. – Bob Lin Aug 26 '18 at 07:10
  • Maybe the main problem is due to time integration. – Bob Lin Aug 26 '18 at 07:11
  • For each time step, he applies a matrix exponential to current solution, and the matrix exponential is computed by Krylov approximation. – Bob Lin Aug 26 '18 at 07:17
  • 1
    @BobLin No, the time integration is trivial, and the difference scheme plays a critical role in this problem. I've included the solution here. With this solution the problem in this question can be solved in less than 10 seconds in NDSolve with Method -> {"MethodOfLines", "SpatialDiscretization" -> {"TensorProductGrid", "MaxPoints" -> 50, "MinPoints" -> 50, "DifferenceOrder" -> 2}}. – xzczd Aug 26 '18 at 12:53
  • @xzczd One simple question: how can I lower the step size in the time integration? I found that sometimes the number of grid points in the time domain is small, and therefore it fails to satisfy the Courant condition. – Bob Lin Aug 27 '18 at 13:08
  • @Bob The Courant condition is a necessary condition for certain fully finite-difference schemes, i.e. where every dimension of the PDE is discretized with a finite difference formula, while the method of lines leaves the time dimension continuous and solves the semi-discretized system with an adaptive ODE or DAE solver, so I don't think it's possible to directly apply Courant condition theory to the adjustment of the NDSolve step size. And according to my personal experience, the ODE solver of NDSolve is very robust and almost never needs manual adjustment as long as the spatial discretization is proper. – xzczd Aug 27 '18 at 13:29
  • @BobLin Anyway, if you insist on playing with the adjustment of the time step size, check the documentation of MaxStepSize, MaxStepFraction and so on. Notice the option value of these options can be a list. – xzczd Aug 27 '18 at 13:31
  • How can I know the Courant condition is fulfilled whenever I use the method of lines? The instability only happens after some particular time, so I want to make sure the finer spatial discretization would not break the Courant condition. – Bob Lin Aug 27 '18 at 13:49
  • Btw I use "DifferenceOrder" -> 2, but the solution is unstable after some large time. Perhaps we need to monitor the Courant condition. – Bob Lin Aug 27 '18 at 13:52
  • Numerical instabilities grow from noise and must exponentiate many-fold before they become visible. – bbgodfrey Aug 27 '18 at 14:25
  • @bob As mentioned above, I don't think it's possible to directly apply Courant condition theory to the adjustment of the NDSolve step size, at least I'm not aware of such a technique. Empirically adjusting the step size is more realistic in my opinion. – xzczd Aug 27 '18 at 15:59
  • @xzczd When I look at my friend's code, whatever his spatial discretization is, it is unconditionally stable and efficient. Why doesn't Mathematica behave like that? – Bob Lin Aug 27 '18 at 19:28
  • @BobLin According to my test, NDSolve already behaves well after using the fix function, if you still find something wrong, show us the code. – xzczd Aug 28 '18 at 03:37
  • @BobLin I just checked the code samples you posted in your questions and noticed that, though the equations are similar, tiny but nonnegligible changes have been made several times; do these equations belong to the same family? If so, this should be clarified in your question. Besides, are you sure the Julia code is solving the same equation as your Mathematica code? Have these tiny changes been made in the Julia code? – xzczd Aug 28 '18 at 04:22
  • In my experience, as the diffusion term is close to zero, it's inevitable to encounter the instability issue. Let me try your fix function first. – Bob Lin Aug 28 '18 at 07:00
  • @xzczd I opened a new discussion here. I added your fix function, but unfortunately it still seems to be unstable. – Bob Lin Aug 29 '18 at 09:54
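For reference, the Krylov matrix-exponential time stepping mentioned in the comments above can be sketched as follows, with SciPy's expm_multiply standing in for the Julia code's Krylov approximation (a toy 1-D periodic heat equation, not the original 2-D problem):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# Exponential (Krylov-type) time stepping as described in the comments:
# advance u(t + dt) = exp(dt L) u(t), where L is the sparse spatial operator.
# Toy 1-D periodic heat equation, not the original 2-D Fokker-Planck problem.
nx = 64
dx = 2 * np.pi / nx
x = np.arange(nx) * dx

L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx)).tolil()
L[0, nx - 1] = L[nx - 1, 0] = 1.0   # periodic wrap-around entries
L = (L / dx ** 2).tocsr()           # discrete Laplacian

u = np.exp(-5.0 * (x - np.pi) ** 2)
dt = 0.1
for _ in range(10):
    u = expm_multiply(dt * L, u)    # one exact-in-time step of the ODE system

print(u.sum() * dx)                 # mass is conserved by periodic diffusion
print(u.max())                      # the peak decays as the profile spreads
```

Because each step applies the exact flow of the semi-discretized system, this approach has no time-step stability limit; the spatial difference scheme is then the only source of instability, which matches xzczd's point above.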

You might try some different NDSolve methods. E.g.,

mol[n_Integer, o_: "Pseudospectral"] := {"MethodOfLines", "SpatialDiscretization" ->
  {"TensorProductGrid", "MaxPoints" -> n, "MinPoints" -> n, "DifferenceOrder" -> o},
  Method -> {"IDA", "ImplicitSolver" -> {"GMRES"}}}

takes twice as much time as your code, but doesn't have the artifacts at the boundary at t==0.9:

[plot of the solution at t == 0.9]
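Why the implicit solver helps can be seen on the classic stiff test equation u' = λu (a Python toy illustration of unconditional stability, not of the IDA/GMRES machinery itself):

```python
# Stiff test equation u' = lam*u with lam = -1000: at this step size the
# explicit Euler update blows up while backward (implicit) Euler decays, a
# toy illustration of the unconditional stability of implicit time stepping.
lam, dt, nsteps = -1000.0, 0.01, 100
u_exp = 1.0
u_imp = 1.0
for _ in range(nsteps):
    u_exp = u_exp + dt * lam * u_exp    # multiply by (1 + lam*dt) = -9 each step
    u_imp = u_imp / (1.0 - dt * lam)    # divide by (1 - lam*dt) = 11 each step

print(abs(u_exp))   # huge: explicit Euler is unstable at this step size
print(abs(u_imp))   # tiny: implicit Euler decays, like the exact solution
```

The semi-discretized PDE behaves like a large system of such equations, with the stiffest eigenvalues set by the spatial grid, which is why the implicit IDA solver avoids the boundary artifacts at the cost of extra work per step.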

Chris K
  • 1
    Just a side note: this is probably related to 1. DifferentiateBoundaryConditions -> False. When IDA is chosen, the PDE is discretized to a DAE system and the b.c. is followed in a more strict way. The main disadvantage of this method is, we know, DAE solver of NDSolve is weaker than its ODE solver, and in certain cases it even fails for solvable problem. Related: https://mathematica.stackexchange.com/a/127411/1871 2. The implicit solver, because implicit method is usually unconditionally stable or almost unconditionally stable so is free from the issue discussed in bbgodfrey's answer. – xzczd Aug 25 '18 at 02:21
  • For what it's worth, I've always had much faster results solving PDEs by implementing the method of lines using VODE in Fortran, but that's much uglier than Mathematica. – Chris K Aug 25 '18 at 02:25
  • Just a side note 2: But one of the disadvantages of an implicit solver is that it's usually slower than an explicit solver. – xzczd Aug 25 '18 at 02:37
  • @ChrisK Nice approach (+1). I ran your calculation to T = 10. Your solution was a bit noisier at the upper boundary than my solution but also a bit faster computationally. – bbgodfrey Aug 25 '18 at 05:24