
I happen to find that DSolve can give different solutions, even a different number of solutions, for a set of differential equations just by changing the symbols used as function names.

Consider

1st run:

DSolve[
  {f'[t]*g[t]^4/f[t]^4 == -1, g'[t]*g[t]^3/f[t]^3 == -3/2}, 
  {f[t], g[t]}, t] /. 
  {C[1] -> 1, C[2] -> tr} // FullSimplify

which gives three sets of solutions:

(* {{g[t] -> Sqrt[3] Sqrt[-t + tr], f[t] -> 3^(1/3) (-t + tr)^(1/3)}, 
   {g[t] -> (-(-3)^(1/3) (-t + tr)^(1/3))^(3/2), 
    f[t] -> -(-3)^(1/3) (-t + tr)^(1/3)}, 
   {g[t] -> Sqrt[3] ((-1)^(2/3) (-t + tr)^(1/3))^(3/2), 
    f[t] -> (-t + tr)^(1/3) Root[-3 + #1^3 &, 3]}} *)

2nd run:

I change the function names, $f\rightarrow V$, $g\rightarrow H$; everything else remains unchanged.

DSolve[
  {V'[t]*H[t]^4/V[t]^4 == -1, H'[t]*H[t]^3/V[t]^3 == -3/2}, 
  {V[t], H[t]}, t] /. 
  {C[1] -> 1, C[2] -> tr} // FullSimplify

I get different solutions, and there are only two of them. Why? Which solutions should I believe?

(* {{V[t] -> (-Sqrt[-3 t + 2 tr])^(2/3), H[t] -> -Sqrt[-3 t + 2 tr]}, 
    {V[t] -> (-3 t + 2 tr)^(1/3),  H[t] -> Sqrt[-3 t + 2 tr]}} *)

Edit

The problem seems to be strictly related to the alphabetical order of the variables. That is, if the variables {f, g} are changed to something with the same lexicographical order, like {a, b} or {V, W}, then the same answer is given. If the order is reversed, like {V, H} or {g, f}, then the other answer is given.
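For what it's worth, the ordering in question here is just Mathematica's canonical (alphabetical) ordering of symbols, which one can check directly with Sort:

Sort[{g, f}]
Sort[{V, H}]
(* {f, g} and {H, V}: f comes before g, but H comes before V, so {V, H} is "reversed" relative to {f, g} *)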

xzczd
Enter
  • @bbgodfrey How could it be possible... if the solutions obtained by MMA 10.3 from the two runs are the same? I am using MMA 9.0.1.0. Don't you find it very strange? – Enter Jan 04 '16 at 14:06
  • I'm using "10.3.0 for Linux x86 (64-bit) (October 9, 2015)" and I reproduce this exactly. Just taking the first input and manually changing f to V and g to H, it gives back results that are not numerically the same. – Jason B. Jan 04 '16 at 14:10
  • Hi @JasonB, thanks for your clarification. I am using 9.0.1.0 for Windows 64-bit. It is really confusing! – Enter Jan 04 '16 at 14:16
  • There is a post about the same problem with Solve on the site, but I can't find it. (Maybe it's about NSolve?) I'm thinking it might be the same underlying problem. – Michael E2 Jan 04 '16 at 14:26
  • The Root function arises from (-1)^(2/3) 3^(1/3) // FullSimplify, which yields Root[-3 + #1^3 &, 3], which is correct but unexpected. – bbgodfrey Jan 04 '16 at 14:47
  • @MichaelE2 There are at least 3 related questions on this site (notice the links in the comments): http://mathematica.stackexchange.com/q/25182/1871 – xzczd Jan 05 '16 at 10:37
  • @xzczd Thanks, that's the one I was thinking of. – Michael E2 Jan 05 '16 at 12:27

3 Answers


Some insight can be gained by considering the two cases given in the question, but without ReplaceAll and FullSimplify:

DSolve[{f'[t]*g[t]^4/f[t]^4 == -1, g'[t]*g[t]^3/f[t]^3 == -3/2}, {f[t], g[t]}, t, 
    GeneratedParameters -> A]
(* {{g[t] -> Sqrt[3] A[1] ((-t + A[1]^4 A[2])^(1/3)/A[1]^(4/3))^(3/2), 
     f[t] -> (3^(1/3) (-t + A[1]^4 A[2])^(1/3))/A[1]^(4/3)}, 
    {g[t] -> A[1] (-(((-3)^(1/3) (-t + A[1]^4 A[2])^(1/3))/A[1]^(4/3)))^(3/2), 
     f[t] -> -(((-3)^(1/3) (-t + A[1]^4 A[2])^(1/3))/A[1]^(4/3))}, 
    {g[t] -> Sqrt[3] A[1] (((-1)^(2/3) (-t + A[1]^4 A[2])^(1/3))/A[1]^(4/3))^(3/2), 
     f[t] -> ((-1)^(2/3) 3^(1/3) (-t + A[1]^4 A[2])^(1/3))/A[1]^(4/3)}} *)

DSolve[{V'[t]*H[t]^4/V[t]^4 == -1, H'[t]*H[t]^3/V[t]^3 == -3/2}, {V[t], H[t]}, t, 
    GeneratedParameters -> B]
(* {{V[t] -> B[1] (-Sqrt[-3 t B[1]^3 + 2 B[2]])^(2/3), 
     H[t] -> -Sqrt[-3 t B[1]^3 + 2 B[2]]}, 
    {V[t] -> B[1] (-3 t B[1]^3 + 2 B[2])^(1/3), 
     H[t] -> Sqrt[-3 t B[1]^3 + 2 B[2]]}} *)

With the substitution,

{f[t]^-3 -> 1/ff[t], g[t]^4 -> gg[t]}

the first of these can be rewritten as

s = DSolve[Unevaluated[{-D[f[t]^-3, t] g[t]^4/3 == -1, D[g[t]^4, t] f[t]^-3/4 == -3/2}]
    /. {f[t]^-3 -> 1/ff[t], g[t]^4 -> gg[t]}, {ff[t], gg[t]}, t]
(* {{gg[t] -> C[1] (-((3 t)/C[1]) + C[2])^2, ff[t] -> -((3 t)/C[1]) + C[2]}} *)

Because ff[t] = f[t]^3 and gg[t] = g[t]^4 under this substitution, for a given value of {C[1], C[2]} there are three independent values of f[t] (cube roots) and four independent values of g[t] (fourth roots), for a total of twelve pairs, although some may be redundant through transformations of {C[1], C[2]}.
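A hedged sketch of how those branches can be enumerated from the single {ff, gg} solution above; the symbols x and y are placeholders introduced here only for this illustration:

Solve[x^3 == -((3 t)/C[1]) + C[2], x]    (* the three possible values of f[t] *)
Solve[y^4 == C[1] (-((3 t)/C[1]) + C[2])^2, y]    (* the four possible values of g[t] *)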

The parameters {C[1], C[2]} can be transformed to {A[1], A[2]} and to {B[1], B[2]} by

s /. {C[1] -> A[1]^4, C[2] -> 3 A[2]}
s /. {C[1] -> B[1]^-6, C[2] -> 2 B[1]^3 B[2]}
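As a spot check of the first of these transformations (a minimal sketch, relying on ff = f^3 under the substitution above, with the expression for f[t] copied from the A-parameterized output):

Simplify[(ff[t] /. First[s] /. {C[1] -> A[1]^4, C[2] -> 3 A[2]}) ==
  ((3^(1/3) (-t + A[1]^4 A[2])^(1/3))/A[1]^(4/3))^3]
(* True *)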

Thus, the solution sets for the first and second cases are each different subsets of the complete set of solutions. A similar analysis can be performed, starting with a transformation of the second case.

It is natural, therefore, to attempt to obtain the complete solution set by using

SetOptions[Solve, Method -> Reduce]

Unfortunately, this has minimal impact on the results returned by DSolve.
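If you do experiment with this, it is probably worth restoring the default afterwards, since SetOptions changes Solve globally for the rest of the session:

SetOptions[Solve, Method -> Automatic]  (* Automatic is the default Method for Solve *)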

Let us turn now to the occurrence of Root in the first solution in the question. As mentioned in my comment above, it arises from

(-1)^(2/3) 3^(1/3) // FullSimplify
(* Root[-3 + #1^3 &, 3] *)

i.e., the third root of #1^3 - 3 == 0 in Mathematica's ordering of the roots. Indeed, the LeafCount of (-1)^(2/3) 3^(1/3) is 11, while that of Root[-3 + #1^3 &, 3] is 10, so the latter is simpler in this sense.
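Those counts can be reproduced directly; the exact values may differ slightly across versions, but the comparison is what matters here:

LeafCount /@ {(-1)^(2/3) 3^(1/3), Root[-3 + #1^3 &, 3]}
(* {11, 10} *)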

bbgodfrey

Let me abstractly answer your question. This is a very unintuitive part of programming with symbols.

Symbols are not variables.

If you had a Python program and you changed the name of a variable, you should expect it to execute just the same as it did before. There is nothing different about using "a" or "b" or "z" as a variable name.

But if you ask Mathematica to add some symbols together:

a + z + b

It will sort them:

a + b + z

So you can see that the name of a symbol matters. It matters in a way that most programmers are not used to. It affects where the symbol gets put in a summation. This isn't a trivial change. For example, if you plan on doing a calculation with finite precision numbers, the order of addition matters.
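A tiny illustration of that last point, with machine-precision numbers chosen here purely for illustration; grouping the same three terms differently changes the result:

(1.0 + 1.0*^16) - 1.0*^16
1.0 + (1.0*^16 - 1.0*^16)
(* 0. and 1., respectively: the small term is lost when it is added to the large one first *)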

If you change the name of a symbol, the resulting expression will be algebraically the same, but structurally it may be substantially different. If it's substantially different, then DSolve or any other symbolic function may try to solve it in a different way.

Which solution(s) should I believe?

Generally, there is no reason that both sets of solutions can't be right.

After running DSolve or NDSolve, you should try to substitute the solutions back into the original differential equation to understand them better.
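For instance, with the system from the question (a minimal sketch: solving for the functions f and g rather than for f[t] and g[t] lets the rules apply to the derivatives as well, and Simplify or FullSimplify may be needed for the branches involving complex radicals):

eqns = {f'[t]*g[t]^4/f[t]^4 == -1, g'[t]*g[t]^3/f[t]^3 == -3/2};
sol = DSolve[eqns, {f, g}, t];
eqns /. sol // FullSimplify
(* each entry should reduce to {True, True} *)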

You should use the solution which is most useful for you.

Searke
  • I would expect a + z + b and a + b + z to return the same result in any ordinary use, which is why Mathematica feels free to make the substitution. In OP's example, this behavior definitely seems like a bug to me – Jason B. Jan 04 '16 at 14:33
  • What is an ordinary use? – Searke Jan 04 '16 at 14:35
  • @Searke, thanks for your input; your answer helps me understand why the name of a symbol matters, i.e. the expressions become structurally different. As you mentioned, "it affects when the symbols are put in a summation". However, in my case there is not even a summation involving two or more variables. Moreover, my code does not include finite-precision numbers; the fractions are exact. – Enter Jan 04 '16 at 14:37
  • You cannot expect symbolic algorithms to be indifferent to the structure of their input. That is, the method the function selects to solve the equation will not be robust enough to be the same for all algebraically equivalent expressions. This is not a property you can expect from functions. – Searke Jan 04 '16 at 14:39
  • I understand this violates a lot of intuition. As a general rule, you cannot change the name of a symbol and expect a program to evaluate the same. Ideally, symbolic algorithms will be somewhat robust so that they give consistent results for equivalent algebraic expressions. But it's important to realize there's a performance tradeoff that might have to be made for such consistency. Additionally, it wouldn't be trivial to show that such consistency is actually even possible in general. – Searke Jan 04 '16 at 14:42
  • That said, the developers of DSolve love to see examples of how people use DSolve. It lets them know how they can improve it. If you want to forward the example to them, please send it to support@wolfram.com with an explanation that the results would ideally be more consistent. – Searke Jan 04 '16 at 14:44
  • It's a bug if DSolve returns a result that isn't correct. If it gives back a different kind of correct answer, then it's because it chose a different method for solving the equation. It chose a different method because you changed the name of a symbol. – Searke Jan 04 '16 at 14:51
  • This same thing can happen with Integrate and a number of other functions. – Searke Jan 04 '16 at 14:53
  • @Searke - You are right, both solutions are correct and you can verify that by substituting them back in. Is there any way of having DSolve report back which Method was used? I can't seem to find a list of the available option values for Method in many functions – Jason B. Jan 04 '16 at 15:16
  • Method options are tricky. They're usually not the whole story. For example, NIntegrate has a selection of Methods it might use, but these will be used after a symbolic preprocessing step, which might do god knows what. So Method options usually aren't a complete story of what happens inside. DSolve might combine multiple things together. I don't think what it does lends itself to have a named methods. Maybe above I should have said "way" instead of "method". I meant more "the way the function goes about solving the problem" rather than "the Method option to the function". – Searke Jan 04 '16 at 15:23
  • There are ways of glimpsing into what DSolve is doing. But I don't think it's very useful for most people. The key is using TraceInternal->True with Trace. See the example done here: http://mathematica.stackexchange.com/questions/55225/should-dsolve-always-return-solution-with-constant-of-integration – Searke Jan 04 '16 at 15:26

The solutions $x=f(t),\;y=g(t)$ of the OP's ODE parameterize the integral curves of $${dy \over dx} = {3y \over 2x}, \quad \text{or, if you prefer,} \quad 2\,{dy \over y} = 3\,{dx \over x}\,,$$ which has the general solution $$y^2 = A\,x^3\,.$$ How many solutions you get depends on which variable you solve for first, $x=f(t)$ or $y = g(t)$:

Solve[y^2 == A x^3, y]
Solve[y^2 == A x^3, x]
(*
  {{y -> -Sqrt[A] x^(3/2)},
   {y -> Sqrt[A] x^(3/2)}}

  {{x -> y^(2/3) / A^(1/3)},
   {x -> -(((-1)^(1/3) y^(2/3)) / A^(1/3))},
   {x -> ((-1)^(2/3) y^(2/3)) / A^(1/3)}}
*)
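As an aside, the phase-plane reduction quoted at the top of this answer can itself be checked mechanically; the ratio g'(t)/f'(t) read off from the two ODEs reduces to 3y/(2x) with x = f(t), y = g(t):

(* from the ODEs: f'[t] == -f[t]^4/g[t]^4 and g'[t] == -(3/2) f[t]^3/g[t]^3 *)
Simplify[(-(3/2) f[t]^3/g[t]^3)/(-(f[t]^4/g[t]^4))]
(* (3 g[t])/(2 f[t]) *)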

Even over the reals, solving for a square and a cube gives different numbers of solutions, two and one respectively.
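A concrete illustration of that count over the reals; the numbers 4 and 8 are chosen here purely for illustration:

Solve[y^2 == 4, y, Reals]
Solve[x^3 == 8, x, Reals]
(* {{y -> -2}, {y -> 2}} and {{x -> 2}}, respectively *)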

Evidently, Mathematica picks which variable to solve for based on lexicographic order. This is the reason the form of the answer DSolve returns depends on the lexicographic order of the variable names.

The OP's solutions are equivalent. One can get the relation between $A$ and the constants in each of the solutions with SolveAlways:

fgsol = DSolve[{f'[t]*g[t]^4/f[t]^4 == -1, g'[t]*g[t]^3/f[t]^3 == -3/2}, {f, g}, t];
VHsol = DSolve[{V'[t]*H[t]^4/V[t]^4 == -1, H'[t]*H[t]^3/V[t]^3 == -3/2}, {V[t], H[t]}, t];

y^2 == A x^3 /. Thread[{x, y} -> {f[t], g[t]}] /. fgsol // SolveAlways[#, t] &
y^2 == A x^3 /. Thread[{x, y} -> {V[t], H[t]}] /. VHsol // SolveAlways[#, t] &
(*
  {{A -> 0, C[1] -> 0}, {A -> C[1]^2}}
  {{C[1] -> 0, C[2] -> 0}, {A -> 1/C[1]^3}}
*)

Ignoring the trivial solutions, we see that A == C[1]^2 and A == 1/C[1]^3, respectively, so the meaning of C[1] is different in each solution. The second constant C[2] determines the starting point on the curve for the parametrization {f[t], g[t]} (resp. {V[t], H[t]}), and its meaning will also differ between the two solutions.
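A quick spot check of the first relation, reusing fgsol from above: since y^2/x^3 equals A along every integral curve, the ratio g[t]^2/f[t]^3 should come out as C[1]^2 for each branch (Simplify or FullSimplify may be needed, depending on the version):

g[t]^2/f[t]^3 /. fgsol // Simplify
(* {C[1]^2, C[1]^2, C[1]^2} *)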

Michael E2