
The following optimization has the constraint b < 0, but it fails because the integral diverges when FindMinimum tries a small positive b.

q[a_?NumericQ, b_?NumericQ] := NIntegrate[Exp[a*(x^2 - 1) + b*(x^4 - 3.1)], {x, -Infinity, Infinity}];
yy = FindMinimum[{q[a, b], b < 0}, {{a, -2}, {b, -.004}}];

(* The function value...is not a real number at {a,b} = {-0.499915,7.461849313203523`*^-7}  *)

How do I make it not try values outside the constraints?

Jerry Guern

2 Answers

Integrate[Exp[a*(x^2 - 1) + b*(x^4 - 31/10)],
 {x, -Infinity, Infinity}]

ConditionalExpression[(Sqrt[-a]*E^(-a - a^2/(8*b) - (31*b)/10)*BesselK[1/4, -(a^2/(8*b))])/(2*Sqrt[-b]), Re[b] < 0 && Re[a] < 0]

qExact[a_, b_] = Assuming[{a < 0, b < 0},
  Integrate[Exp[a*(x^2 - 1) + b*(x^4 - 31/10)],
   {x, -Infinity, Infinity}]]

(1/2)*Sqrt[a/b]*E^(-a - a^2/(8*b) - (31*b)/10)*BesselK[1/4, -(a^2/(8*b))]
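As a quick sanity check (my addition, not part of the original answer), the closed form can be compared with a direct numerical integral at a sample point inside the region a < 0, b < 0; the two values should agree to NIntegrate's default tolerance:

```mathematica
(* Sketch: compare the closed-form result with direct numerical
   integration at a sample point with a < 0 and b < 0. *)
With[{a = -1., b = -1/2},
 {(1/2)*Sqrt[a/b]*E^(-a - a^2/(8*b) - (31*b)/10)*
   BesselK[1/4, -(a^2/(8*b))],
  NIntegrate[Exp[a*(x^2 - 1) + b*(x^4 - 31/10)],
   {x, -Infinity, Infinity}]}]
```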

NMinimize[{qExact[a, b], a < 0, b < 0}, {a, b},
   WorkingPrecision -> 30] // Quiet // N

{4.13273, {a -> -0.499877, b -> -2.32554*10^-10}}

Since both a and b must be negative, define q with the _?Negative pattern test instead of _?NumericQ:

Clear[q]

q[a_?Negative, b_?Negative] :=
  NIntegrate[Exp[a*(x^2 - 1) + b*(x^4 - 31/10)],
   {x, -Infinity, Infinity}, WorkingPrecision -> 20];

yy = NMinimize[{q[a, b], -2 < a < 0, -1/2 < b < 0}, {a, b},
    WorkingPrecision -> 20] // Quiet // N

{4.13273, {a -> -0.5, b -> -6.01577*10^-10}}
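The point of the _?Negative guard (a brief illustration, not part of the original answer) is that a probe at a nonnegative b never reaches NIntegrate at all; the call simply fails to match the pattern:

```mathematica
(* With q defined via _?Negative, a call with b >= 0 does not match
   the pattern, so it returns unevaluated instead of attempting a
   divergent integral. *)
q[-1., 0.1]
(* stays unevaluated as q[-1., 0.1] *)
```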

Bob Hanlon

A stupid workaround is simply to replace b with -b:

q[a_?NumericQ, b_?NumericQ] := 
  NIntegrate[Exp[a*(x^2 - 1) - b*(x^4 - 3.1)], {x, -Infinity, Infinity}];
yy = FindMinimum[{q[a, b], b > 0}, {{a, -2}, {b, .004}}]
{4.13273, {a -> -0.5, b -> 8.27074*10^-8}}

without any warnings. But it's a pity that FindMinimum behaves so inflexibly.
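A related trick, mentioned in the comments below, is to build the sign constraint into the parametrization itself, e.g. via b = -Exp[c]: every real c yields a strictly negative b, so FindMinimum can search unconstrained in c. A sketch (the variable name c and the starting point are my own choices):

```mathematica
(* Sketch: reparametrize b = -Exp[c] so the b < 0 constraint holds
   automatically for every real c. *)
qc[a_?NumericQ, c_?NumericQ] :=
  NIntegrate[Exp[a*(x^2 - 1) - Exp[c]*(x^4 - 3.1)],
   {x, -Infinity, Infinity}];
FindMinimum[qc[a, c], {{a, -2}, {c, Log[0.004]}}]
(* recover the original parameter as b = -Exp[c] *)
```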


Of course, we can also handle the b >= 0 case by defining the objective function in a special way:

Clear[q]
q[a_?NumericQ, b_?NumericQ] /; b < 0 :=
  Module[{res},
   res = NIntegrate[Exp[a*(x^2 - 1) + b*(x^4 - 3.1)], {x, -Infinity, Infinity}];
   Sow[{a, b, res}]; res];
q[a_?NumericQ, b_?NumericQ] /; b >= 0 := 100;
yy = Reap[FindMinimum[{q[a, b], b < 0}, {{a, -2}, {b, -.004}}]];

yy[[1]]
yy[[2, 1]] // Length

FindMinimum::eit: The algorithm does not converge to the tolerance of 4.806217383937354`*^-6 in 500 iterations. The best estimated solution, with feasibility residual, KKT residual, or complementary residual of {3.08422*10^-18,0.411143,3.11538*10^-18}, is returned. >>

{4.13273, {a -> -0.499915, b -> -5.31328*10^-6}}
4043

What a mess! We get 4043 evaluation points and a less precise result, while with the "stupid" workaround FindMinimum uses only 101 points:

Clear[q]
q[a_?NumericQ, b_?NumericQ] :=
  Module[{res},
   res = NIntegrate[Exp[a*(x^2 - 1) - b*(x^4 - 3.1)], {x, -Infinity, Infinity}];
   Sow[{a, b, res}]; res];
yy = Reap[FindMinimum[{q[a, b], b > 0}, {{a, -2}, {b, .004}}]];
yy[[2, 1]] // Length
101
Alexey Popkov
  • Yes, replacing b with -log(c) also fixes the problem because it replaces the b<0 constraint with c needing to be real. – Jerry Guern Aug 12 '15 at 14:19
    This does seem like an awfully clumsy workaround for a glaring design shortcoming. I always try to give the benefit of the doubt that professional software engineers had reasons for their quirky design, but this really is something that should be fixed. – Jerry Guern Aug 12 '15 at 14:22