
How can I solve the following integral equation for $f(x)$ on $x \in [0,1]$?

$$ (f(x) -x) \int_0^1 \left( \frac{\mathrm e^{2-2f(x)}}{\left(\mathrm e^{1-f(x)}+\mathrm e^{1-f(y)}\right)^2}- \frac{\mathrm e^{1-f(x)}}{\mathrm e^{1-f(x)}+\mathrm e^{1-f(y)}} \right) \mathrm dy + \int_0^1 \frac{\mathrm e^{1-f(x)}}{\mathrm e^{1-f(x)}+\mathrm e^{1-f(y)}} \mathrm dy= 0 $$

(f[x] - x)*Integrate[Exp[2 - 2 f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]])^2 - 
Exp[1 - f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]]), {y, 0, 1}] + 
Integrate[Exp[1 - f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]]), {y, 0, 1}] == 0

Because of the exponential function inside the integral, I don't see how I could transform this into a differential equation to solve with NDSolve.

bonifaz

2 Answers


With some effort you can find a numerical solution. Let us discretize the function f at num points, e.g. num=3. Then select an integration rule, e.g. the Lobatto rule:

{ab, w, err} = NIntegrate`LobattoRuleData[num, 4]

(* {{0, 0.5000, 1.000}, {0.1667, 0.6667, 0.1667}, {}} *)

Next, replace Integrate with a Sum over the points ab with weights w, and create a Table of num equations evaluated at the x values given by ab:

sys = Table[
   (f[x] - x)*
     Sum[(Exp[2 - 2 f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]])^2 - 
         Exp[1 - f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]]))*
       w[[Position[ab, y][[1, 1]]]], {y, ab}]
    +
    Sum[Exp[1 - f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]])*
      w[[Position[ab, y][[1, 1]]]], {y, ab}], {x, ab}];

Now create a list of unknowns with some starting values:

vars = Table[{f[x], 2 + 1/2 x}, {x, ab}]

(* {{f[0], 2}, {f[0.5000], 2.2500}, {f[1.000], 2.5000}} *)

and use FindRoot to solve the system sys:

FindRoot[sys, vars]

(* {f[0] -> 2.28088, f[0.5000] -> 2.51548, f[1.000] -> 2.78091} *)
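
As a quick sanity check (my addition, not part of the original answer), substituting the root back into the discretized system should give residuals close to zero:

sys /. FindRoot[sys, vars]

(* a list of three numbers close to zero *)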

To get a more accurate solution you have to increase num and the precision of the Lobatto rule abscissas and weights. Using num=64 and 32-digit precision I was able to satisfy the original equation down to $MachineEpsilon. The result appears to be roughly linear, with a rapidly converging polynomial series starting with:

f[x] = 2.28093 + 0.440941 x + 0.0538753 x^2 + 0.00521441 x^3 + 
 0.000168599 x^4
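
For completeness, here is one way the higher-resolution run could be organized. This is only a sketch: the name solveLobatto, the starting guess 2 + x/2, and reading off the leading coefficients via InterpolatingPolynomial and a Series truncation are my own choices, and (as the comments below note) the coefficients beyond x^2 are quite sensitive to the details. I use num=16 here to keep the run quick; the num=64 run described above works the same way.

solveLobatto[num_, prec_] := Module[{ab, w, err, sys, vars, sol},
  (* Lobatto abscissas and weights on [0, 1] at the requested precision *)
  {ab, w, err} = NIntegrate`LobattoRuleData[num, prec];
  (* the same discretized system as above, built for arbitrary num *)
  sys = Table[
    (f[x] - x)*
      Sum[(Exp[2 - 2 f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]])^2 -
          Exp[1 - f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]]))*
        w[[Position[ab, y][[1, 1]]]], {y, ab}] +
     Sum[Exp[1 - f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]])*
       w[[Position[ab, y][[1, 1]]]], {y, ab}], {x, ab}];
  vars = Table[{f[x], SetPrecision[2 + x/2, prec]}, {x, ab}];
  sol = FindRoot[sys, vars, WorkingPrecision -> prec];
  (* return the grid points paired with the solved values of f *)
  Transpose[{ab, Table[f[x], {x, ab}] /. sol}]]

pts = solveLobatto[16, 32];
Normal@Series[InterpolatingPolynomial[pts, x], {x, 0, 4}] // N

(* the constant and linear coefficients should be close to 2.28093 and 0.440941 *)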
xzczd
  • How do you compute the approximate series f[x]? Can you explain it and add it to the code? – Mariusz Iwaniuk Apr 28 '18 at 14:20
  • @MariuszIwaniuk InterpolatingPolynomial at the points {ab, f[ab]}, I think, although I get slightly different coefficients for x^2 and higher. – Michael E2 Apr 28 '18 at 14:25
  • I tried NonlinearModelFit and NMinimize/FindRoot with polynomial (or LegendreP expansion) inserted directly into integral equation (solving for coefficients). Coefficients vary slightly up to x^3, but x^4 term behaves randomly. I'm unable to determine if it is positive or negative :-( – Andrzej Odrzywolek Apr 28 '18 at 20:01

I liked @Andrzej's approach, so I wrote a complete function to package all the steps. [Originally, I used barycentric interpolation. Numerically, it's more stable as the number of grid points increases than either Newton interpolation, used by Interpolation, or expansion of the polynomial in the power basis. There's an undocumented built-in function that does this for us, Statistics`Library`BarycentricInterpolation. With fewer than 100 or so grid points, it does not matter, except for extrapolation warnings from InterpolatingFunction when using an open rule such as Gauss-Legendre quadrature.]
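
For readers who have not met the idea, the second-kind barycentric formula itself fits in a few lines. The helper below is purely illustrative (my own sketch, not the undocumented built-in and not used by the code that follows):

barycentric[xs_List, ys_List] := Module[{w},
  (* barycentric weights w_j = 1/Prod_{k != j} (x_j - x_k) *)
  w = Table[1/(Times @@ (xs[[j]] - Delete[xs, j])), {j, Length[xs]}];
  Function[x,
   If[MemberQ[xs, x],
    ys[[Position[xs, x][[1, 1]]]],               (* exact hit on a node *)
    Total[w ys/(x - xs)]/Total[w/(x - xs)]]]]    (* second-kind formula *)

g = barycentric[N@Subdivide[0, 1, 4], Exp[N@Subdivide[0, 1, 4]]];
g[0.3] - Exp[0.3]   (* small interpolation error *)

The packaged function: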

ClearAll[findFunc];
SetAttributes[findFunc, HoldFirst];
findFunc[feq_, {f_, f0_}, {x_, a_, b_}, iRule_List] /; Length[iRule] >= 2 :=
  Module[{integrate, sys, vars, ivs, ab, wt, fvals, prec},
   Block[{Integrate = integrate},  (* keep the integrals inert while feq evaluates *)
    {ab, wt} = iRule[[{1, 2}]];    (* quadrature abscissas and weights on [0, 1] *)
    prec = Precision[DeleteCases[ab, z_ /; z == 0]] /. 
      Infinity -> MachinePrecision;
    Block[{$MinPrecision = prec, $MaxPrecision = prec},
     ab = Rescale[ab, {0, 1}, {a, b}];
     (* residual form (Equal -> Subtract), each inert integral replaced by a
        quadrature dot product, then the residual evaluated at every node *)
     sys = feq /. Equal -> Subtract /.
          integrate[g_, {y_, a, b}, ___] :> (b - a) wt.(g /. y -> # & /@ ab) /.
         x -> # & /@ ab;
     ];
    vars = f /@ ab;
    ivs = f0 /@ ab;
    fvals = vars /. FindRoot[sys, Transpose@{vars, ivs}, WorkingPrecision -> prec];
    (* {f -> Statistics`Library`BarycentricInterpolation[ab, fvals]} *)
    {f -> Interpolation[Sow@Transpose@{ab, fvals}, InterpolationOrder -> Length@ab - 1]}
    ]];

OP's equation:

ff = findFunc[
  (f[x] - x)*
    Integrate[Exp[2 - 2 f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]])^2 - 
      Exp[1 - f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]]), {y, 0, 1}] + 
   Integrate[Exp[1 - f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]]), {y, 0, 1}],
  {f, 2 &}, {x, 0, 1},
  NIntegrate`GaussRuleData[16, MachinePrecision]
  ]
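
The result is a rule for f, so the approximate solution can be evaluated directly; for comparison, the first answer's 3-point run gave f[0.5] -> 2.51548:

(f /. ff)[1/2]

(* should be close to 2.51548 *)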

Check the residual of the solution:

check[sol_, x_?NumericQ] :=  (* computes residual of the equation *)
  Block[{f = sol, Integrate = NIntegrate},
   (f[x] - x)*
     Integrate[Exp[2 - 2 f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]])^2 - 
       Exp[1 - f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]]), {y, 0, 1}] + 
    Integrate[Exp[1 - f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]]), {y, 0, 1}]
   ];
Plot[check[f /. ff, x], {x, 0, 1}, PlotPoints -> 500, 
  MaxRecursion -> 0] /. Line -> Point
(* extrapolation warnings *)

[Plot of the residual of the solution over the interval [0, 1]]
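
The extrapolation messages come from the plot sampling slightly outside the outermost Gauss nodes; if they are a nuisance, one option (my addition, not part of the original code) is to silence just that message:

Quiet[
  Plot[check[f /. ff, x], {x, 0, 1}, PlotPoints -> 500,
    MaxRecursion -> 0] /. Line -> Point,
  {InterpolatingFunction::dmval}]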

To get the interpolating polynomial accurately expressed in the power basis, we need to use relatively high arbitrary-precision numbers:

idata = Reap[
    findFunc[
     (f[x] - x)*
       Integrate[Exp[2 - 2 f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]])^2 - 
         Exp[1 - f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]]), {y, 0, 1}] + 
      Integrate[Exp[1 - f[x]]/(Exp[1 - f[x]] + Exp[1 - f[y]]), {y, 0, 1}],
     {f, 2 &}, {x, 0, 1},
     NIntegrate`GaussRuleData[16, 32]
     ]
    ][[2, 1, 1]];
InterpolatingPolynomial[idata, x] // Expand // N
(*
  2.28093 + 0.440942 x + 0.0538637 x^2 + 0.00526438 x^3 + 
   0.0000719041 x^4 - 0.000111408 x^5 - 0.0000263257 x^6 - 
   2.26604*10^-6 x^7 + 4.33848*10^-7 x^8 + 1.95979*10^-7 x^9 + 
   3.11171*10^-8 x^10 - 1.32086*10^-10 x^11 - 1.72633*10^-9 x^12 - 
   1.62274*10^-10 x^13 - 1.34254*10^-10 x^14 + 4.19153*10^-11 x^15
*)
Michael E2
  • Er… what's the advantage of BarycentricInterpolation compared to Interpolation? – xzczd May 11 '18 at 04:22
    @xzczd Hmm, I hadn't checked. I think barycentric interpolation is numerically more stable than Newton interpolation, which is used by Interpolation. That seems only to matter once you get a 100 or 200 grid nodes. In this case, the solution is nearly a straight line, so just a few nodes are needed. Thanks – Michael E2 May 11 '18 at 12:42