
To find the quadratic polynomial of best approximation to $\sin(x)$ on $[0,5]$ in the $L^1$ metric, I execute

f[a_?NumericQ, b_?NumericQ, c_?NumericQ] := NIntegrate[RealAbs[Sin[x] - a*x^2 - b*x - c], {x, 0, 5}]; 
NMinimize[{Evaluate[f[a, b, c]],   RealAbs[a] <= 20 && RealAbs[b] <= 20 && RealAbs[c] <= 20}, {a, b, c},
AccuracyGoal -> 5, PrecisionGoal -> 5] // Timing

{3923.94, {0.945758, {a -> -0.22899, b -> 0.700517, c -> 0.278367}}}

As the following plot shows

Plot[{Sin[x], a*x^2 + b*x + c /. {a -> -0.22898986824771952, 
b -> 0.7005170433410821, c -> 0.27836651693875075}}, {x, 0, 5}]

[plot of Sin[x] together with the fitted quadratic on [0, 5]]

the result is satisfactory, but it takes a lot of time. How can I speed up this execution and similar ones (say, with a dozen variables)? I haven't found an answer here.

user64494
  • F[a_?NumericQ, b_?NumericQ, c_?NumericQ] := NIntegrate[(Sin[x] - a*x^2 - b*x - c)^2, {x, 0, 5}]; sol = NMinimize[{F[a, b, c], a^2 <= 20^2 && b^2 <= 20^2 && c^2 <= 20^2}, {a, b, c}] – cvgmt Nov 10 '23 at 08:03
  • @cvgmt: Thank you. This is a different metric ($L^2$, not $L^1$), so the result (with timing) {49.0156, {0.259964, {a -> -0.186771, b -> 0.533721, c -> 0.365393}}} differs. – user64494 Nov 10 '23 at 08:28
  • For the $L^2$ metric: f[a_?NumericQ, b_?NumericQ, c_?NumericQ] := NIntegrate[(Sin[x] - a*x^2 - b*x - c)^2, {x, 0, 5}]; NMinimize[{Evaluate[f[a, b, c]]}, {a, b, c}, AccuracyGoal -> 5, PrecisionGoal -> 5, Method -> "NelderMead"] // Timing takes only 2 seconds. – Mariusz Iwaniuk Nov 11 '23 at 16:33
  • Can use FindMinimum with starting values given by the parabola that is 0 at 0 and Pi and 1 at Pi/2: Timing[FindMinimum[f[a, b, c], {a, -4/Pi^2}, {b, 4/Pi}, {c, 0}]] returns {15.1358, {0.945758, {a -> -0.22899, b -> 0.700517, c -> 0.278367}}}, after the warning FindMinimum::lstol: "The line search decreased the step size to within the tolerance specified by AccuracyGoal and PrecisionGoal but was unable to find a sufficient decrease in the function. You may need more than MachinePrecision digits of working precision to meet these tolerances." – Daniel Lichtblau Nov 11 '23 at 23:14
  • Why do you have to use NIntegrate? Is this not enough? x = Table[i, {i, 0, 5, 0.05}]; NMinimize[{Sqrt[Total@Power[Abs[(Sin[x] - a*x^2 - b*x - c)], 2]], RealAbs[a] <= 20 && RealAbs[b] <= 20 && RealAbs[c] <= 20}, {a, b, c}, AccuracyGoal -> 5, PrecisionGoal -> 5] // Timing – Xminer Nov 12 '23 at 11:25

2 Answers


This is a bit out there, but you can actually do this using the neural network facilities, which opens up the possibility of using a GPU:

Remove["Global`*"];

f = Sin[#] &; (* our target *)
g = {1, #, #^2} &; (* basis functions to sum with learned weights to approximate f *)

net = NetGraph[
   {FunctionLayer[g, "Input" -> "Real", "Output" -> Length[g[0]]],
    NetArrayLayer["Output" -> Length[g[0]]],
    DotLayer[],
    FunctionLayer[-f[#] &],
    TotalLayer[],
    FunctionLayer[RealAbs]},
   {NetPort["Input1"] -> 1, 1 -> 3, 2 -> 3, NetPort["Input1"] -> 4,
    4 -> 5, 3 -> 5, 5 -> 6}];
net = NetInitialize[net];
training = # -> 0 & /@ RandomReal[{0, 5}, 100000];
net = NetTrain[net, training, TargetDevice -> "GPU", BatchSize -> 256];
coeffs = NetExtract[net, 2][];
Plot[{f[x], coeffs . g[x]}, {x, 0, 5}]

The resulting coefficients coeffs are the optimized {c, b, a} in your quadratic, so the polynomial is given by the dot product coeffs . {1, x, x^2}. You may wonder where the integration has gone. The gradients are accumulated over the whole batch, so each parameter update is based on evaluating the loss at many random points in the domain, not just one at a time. In this way, the effect of batching is similar to Monte Carlo integrating the loss first, then updating the parameters so they produce a lower loss over the whole batch.
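In other words (a sketch of the idea, not part of the training code itself): a batch of $N$ uniform samples acts as a Monte Carlo estimate of the $L^1$ loss, up to the constant factor $5/N$,

$$\int_0^5 \left|\sin x - a x^2 - b x - c\right|\,dx \;\approx\; \frac{5}{N}\sum_{i=1}^{N} \left|\sin x_i - a x_i^2 - b x_i - c\right|, \qquad x_i \sim \mathrm{Uniform}(0,5),$$

and minimizing the right-hand side over many batches drives down the integral on the left.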


You could also consider not doing the integration at all and instead sampling at random points. This cuts down the time a lot, at the cost of some accuracy. It is somewhat similar to the neural method above, but uses global optimization instead and only requires a small adjustment to your code:

f[a_?NumericQ, b_?NumericQ, c_?NumericQ] :=
 Sum[RealAbs[Sin[x] - a*x^2 - b*x - c], {x, RandomReal[{0, 5}, 500]}, 
  Method -> "Procedural"]
{err, sol} = 
 Quiet[NMinimize[{Evaluate[f[a, b, c]], 
    RealAbs[a] <= 20 && RealAbs[b] <= 20 && RealAbs[c] <= 20}, {a, b, 
    c}, MaxIterations -> 100, 
   Method -> {"DifferentialEvolution", "PostProcess" -> False}],
  NMinimize::cvmit]

Extending to other norms is easy too: just define f like this, where p is a number, or $\infty$ (Infinity) for the $L_\infty$ max norm:

f[a_?NumericQ, b_?NumericQ, c_?NumericQ] :=
 Norm[Sin[#] - a*#^2 - b*# - c & /@ 
   RandomReal[{0, 5}, 500], p]
flinty
  • Thank you for your very interesting answer. Unfortunately, the code does not work on my comp, returning "NetTrain::mxgpulibdwnld: TargetDevice -> "GPU" requires additional libraries to be installed. Automatic download is starting. Please check the list of supported GPU compute capabilities in the documentation page for TargetDevice, and ensure an NVIDIA GPU with supported compute capability is installed. Also ensure that an NVIDIA driver of version 511.65 or higher is installed" and so on. What is your result for a,b,c? How much time does it take? – user64494 Nov 11 '23 at 07:16
  • When you get that message the first time, you need to wait for it to download all the libraries (assuming you have a supported GPU), then quit mathematica completely, and restart, but after that it should work. You can use CPU instead otherwise. It takes around 20s to 40s. I get coeffs a,b,c as {0.361709, 0.54002, -0.186718} which looks fairly close to your result. @user64494 – flinty Nov 11 '23 at 15:47
  • My weak computer is not equipped with an NVIDIA card at all. Hope I'm clear. – user64494 Nov 11 '23 at 17:01
  • You can use TargetDevice->"CPU", however I've also added an alternative approach above which similarly relies on random sampling to cut down the evaluations. @user64494 – flinty Nov 11 '23 at 18:07
  • With TargetDevice->"CPU" I obtain "LibraryFunction::load: The library C:\Users\Maryan\AppData\Roaming\Mathematica\Paclets\Repository\MXNetResources-WIN64-13.3.20\LibraryResources\Windows-x86-64\zlibwapi.dll cannot be loaded." – user64494 Nov 11 '23 at 19:21
  • Your second approach is fast but imprecise. Even f[a_?NumericQ, b_?NumericQ, c_?NumericQ] := Sum[RealAbs[Sin[x] - a*x^2 - b*x - c], {x, RandomReal[{0, 5}, 10000]}, Method -> "Procedural"] NMinimize[{Evaluate[f[a, b, c]], RealAbs[a] <= 20 && RealAbs[b] <= 20 && RealAbs[c] <= 20}, {a, b, c}, MaxIterations -> 500, Method -> {"DifferentialEvolution", "PostProcess" -> False}] // Timing results in {117.625, {1905.93, {a -> -0.239461, b -> 0.740007, c -> 0.257833}}}. – user64494 Nov 11 '23 at 19:39

Have you considered using Orthogonalize for this? It finds an orthogonal basis for the powers of $x$ over $[0,5]$ and then projects $\sin$ onto it to obtain the coefficients in that basis. The result looks the same, and it is nearly instantaneous in the quadratic case:

innerProduct[f_, g_] := Integrate[f g, {x, 0, 5}]
basis = Orthogonalize[Table[x^k, {k, 0, 2}], innerProduct];
result = 
 Sum[Projection[Sin[x], basis[[i]], innerProduct], {i, 
   Length[basis]}]
Plot[{result, Sin[x]}, {x, 0, 5}]

[plot of the projection result together with Sin[x] on [0, 5]]
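For reference, Projection here computes the terms of the standard orthogonal expansion: with the inner product defined above and the orthogonal basis $\{e_k\}$ returned by Orthogonalize, the best $L^2$ approximation is

$$p(x) \;=\; \sum_{k=0}^{2} \frac{\langle \sin, e_k\rangle}{\langle e_k, e_k\rangle}\, e_k(x), \qquad \langle f, g\rangle = \int_0^5 f(x)\,g(x)\,dx.$$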

You get exact coefficients:

CoefficientList[result, x]

(* {2/5 Sin[5/2]^2 + 3/25 (5 + 5 Cos[5] - 2 Sin[5]) + 1/25 (13 - 13 Cos[5] + 30 Sin[5]), -(6/ 125) (5 + 5 Cos[5] - 2 Sin[5]) - 6/125 (13 - 13 Cos[5] + 30 Sin[5]), 6/625 (13 - 13 Cos[5] + 30 Sin[5])} *)

(* {0.365393, 0.533721, -0.186771} *)


This is $L_2$ only, though, and unfortunately I don't think you can define an inner product that induces the $L_1$ norm, as it would violate the parallelogram law.
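To spell that last point out: any norm induced by an inner product must satisfy the parallelogram law $\|f+g\|^2 + \|f-g\|^2 = 2\|f\|^2 + 2\|g\|^2$, and the $L_1$ norm fails it. A quick counterexample on $[0,1]$, with $f$ and $g$ the indicator functions of $[0,\tfrac12]$ and $[\tfrac12,1]$ (so $\|f\|_1 = \|g\|_1 = \tfrac12$ and $\|f+g\|_1 = \|f-g\|_1 = 1$):

$$\|f+g\|_1^2 + \|f-g\|_1^2 = 1 + 1 = 2 \;\neq\; 1 = 2\|f\|_1^2 + 2\|g\|_1^2.$$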

flinty
  • +1. Many thanks for your interest in the question and for your work. All of that works well in $L^2$, but I am interested in the general case $L^p$, $p \ge 1$. – user64494 Nov 11 '23 at 17:06
  • This is a touchstone for global optimization, which is slow in the case of many variables, say a dozen. – user64494 Nov 11 '23 at 17:14