This is a bit out there, but you can actually do this using the neural network facilities, which opens up the possibility of using a GPU:
Remove["Global`*"];
f = Sin[#] &; (* our target *)
g = {1, #, #^2} &; (* various functions to sum with learned weights to approximate f *)
net = NetGraph[
   {
    FunctionLayer[g, "Input" -> "Real", "Output" -> Length[g[0]]],
    NetArrayLayer["Output" -> Length[g[0]]],
    DotLayer[],
    FunctionLayer[-f[#] &],
    TotalLayer[],
    FunctionLayer[RealAbs]
   },
   {NetPort["Input1"] -> 1, 1 -> 3, 2 -> 3, NetPort["Input1"] -> 4,
    4 -> 5, 3 -> 5, 5 -> 6}];
net = NetInitialize[net];
training = # -> 0 & /@ RandomReal[{0, 5}, 100000];
net = NetTrain[net, training, TargetDevice -> "GPU", BatchSize -> 256];
coeffs = NetExtract[net, 2][];
Plot[{f[x], coeffs . g[x]}, {x, 0, 5}]
The resulting coefficients coeffs are the optimized {c, b, a} of your quadratic, so the polynomial is given by the dot product coeffs . {1, x, x^2}.
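As a quick sanity check (assuming the net has trained as above; the helper name approx is just for illustration), you can form the quadratic explicitly and ask for its worst-case deviation from the target on the interval:

approx[x_] := coeffs . {1, x, x^2};  (* c + b x + a x^2 with the learned weights *)
NMaxValue[{RealAbs[Sin[x] - approx[x]], 0 <= x <= 5}, x]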
You may wonder where the integration has gone. I believe the gradients are accumulated over the whole batch, so the overall change to the parameters is based on evaluating the loss at many random points in the domain, not just one at a time. In this way, batching acts like Monte Carlo integration of the loss, followed by a parameter update that lowers the loss over the whole batch.
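To make that concrete: the mean loss over a large random sample is, up to the length of the interval, a Monte Carlo estimate of the corresponding integral. A rough check, reusing coeffs and g from above (the names mc and int are just for illustration):

mc = Mean[RealAbs[Sin[#] - coeffs . g[#]] & /@ RandomReal[{0, 5}, 10^5]];
int = NIntegrate[RealAbs[Sin[x] - coeffs . g[x]], {x, 0, 5}]/5;
{mc, int}  (* the two estimates should agree to a few digits *)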
You could also skip the integration entirely and instead sample at random points. This cuts the runtime considerably at the cost of some accuracy. It is somewhat similar to the neural method above, but uses global optimization instead and only requires a small adjustment to your code:
f[a_?NumericQ, b_?NumericQ, c_?NumericQ] :=
Sum[RealAbs[Sin[x] - a*x^2 - b*x - c], {x, RandomReal[{0, 5}, 500]},
Method -> "Procedural"]
{err, sol} = Quiet[
   NMinimize[
    {Evaluate[f[a, b, c]],
     RealAbs[a] <= 20 && RealAbs[b] <= 20 && RealAbs[c] <= 20},
    {a, b, c},
    MaxIterations -> 100,
    Method -> {"DifferentialEvolution", "PostProcess" -> False}],
   NMinimize::cvmit]
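To inspect the sampled fit, you can substitute the returned rules back into the quadratic and plot it against the target (using the {err, sol} from the call above):

Plot[{Sin[x], a*x^2 + b*x + c /. sol}, {x, 0, 5},
 PlotLegends -> {"Sin[x]", "sampled fit"}]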
Extending to other norms is easy too: just define f like this, where p is a number, or $\infty$ for the $L_\infty$ max norm:
f[a_?NumericQ, b_?NumericQ, c_?NumericQ] :=
Norm[Sin[#] - a*#^2 - b*# - c & /@
RandomReal[{0, 5}, 500], p]
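For example, to fit in the max norm you would set p before running the same NMinimize call as above; a sketch (any p >= 1 works the same way):

p = Infinity;  (* or 1, 2, 4, ... *)
{err, sol} = Quiet[
   NMinimize[
    {f[a, b, c], RealAbs[a] <= 20 && RealAbs[b] <= 20 && RealAbs[c] <= 20},
    {a, b, c},
    MaxIterations -> 100,
    Method -> {"DifferentialEvolution", "PostProcess" -> False}],
   NMinimize::cvmit]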
Comments:

– cvgmt (Nov 10 '23 at 08:03): F[a_?NumericQ, b_?NumericQ, c_?NumericQ] := NIntegrate[(Sin[x] - a*x^2 - b*x - c)^2, {x, 0, 5}]; sol = NMinimize[{F[a, b, c], a^2 <= 20^2 && b^2 <= 20^2 && c^2 <= 20^2}, {a, b, c}]

– user64494 (Nov 10 '23 at 08:28): {49.0156, {0.259964, {a -> -0.186771, b -> 0.533721, c -> 0.365393}}} differs.

– Mariusz Iwaniuk (Nov 11 '23 at 16:33): For the L^2 metric: f[a_?NumericQ, b_?NumericQ, c_?NumericQ] := NIntegrate[(Sin[x] - a*x^2 - b*x - c)^2, {x, 0, 5}]; NMinimize[{Evaluate[f[a, b, c]]}, {a, b, c}, AccuracyGoal -> 5, PrecisionGoal -> 5, Method -> "NelderMead"] // Timing takes only 2 seconds.

– Daniel Lichtblau (Nov 11 '23 at 23:14): FindMinimum with starting values given by the parabola that is 0 at 0 and Pi and 1 at Pi/2: Timing[FindMinimum[f[a, b, c], {a, -4/Pi^2}, {b, 4/Pi}, {c, 0}]] issues FindMinimum::lstol (the line search could not find a sufficient decrease within the AccuracyGoal/PrecisionGoal tolerances; more than MachinePrecision working precision may be needed) and returns {15.1358, {0.945758, {a -> -0.22899, b -> 0.700517, c -> 0.278367}}}.

– Xminer (Nov 12 '23 at 11:25): NIntegrate? Is this not enough? x = Table[i, {i, 0, 5, 0.05}]; NMinimize[{Sqrt[Total@Power[Abs[(Sin[x] - a*x^2 - b*x - c)], 2]], RealAbs[a] <= 20 && RealAbs[b] <= 20 && RealAbs[c] <= 20}, {a, b, c}, AccuracyGoal -> 5, PrecisionGoal -> 5] // Timing