Consider the following data:
dat = {{0.00001, 7000.}, {0.0001, 7000.}, {0.001, 7000.},
   {0.002, 6999.999999999987}, {0.0045000000000000005, 6999.999999999987},
   {0.008, 6999.999999999987}, {0.02, 6999.999999999987},
   {0.045, 6999.999999999987}, {0.08, 4313.535758081632},
   {0.15000000000000002, 3853.703772545988}, {0.30000000000000004, 3128.349789547097},
   {0.5, 2800.9698445932686}, {0.7, 2437.7100097749117},
   {0.9, 2363.8265106983717}, {1.1, 2393.576853673258},
   {1.2999999999999998, 2383.170945978633}, {1.4849999999999999, 2302.8502915536897}};
I would like to build the interpolation int[x] from this data and then evaluate it over a set of points xrand, i.e. obtain the list of values int[x] for each x in xrand. This is my code:
xrand = RandomReal[MinMax[dat[[All, 1]]], 10^6];
int[x_] = Interpolation[dat, InterpolationOrder -> 1][x];
finaldata = int[xrand]; // AbsoluteTiming
However, it is slow:
{1.04,Null}
The bottleneck is the slow evaluation of int. In principle, instead of interpolating, I could fit dat with, e.g., NonlinearModelFit. However, that approach cannot be automated for an arbitrary dataset, because the model function passed to NonlinearModelFit would have to be chosen anew for each dataset. Therefore, I am interested in speeding up the evaluation of the interpolated function itself.
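For reference, a minimal variant of the setup above that avoids the pattern-based down-value wrapper int[x_], on the assumption that part of the overhead comes from that extra layer of evaluation: store the InterpolatingFunction returned by Interpolation once, and apply it to the whole array in a single listable call.

    (* build the InterpolatingFunction once; no pattern-based wrapper *)
    if = Interpolation[dat, InterpolationOrder -> 1];

    (* apply it to the entire array in one listable call *)
    finaldata = if[xrand]; // AbsoluteTiming

This is only a sketch of one possible direction (the symbol if is introduced here for illustration), not a claim that it resolves the timing reported above.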
Is there any way to improve the performance?