
I was looking for the fastest-converging method to integrate a family of functions. After some attempts, an old-school colleague suggested a method that he used to use in Excel for this task.

It relies on a simple procedure:

  1. sample the function at log-spaced points such that $x_i = x_0 \alpha^i$ for $i = 0 \dots N$;
  2. compute the values $\hat{y}_i = x_i f(x_i)$;
  3. compute the integral as $\int f(x)\,dx \approx \log(\alpha)\sum_i{\hat{y}_i}$ (see the sketch right after this list).
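For concreteness, here is a minimal sketch of the three steps in Python (the function, bounds, and number of points in the usage example are placeholders of my choosing):

```python
import numpy as np

def log_rect_integral(f, x0, x1, n):
    """Sketch of the 'magic' rule: log-spaced samples, summed and scaled by log(alpha)."""
    alpha = (x1 / x0) ** (1.0 / n)       # step 1: constant ratio between samples
    x = x0 * alpha ** np.arange(n + 1)   # log-spaced points x_i = x0 * alpha**i
    y_hat = x * f(x)                     # step 2: y_hat_i = x_i * f(x_i)
    return np.log(alpha) * y_hat.sum()   # step 3: log(alpha) * sum(y_hat)

# example: integrate exp(-x) on [1e-3, 1e2]; the exact value is ~0.999
print(log_rect_integral(lambda x: np.exp(-x), 1e-3, 1e2, 1000))
```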

This "trick" is based on the following equation:

$$ \int_{x_0}^{x_N} f(x)\,dx = \int_{x_0}^{x_N} x f(x)\,\frac{dx}{x} = \int_{x_0}^{x_N} x f(x)\,d(\log x) $$

Since the $x_i$ are logarithmically spaced, the spacing in $\log x$ is constant:

$\log(x_{i+1}) - \log(x_i) = \log(\alpha) = \mathrm{const.}$

Therefore, applying the trivial rectangle rule,

$\int f(x)\,dx \approx \sum_i{x_i f(x_i)\left(\log(x_{i+1})-\log(x_i)\right)} = \log(\alpha)\sum_i{\hat{y}_i}$
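To see that the last equality is just the constant $\log(\alpha)$ factored out of the sum, one can check numerically that $\log(x_{i+1}) - \log(x_i)$ is indeed constant on log-spaced points (a small self-contained check; the bounds and test function are arbitrary choices of mine):

```python
import numpy as np

x = np.logspace(-3, 2, 1000)            # log-spaced evaluation points
dlogx = np.diff(np.log(x))              # spacing in log(x)
print(np.allclose(dlogx, dlogx[0]))     # True: d(log x) = log(alpha) is constant

xy = x * np.exp(-x)                     # y_hat_i = x_i * f(x_i) for f(x) = exp(-x)
lhs = np.sum(xy[:-1] * dlogx)           # rectangle rule with element-wise widths
rhs = dlogx[0] * np.sum(xy[:-1])        # constant log(alpha) factored out
print(np.isclose(lhs, rhs))             # True: the two forms are identical
```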

Now... at first glance this looks like a trick to perform rectangle-rule integration with logarithmically spaced points while avoiding the element-wise multiplication by the bin widths, by imposing $\Delta(\log x) = \log(\alpha) = \mathrm{const.}$

However, when I tested this against more standard rectangle rules, its convergence was still much faster.

The question therefore is: why is this method so fast, and how does it differ from a normal rectangle rule with log-spaced evaluation points?

To support my claim, here are the results of a few tests I ran.

With $f(x) = \exp(-x)$:

[figure: absolute error vs. number of integration points for $\exp(-x)$]

Since $\exp(-x)$ could be a very special case in conjunction with the logarithmically spaced points, I also tried a less "special" function (one that is also of the type I need to integrate): $f(x) = \frac{1}{\sqrt{\left(1-\left(\frac{x}{4}\right)^2\right)^2 + \left(2\cdot0.05\cdot\frac{x}{4}\right)^2}}$. (Some of you might recognize this as the dynamic amplification factor of a resonant system with natural frequency equal to 4 and damping ratio equal to 5%.)
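For reference, the value of this integral over the window $[10^{-3}, 10^2]$ used in my tests can be obtained with adaptive quadrature; a sketch with `scipy.integrate.quad` (telling it about the resonance peak at $x = 4$) should reproduce the `true_int` value hard-coded in the script further below:

```python
import numpy as np
from scipy.integrate import quad

f0, xi = 4, 0.05  # natural frequency and damping ratio
f = lambda x: 1 / np.sqrt((1 - (x/f0)**2)**2 + (2*xi*x/f0)**2)

# 'points' flags the resonance at x = f0 so quad refines around it
val, abserr = quad(f, 1e-3, 1e2, points=[f0], limit=200)
print(val)  # ~36.5462..., used as true_int in the convergence script
```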

[figures: the mechanical admittance function and the absolute error of each method]

Despite a less clean and non-monotonic convergence, the "magic" formula still converges faster than the rectangle rule with the same evaluation points.

Why does this happen?

Note: I initially thought this was related to this question (Integral in log-log space), but I'm now starting to think that it is quite a different topic.

EDIT 1:

Following the requests of @Maxim Umansky and @BlaB, here are the convergence plots in log-log scale for the two examples. It looks to me like the four methods converge with a similar power law, but the "magic" one starts from a much lower error. (Note that the evaluation points for the second to fourth curves are the same.)

$f(x) = \exp(-x)$

[figure: log-log convergence for $f(x) = \exp(-x)$]

$f(x) = \frac{1}{\sqrt{\left(1-\left(\frac{x}{4}\right)^2\right)^2 + \left(2\cdot0.05\cdot\frac{x}{4}\right)^2}}$

[figure: log-log convergence for the admittance function]

EDIT 2:

Please find below the Python 3 code I used to generate these plots:

```python
import numpy as np
import matplotlib.pyplot as plt

fun_name = 'admittance'

if fun_name == 'exp':
    fun = lambda x: np.exp(-x)
    true_int = 1
elif fun_name == 'admittance':
    f0 = 4      # natural frequency
    xi = 0.05   # damping ratio
    fun = lambda x: 1/np.sqrt((1 - (x/f0)**2)**2 + (2*xi*x/f0)**2)
    true_int = 36.546222700939126

N = np.unique(np.round(np.logspace(1, 8, 1000)))

#%% plot function
plt.figure()
x = np.linspace(0.001, 100, 200)
y = fun(x)
plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('f(x)')

#%% open new figure
plt.figure()

#%% Rectangle w/ linspaced x
err = []
for n in N:
    x = np.linspace(0.001, 100, int(n))
    y = fun(x)
    dx = x[1] - x[0]
    err.append(np.sum(y)*dx - true_int)

plt.loglog(N, np.abs(err), label='Rectangle w/ linspaced x')

#%% Rectangle w/ logspaced x
err_left = []
err_right = []
for n in N:
    x = np.logspace(-3, 2, int(n))
    y = fun(x)
    dx = np.diff(x)
    err_left.append(np.sum(y[:-1]*dx) - true_int)
    err_right.append(np.sum(y[1:]*dx) - true_int)

plt.loglog(N, np.abs(err_left), label='Rectangle w/ logspaced x (left)')
plt.loglog(N, np.abs(err_right), label='Rectangle w/ logspaced x (right)')

#%% the "magic" rule: log(alpha) * sum(x*f(x))
err_ACA = []
for n in N:
    x = np.logspace(-3, 2, int(n))
    y = fun(x)
    xy = x*y
    dlogx = np.log(x[1]) - np.log(x[0])
    err_ACA.append(np.sum(xy) * dlogx - true_int)

plt.loglog(N, np.abs(err_ACA), label=r'$\log(\alpha)\sum{xf(x)}$')

#%% finalize plot
plt.legend()
plt.xlabel('Number of integration points')
plt.ylabel('|Error|')
plt.show()
```

Comments:
  • We need to see log(Error) vs. the log of the number of points; then we can see the asymptotic rate, which is what matters. If the distribution of integration points is optimized for a particular kind of integrated function, that should not change the asymptotic convergence rate. – Maxim Umansky Apr 06 '21 at 16:18
  • You really need to show a convergence plot. Is the order of convergence altered? – BlaB Apr 07 '21 at 13:06
  • @MaximUmansky I added the log-log plot of the error as you asked. The first one makes me suspect that $f(x) = \exp(-x)$ is basically the optimal case for this method. However, the reason for this is not 100% clear to me, as the rectangle rule should not match $f(x) = x \exp(-x)$ exactly. – Luca Apr 07 '21 at 13:20
  • @BlaB Is this the plot you were asking for? Or do you mean something else by "convergence plot"? – Luca Apr 07 '21 at 13:20
  • @Luca Ok, now we understand it, no mysteries here. The order of convergence is the same in all cases. But if you know the overall shape of the integrated function it is certainly possible to optimize the distribution of integration points. Think about the case of a function which is nonzero in [0,1] and zero everywhere else - where would you put your integration points? – Maxim Umansky Apr 07 '21 at 14:52
  • @MaximUmansky I totally understand your reasoning, but why does the rectangle rule using the SAME integration points perform much worse? I also tried the trapezoidal rule, but the story doesn't change: to reach the same level of accuracy I need a much larger number of integration points. – Luca Apr 07 '21 at 15:09
  • @Luca That's a very good question to which, regrettably, I do not have the answer. The order of convergence really is the key quantity to look at here; it shows that all methods are asymptotically equivalent. I'm not sure about the rest :). – BlaB Apr 07 '21 at 17:54
  • @Luca Can you post here your code showing the convergence of the integral? This looks like a curious problem, but before spending time on it I'd like to see a demonstration of it. – Maxim Umansky Apr 08 '21 at 04:52
  • @MaximUmansky Code posted :) – Luca Apr 08 '21 at 08:57
  • NOTE!!! I noticed a typo in the first equation that was missing a "/x". Sorry for this – Luca Apr 08 '21 at 08:57
