
I asked this question on the Computer Science Stack Exchange (https://cs.stackexchange.com/questions/128710/faster-computation-of-ke-x-h2), but it appears to be more appropriate for the Computational Science Stack Exchange.

Essentially, I want to compute $$f(x) =\sum^n_{i = 0} k_ie^{-(x - h_i)^2},$$ where $n \geq 0$ and the $k_i$ and $h_i$ are real numbers, for various $x.$ On average, I would expect $x$ to lie between the minimum and maximum $h_i,$ $x \in (\epsilon + \min h_i, \epsilon + \max h_i).$

I want to compute this sum without having to call $\exp(x)$ repeatedly. Is there a way to compress this series?

If it boils down to approximating $\exp(x),$ then I would like to note that polynomial approximations will not work.
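For concreteness, the direct evaluation the question starts from can be sketched as follows (the function name and the NumPy vectorization are my own choices, not from the question; this is the $\mathcal{O}(n)$-exp-calls-per-point baseline being asked about):

```python
import numpy as np

def f(x, k, h):
    """Direct evaluation of f(x) = sum_i k_i * exp(-(x - h_i)^2).

    k, h : 1-D sequences of equal length (the given data).
    x    : scalar or 1-D array of evaluation points.

    Each evaluation point costs n exponential evaluations; this is
    the cost the question hopes to reduce.
    """
    x = np.atleast_1d(np.asarray(x, dtype=float))
    h = np.asarray(h, dtype=float)
    k = np.asarray(k, dtype=float)
    # Pairwise differences x_i - h_j via broadcasting: shape (m, n).
    d = x[:, None] - h[None, :]
    return np.exp(-d**2) @ k
```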

  • Is it assumed that $k_i$ and $h_i$ are two arbitrary given series? – Maxim Umansky Jul 27 '20 at 17:43
  • Could you elaborate on your comment that polynomial approximations won't work for you? Is this a fact, or can we discuss it? That may expand the range of good answers. I skimmed the other topic, and there are already some good answers there in terms of programming. – ConvexHull Jul 27 '20 at 19:02
  • We can consider other approximations (such as Pade's) but polynomial approximations will definitely not work for my application. – Venkataram Sivaram Jul 27 '20 at 19:39
  • @MaximUmansky yes, $k_i$ and $h_i$ are arbitrary sequences that are given prior to computing $f.$ – Venkataram Sivaram Jul 27 '20 at 22:39
  • I have two questions. Do you know (approximately) how large your $n$ will be? If this is a quadrature that you are doing, i.e. if the $k_i$ are weights and $f(x)$ is the integral, then I think you might actually be able to solve the integral analytically. Squinting, this looks like a Gaussian kernel, and there should be analytical solutions for the integral. Have you tried that? – MPIchael Jul 31 '20 at 09:47
  • There is no direct restriction on $n$; a particular application may impose one. In general, $n$ is the size of a data set $D$ containing $n$ pairs $(k_i, h_i).$

    For your second question, could you perhaps elaborate on how to solve for $f$?

    – Venkataram Sivaram Jul 31 '20 at 22:03

1 Answer


Depending on how large $n$ can get and how many evaluation points $x$ you wish to use, this summation problem is well-suited to the use of fast multipole methods (FMMs); for instance, see the black-box FMM, which only requires you to tell it what kernel function you want to use. In your case, it's a simple Gaussian kernel.

– smh

  • Could you explain how to use FMMs here? From what I have read, they reduce a computation requiring $N^2$ operations to one needing only $N.$ In my case, however, it already takes only $N$ operations... – Venkataram Sivaram Jul 28 '20 at 01:54
  • The FMM is a fast algorithm for accelerating sums of the form $b_i=\sum_{j=1}^{N} A_{ij} y_{j}$, where $i=1,\ldots,M$. Naive (direct) evaluation requires $\mathcal{O}(MN)$ flops, whereas the FMM can do this to arbitrary precision in $\mathcal{O}(M+N)$ flops, with an increasing constant as you dial up the accuracy. In your case, the $b_i$'s would be values of $f$ sampled at $M$ points $x_i$, $A_{ij}=e^{-(x_i-h_j)^2}$, and $y_j=k_j$. I only recommend the use of FMM if $N$ and $M$ are large, e.g. more than a few thousand. – smh Jul 28 '20 at 12:20
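The matrix form described in that comment can be sketched directly in NumPy (the sizes, the random data, and all variable names here are illustrative assumptions; this is the direct $\mathcal{O}(MN)$ product that an FMM would accelerate, not an FMM itself):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 5, 7                       # number of evaluation points and source terms
x = rng.uniform(-1.0, 1.0, M)     # evaluation points x_i
h = rng.uniform(-1.0, 1.0, N)     # given centers h_j
k = rng.uniform(-1.0, 1.0, N)     # given weights k_j (the y_j above)

# Direct O(MN) evaluation: A_ij = exp(-(x_i - h_j)^2), b = A k.
A = np.exp(-(x[:, None] - h[None, :])**2)
b = A @ k                         # b_i = f(x_i)
```

An FMM replaces the explicit formation of $A$ and the matrix-vector product with a hierarchical low-rank approximation, which is why it only pays off once $M$ and $N$ are in the thousands.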