
Are there any "good" low-computational-cost approximations for parameterized Kaiser window generation suitable for small systems/languages that do not include Bessel functions in their standard library?

(maybe only to 8- to 12-bit accuracy, and something that can be run in BASIC on an Apple II, et al.)

hotpaw2
  • I don't know how good this technique is, but CORDIC has been adapted to compute the Kaiser window: https://link.springer.com/article/10.1007%2Fs11265-013-0781-z – MBaz Feb 18 '17 at 21:59
  • Apple II? Computation-wise, a LUT will probably be the lowest-cost approach; only the storage is a concern. Or maybe you can truncate the polynomial Bessel series with enough terms for 12-bit accuracy. – Fat32 Feb 18 '17 at 23:46
  • Kaiser is an approximation of the DPSS (discrete prolate spheroidal sequence) a.k.a. Slepian window. So do you need Kaiser in itself or would you accept something with similar performance? – Olli Niemitalo Mar 04 '17 at 08:49

2 Answers


just implement the Modified Bessel function. it's easy.

i always like my window definitions centered about zero, since pretty much all of them have even symmetry.

i'll do this in discrete-time, but it's essentially the same thing in continuous-time.

Kaiser window:

$$ w[n] \triangleq \begin{cases} \frac{1}{I_0(\beta)} I_0\left(\beta \sqrt{1 - \left(\frac{n}{M/2}\right)^2} \right) \quad \quad & |n| \le M/2 \\ 0 & |n|>M/2 \\ \end{cases} $$

$I_0(x)$ is the 0th-order modified Bessel function of the 1st kind. $M+1$ is the number of non-zero samples or FIR taps (the FIR filter order is $M$ and, in my centered and symmetrical case, must be even). $\beta$ is a "shape parameter" and O&S recommend this heuristic:

$$ \beta = \begin{cases} 0.1102 \cdot (A-8.7) & A>50 \\ 0.5842 \cdot (A-21)^{2/5} + 0.07886 \cdot (A-21) \quad & 21 \le A \le 50 \\ 0.0 & A<21 \\ \end{cases}$$

$$ M = 2 \left\lceil \frac{A-8}{4.57 \cdot \Delta\omega} \right\rceil $$

$A$ is the desired stopband attenuation in dB and $\Delta\omega$ is the desired width of the transition band in normalized angular frequency.

finally, the Bessel is evaluated as:

$$ I_0(x) = 1 \ + \ \lim_{K \to \infty} \ \sum\limits_{k=1}^{K} \left(\frac{x^2}{4}\right)^{k} (k!)^{-2} $$

when you evaluate this with a computer, pick a $K$ decently large (my guess is that $K=32$ is good enough) and evaluate the summation starting with $k=K$ and work it backwards to $k=1$ to keep numerical accuracy. you might want to use Horner's method.

$$ I_0(x) \approx 1 + x^2\left( \tfrac{1}{(1!)^2 \, 4^1} + x^2\left(\tfrac{1}{(2!)^2 \, 4^2} + x^2\left(... + \, x^2\left(\tfrac{1}{((K-1)!)^2 \, 4^{K-1}} + x^2 \tfrac{1}{(K!)^2 \, 4^K} \right) \right) \right) \right) $$

you can evaluate all of the $(k!)^{-2}$ in advance with a short table.


Alright, someone made me do some work. This took about 45 minutes to code up and debug. So here is my MATLAB code for implementing the 0th-order modified Bessel function of the first kind (which is $I_0(x)$ above):

function y = mybessel(x)
%
%   Computes the 0th-order Modified Bessel function of the first kind
%

    K = 32;

    bessel_coef =  zeros(1,K);

    kfac = 1;
    two_to_the_k = 1;

    for k = 1:K
        kfac = kfac * k;
        two_to_the_k = two_to_the_k * 2;
        bessel_coef(k) = 1/(kfac*two_to_the_k)^2;   % compute power series coefficients in advance
    end

    x = x.^2;

    y = x .* bessel_coef(K);
    for k = K-1:-1:1
        y = x .* (bessel_coef(k) + y);              % Horner's method
    end

    y = 1 + y;

end

and here is the test code:

x = linspace(-16, 16, 32*4096+1);

I_0 = besseli(0, x);            % MATLAB's modified bessel

y = mybessel(x);                % my bessel

figure(1)
plot(x, I_0)
hold on
plot(x, y)
hold off

figure(2)
plot(x, y - I_0)                % plot error

with results:

[plot: MATLAB's besseli(0,x) and mybessel(x) overlaid]

and error:

[plot: the error y - I_0 over the same range]

for $|x| \le 16$, the relative error is less than $10^{-15}$. the error increases with increasing $|x|$.

robert bristow-johnson
  • I tried the Horner's method from above to calculate a Bessel approximation. It works great for small numbers < 1.0. But my beta at 90 dB is 8.959. When I calculate the Bessel approximation for numbers that large then it gets pretty wild. Here are Bessel values for K = 7,8,9.

    K=7: -4100.13715,  K=8: 1284.70023,  K=9: -318.22047

    – philburk Aug 17 '19 at 17:42
  • well @philburk , of course the values of $\left(-\frac{x^2}{4}\right)^{k}$ will be increasing with increasing $k$, but they are divided by $(k!)^2$ which i think is increasing even faster. so the terms should start to converge. – robert bristow-johnson Aug 17 '19 at 20:40
  • If x >= k then the numerator is significantly larger than the denominator. My x is 8.959, so a K=8 is not close to converging. I found that a K of 16 was needed for the direct summation method, and a K of 18 was needed for the Horner's method. – philburk Aug 19 '19 at 16:41
  • Another thing I find puzzling. This Bessel function has three zero crossings before it reaches x=8.959. That means the Kaiser window will also have three zero crossings. Also what happens if i0(beta) is zero, or very close to zero? – philburk Aug 19 '19 at 17:15
  • @philburk Something must be wrong, because Io(0)=1 and the function keeps on going up, there should not be any zero crossings. – a concerned citizen Aug 20 '19 at 06:49
  • Io(0)=1 and the function keeps on going up,

    Your implementation does keep going up. But RBJ's implementation does not. It matches the wiggly curve for order zero on this page. http://mathworld.wolfram.com/BesselFunctionoftheFirstKind.html

    Is there more than one "zero order Bessel function of the first kind"? How can yours and RBJ's results be so completely different?

    – philburk Aug 20 '19 at 14:43
  • i don't understand how this diverges, @philburk . for any real and finite $x$, $(k!)^2$ increases at a faster rate than $x^{2k}$ as $k \to \infty$, does it not? maybe i should take this to the math SE. – robert bristow-johnson Aug 20 '19 at 18:59
  • the other thing to do is to *fix* $x$ at some finite value and let $k \to +\infty$. don't say "if $x \ge k$" because for any finite $x$ that is not going to be true for sufficiently large $k$. and Horner's method should come out the same as the direct summation for the same $K$ except for rounding errors due to finite word size. – robert bristow-johnson Aug 20 '19 at 19:03
  • As k goes to infinity, yes, the terms approach zero. But if k is small and x is close to k then the terms are very large. You said 8 "may be good enough", But for x=9, I need a K of about 16 to converge. 8 is too small. – philburk Aug 21 '19 at 00:12
  • okay, i got it. i'm having trouble remembering, but i think that for the Kaiser window i was thinking of a smaller $\beta$, but i think that a $\beta \approx 9$ is reasonable, perhaps more. that means $K$ needs to be a lot larger than 8. i think i was assuming, perhaps $|x| \le 1$ when it's really $|x| \le \beta$ and $\beta$ is not so small. – robert bristow-johnson Aug 21 '19 at 00:47
  • @robertbristow-johnson There's a small example in my answer, for As=120dB => beta~12.27, and for the response to be closely matched, in double precision, you'd need (IIRC) at least 18 terms for the series. The 18th term is 101687854974781436930691552659275933286400000000 (the 10th being 471327827215183563813027840000, awful accuracy). Granted, cosh() is not computationally cheap, either, but it seemed to be faster than the "Hornerized" polynomial, at least in LTspice, maybe because of the large numbers (truncation sacrifices accuracy, but who knows how they are truncated internally). – a concerned citizen Aug 21 '19 at 16:20
  • @aconcernedcitizen , i am showing the MATLAB code that implements the Bessel function above. it doesn't look too bad. – robert bristow-johnson Aug 21 '19 at 21:33
  • @philburk , i am showing the MATLAB code that implements the Bessel function above. it doesn't look too bad. – robert bristow-johnson Aug 21 '19 at 21:34
  • hey guys, i fucked up. i just learned that the word "Modified" means something. the toggling sign $(-1)^k$ had to be removed. That'll teach you never to trust anything i say. – robert bristow-johnson Aug 21 '19 at 23:15
  • Ahah! I removed the toggling (-1)k and now I get the same result from RBJ's code and also from "concerned citizen"'s code. bessel(8.959) = 1052.1331021. The curves also look the same. The bessel function now goes up and has no zero crossings. This also resolves my concern about the Kaiser window using a bessel(beta) in the denominator because it can be zero. Was the original code wrong or just for a different kind of bessel function? The original toggling equation matched the pictures I see of bessel curves. (BTW, I still trust you because of the 99 other true things you have taught me.) – philburk Aug 21 '19 at 23:38
  • sorry about the fuckup, @philburk . i just looked up the wrong function in the Wikipedia reference. – robert bristow-johnson Aug 21 '19 at 23:41
  • "//Was the original code wrong or just for a different kind of bessel function? The original toggling equation matched the pictures I see of bessel curves."// --- it was a different kind of bessel function. there is a difference between the Bessel Function of the First Kind, $J_\alpha(x)$, and the Modified Bessel Function of the First Kind, $I_\alpha(x)$. (and in this Kaiser Window application $\alpha=0$.) – robert bristow-johnson Aug 21 '19 at 23:50
  • @robertbristow-johnson The code looks nice, the results, too, but if OP is still after low bit precision, I still think the better alternative is the exponential or the hyperbolic cosine window: they're both parametric, come close to Kaiser, and for great attenuations (which shouldn't be that great due to bit limit) they're computationally cheaper. Unless you really like polynomials. :-) – a concerned citizen Aug 22 '19 at 14:59

You could try the exponential window:

$$w_n=\frac{ \exp \left[\alpha \sqrt{1-\left(\frac{n-M}{M}\right)^2}\right]}{\exp(\alpha)}$$ $$\alpha=-427.5\cdot 10^{-6}\,A_s^2+0.1808\,A_s-3.516$$

or the hyperbolic cosine window:

$$w_n=\frac{\cosh \left[ \alpha \sqrt{1-\left(\frac{n-M}{M}\right)^2} \right]}{\cosh(\alpha)}$$ $$\alpha=-325.1\cdot 10^{-6}\,A_s^2+0.1677\,A_s-3.149$$

Both give results similar to each other and to the Kaiser window, while being less computationally intensive.

If you want to go with Robert's answer, you could simplify the Bessel function by using Horner's method, or by applying Gauss–Kronrod (or some other) numerical integration to its integral representation. For example, I got this formula (after some reductions):

$$f(x)=\frac{1}{13}\left[ \cosh(x) + 2\left(\cosh(0.970941817426052\,x) + \cosh(0.8854560256532099\,x) + \cosh(0.7485107481711011\,x) + \cosh(0.5680647467311558\,x) + \cosh(0.3546048870425356\,x) + \cosh(0.120536680255323\,x)\right) \right]$$

For $\beta=12.26526$ ($A_s=120\,\text{dB}$), this gives:

$$I_0(\beta)=24430.40185694905 \qquad f(\beta)=24430.4018627702$$

which, IMHO, is pretty good, and it's faster than dealing with the gigantic numbers that show up in the classic formula, either as a direct sum or via Horner's method -- either way, I think even long double would not be enough for 20 terms (which is how I truncated it).

a concerned citizen
  • I implemented your Bessel function f(x) and get values that match the numpy.i0() function in Python. But they seem unrelated to the Bessel values in Robert's code. From Robert's code with K=16 I get values that start at 1 and wiggle down, like: http://mathworld.wolfram.com/BesselFunctionoftheFirstKind.html

    But your function looks more like a parabola than a Bessel function. https://www.oreilly.com/library/view/numpy-15/9781849515306/ch07s32.html

    But maybe that results in a better Kaiser window that does not have zero crossings...

    – philburk Aug 19 '19 at 17:30
  • What I did was to first express the Bessel Io(x) as its integral representation, $\frac{1}{\pi}\int_0^{\pi}\cosh(x\cos\theta)d\theta$, then used that with Gauss-Kronrod (or Gauss-Legendre? don't remember). If you expand Robert's series, you get monstrous numbers, even with Horner, which makes it difficult to preserve accuracy. Also, IIRC, you need some 20 terms for the series expansion to have a decent approximation for $A_s \ge 120$ dB. – a concerned citizen Aug 20 '19 at 06:46
  • I'm curious why your Bessel function looks so different than Robert's. Yours starts at one and goes up like a parabola. Robert's goes down and has zero crossings. Are they different kinds of Bessel functions? Is one better for the Kaiser window. (BTW, I used your hyperbolic cosine window from above and it worked great. Thanks.) – philburk Aug 20 '19 at 18:54
  • @philburk, the Kaiser window is based on the 0th-order Bessel function of the first kind. i think that's well-defined. – robert bristow-johnson Aug 21 '19 at 23:07
  • The exponential and cosh windows have nearly identical Fourier spectra. The cosh might be slightly better but I'm unsure. The same author published both, and the papers are nearly identical but somehow avoid comparisons to each other, even indirectly through known windows. They are marginally worse than the Kaiser but the ease of implementation makes the tradeoff worth it. – Kevin Yin Jun 29 '22 at 16:39
  • @KevinYin Somehow I'm not that much surprised given the trigonometric equivalence. So, from this perspective, the exp window might be "lighter" on the cycles. – a concerned citizen Jun 29 '22 at 20:05
  • The result goes against my intuition; I would expect exp to be better. cosh(x) ~ exp(x)/2 for x>2, but exp(0)/2 is half the size of cosh(0). That means the exp window should have a smaller jump discontinuity at the window edges, and a sinh window would have no discontinuity. But Mathematica could only produce a very approximate Fourier spectrum, so I cannot measure the results. – Kevin Yin Jun 29 '22 at 20:22