5

An $N$-periodic complex discrete-time sequence $[x_0, \dots, x_{N-1}]$ can be resampled to an $M$-periodic sequence $[y_0, \dots, y_{M-1}]$ with $M>N$, using sinc interpolation:

$$\begin{align}y_m &= \sum_{n=-\infty}^\infty \operatorname{sinc}\left(\frac{Nm}{M} - n\right)x_{n\operatorname{mod}N} \\&= \sum_{k=-\infty}^{\infty}\sum_{n=0}^{N-1}\operatorname{sinc}\left(\frac{Nm}{M} - n - Nk\right)x_n\end{align}\tag{1}$$

where $\operatorname{mod}$ denotes the modulo operation and:

$$\operatorname{sinc}(x) = \begin{cases}1&\text{if }x=0,\\\frac{\sin(\pi x)}{\pi x}&\text{otherwise}.\end{cases}\tag{2}$$

Eq. 1 can be seen as resampling an $N$-periodic continuous-time signal from samples $x_n$ at times $n + Nk$ to samples $y_m$ at times $\frac{Nm}{M}$.
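
For concreteness, here is a minimal numerical sketch of Eq. 1 (assuming Python with NumPy, whose `np.sinc` is the normalized sinc of Eq. 2). The helper name, the symmetric truncation point $K$ and the test sequence are arbitrary illustrative choices; the truncation is kept symmetric because, as shown below, the series is at best conditionally convergent.

```python
import numpy as np

def resample_eq1(x, M, K=10000):
    """Truncated version of Eq. 1: N-periodic x resampled to an M-periodic y.

    The sum over k is cut off symmetrically at +/-K; np.sinc matches Eq. 2.
    """
    N = len(x)
    k = np.arange(-K, K + 1)                       # truncated outer sum over k
    y = np.zeros(M, dtype=complex)
    for m in range(M):
        for n in range(N):
            y[m] += x[n] * np.sum(np.sinc(N * m / M - n - N * k))
    return y

print(resample_eq1(np.array([1.0, -1.0]), M=4))    # approximately [ 1, 0, -1, 0 ]
```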

For example, a $2$-periodic complex discrete-time sequence $[x_0, x_1]$ can be resampled to a $4$-periodic sequence $[y_0, y_1, y_2, y_3]$:

$$\text{Eq. 1, }N=2,\,M=4$$ $$\Rightarrow\left\{\begin{align}y_0 &= x_0\\ y_1 &= \sum_{k=-\infty}^\infty\Bigg(\operatorname{sinc}\left(2k+\frac{3}{2}\right)x_0 + \operatorname{sinc}\left(2k+\frac{1}{2}\right)x_1\Bigg)\\ y_2 &= x_1\\ y_3 &= \sum_{k=-\infty}^\infty\Bigg(\operatorname{sinc}\left(2k+\frac{1}{2}\right)x_0 + \operatorname{sinc}\left(2k+\frac{3}{2}\right)x_1\Bigg) \end{align}\right.\tag{3}$$

The two series in Eq. 3 converge conditionally, with for example these possible rearrangements of the first series that give conflicting results if $x_0 \ne x_1$:

$$\begin{gather}\sum_{k=0}^\infty\bigg(f(-k) + f(k+1)\bigg)\\= \frac{x_0 + x_1}{2},\\ \sum_{k=0}^\infty\bigg(f(-k) + f(2k+1) + f(2k+2)\bigg)\\= \frac{(x_1-x_0)\ln(2)}{2\pi} + \frac{x_0 + x_1}{2},\\ \sum_{k=0}^\infty\bigg(f(-k) + f(3k+1) + f(3k+2) + f(3k+3)\bigg)\\= \frac{(x_1-x_0)\ln(3)}{2\pi} + \frac{x_0 + x_1}{2}, \end{gather}\tag{4}$$

with shorthand $f(k) = \operatorname{sinc}\left(2k+\frac{3}{2}\right)x_0 + \operatorname{sinc}\left(2k+\frac{1}{2}\right)x_1$.
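
A small numerical check of these three rearrangements (a sketch only, assuming NumPy; the truncation point and the test values $x_0 = 1$, $x_1 = -1$ are arbitrary) reproduces the three different limits of Eq. 4:

```python
import numpy as np

x0, x1 = 1.0, -1.0           # any x0 != x1 exposes the discrepancy
L = 10**6                    # truncation point; convergence is only ~1/L

def f(k):
    # the shorthand f(k) defined above
    return np.sinc(2 * k + 1.5) * x0 + np.sinc(2 * k + 0.5) * x1

k = np.arange(0, L + 1)
s1 = np.sum(f(-k) + f(k + 1))
s2 = np.sum(f(-k) + f(2 * k + 1) + f(2 * k + 2))
s3 = np.sum(f(-k) + f(3 * k + 1) + f(3 * k + 2) + f(3 * k + 3))

print(s1, (x0 + x1) / 2)
print(s2, (x1 - x0) * np.log(2) / (2 * np.pi) + (x0 + x1) / 2)
print(s3, (x1 - x0) * np.log(3) / (2 * np.pi) + (x0 + x1) / 2)
```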

Under which condition does the series given by Eq. 1 converge absolutely?

Olli Niemitalo
  • 13,491
  • 1
  • 33
  • 61
  • That's an interesting question and one which has no straightforward answer indeed. Given its mathematical nature, I wonder if it wouldn't be a better fit to Math.Stackexchange? But let's see if some of the more mathematically inclined users here can help out. – Florian Jul 04 '19 at 08:16
  • 1
    isn't this about the same sorta topic as this or this? i call this "bandlimited reconstruction of periodic discrete-time sampled functions" and the formula is slightly different for even $N$ vs. odd $N$. – robert bristow-johnson Jul 05 '19 at 19:06
  • @robertbristow-johnson yes (thanks for the links), and this. I was not fully convinced about the justification of Nyquist bin halving so started to look at the same thing in time domain, but so far things don't seem any more conclusive. – Olli Niemitalo Jul 05 '19 at 20:25
  • however you divide up the Nyquist bin, it must add to the original $X[\frac{N}2]$. if $\tilde{x}[n]$ is real, then Hermitian symmetry applies which means that $X[\frac{N}2]$ must be purely real (as well as $X[0]$). – robert bristow-johnson Jul 05 '19 at 20:42
  • @robertbristow-johnson This is what I have so far: You can add the positive and negative nyquist bins in any linear combination that adds up to one and still be just as valid. The only justification for splitting them at 1/2 is that the resulting series will produce real only results for real only sequences. I'm not feeling that strong about it. See my answer and comments here as well https://dsp.stackexchange.com/questions/59265/sinc-interpolation-of-pure-sine-wave-sampled-just-at-nyquist-frequency. – Cedron Dawg Jul 05 '19 at 20:45
  • hmmm, hadn't seen that question before. so here is the next thing to think about: consider a continuous-time, real, and periodic function: $$ x(t) = \sum\limits_{k=-\infty}^{\infty} c_k \, e^{i k t} $$ getting uniformly sampled exactly $N$ samples per period: $$ x[n] \triangleq x(t)\bigg|_{t=\frac{2\pi}{N}n} $$ Now if $c_{-k}=(c_k)^*$ for all $k\in\mathbb{Z}$ that means $c_{-N/2}=(c_{N/2})^*$, right? That means $$\Re\{c_{-N/2}\}=\Re\{c_{N/2}\},$$ right? and the effect of sampling is that $$ c_{-N/2}+c_{N/2}=X[\tfrac{N}2],$$ right? what does that lead you to regarding $\Re\{c_{N/2}\}$? – robert bristow-johnson Jul 05 '19 at 21:02
  • @robertbristow-johnson Okay, so you are basically saying that in order to keep the bandlimited reconstruction real for a real valued input function, you use halfsies. That's pretty much what I was saying, but if the underlying signal is complex why should halfsies be preferred? I think the answer may be "because it yields the shortest interpolation path", but I haven't fleshed that out. – Cedron Dawg Jul 05 '19 at 21:38
  • if the underlying signal $$ z(t) \triangleq x(t) + j y(t) \qquad x(t),y(t),t \in \mathbb{R} $$ is complex, halfsies don't count for that underlying signal. but it does for the two real functions that make up its real and imaginary parts. (Of course, assuming $N$ is even. If $N$ is odd, then there are no halfsies.) – robert bristow-johnson Jul 06 '19 at 02:05
  • also, even if the real parts of $c_{N/2}$ and $c_{-N/2}$ add to the real value that is $X[\frac{N}2]$, they can have imaginary parts that add to zero. but the $\sin(2\pi \tfrac{N}2 t)$ portion of that Nyquist component is sampled only at its zero-crossings. So the imaginary parts of $c_{N/2}$ and $c_{-N/2}$ can be anything as long as they add to zero. it won't make any difference with the sample instances, but does make a difference in the interpolation. but you can always put a wavy line between connected points, the interpolation question is "why would you?" – robert bristow-johnson Jul 06 '19 at 02:12
  • @robertbristow-johnson So where does this leave your "unambiguous reconstruction" of a band limited continuous signal when the signal's band limit is at the Nyquist frequency of the DFT, instead of below it? Clearly you would agree that the "fluffy clouds" I drew in https://dsp.stackexchange.com/questions/59068/how-to-get-fourier-coefficients-to-draw-any-shape-using-dft are also band limited, but their limit was at N-1, not N/2. – Cedron Dawg Jul 06 '19 at 15:52
  • What makes it unambiguous is to insist that the sinusoidal component at Nyquist, that has zero crossings at the sampling instances, has zero amplitude. The Nyquist component that is 90 degrees to that is split equally between positive and negative frequencies. – robert bristow-johnson Jul 06 '19 at 19:11
  • Olli, i think the only unanswered question is what closed form functional expression is there for this infinite series?:

    $$ x(t) = \sum\limits_{n=-\infty}^{\infty} (-1)^n \ \operatorname{sinc}(t-n) $$

    where $\operatorname{sinc}(\cdot)$ is defined as you have above. Is

    $$ x(t) = \cos(\pi t) \quad \text{?} $$

    How do we know it is not this:

    $$ x(t) = \cos(\pi t) + A \sin(\pi t) \quad \text{?} $$ for $A$ being any real number including a gazillion?

    – robert bristow-johnson Jul 08 '19 at 03:32
  • well, i can do it if i am allowed to use the DFT and relate it to the double-sided Fourier series. but a direct time-domain proof requires some of the math hokus-pokus involving the Digamma function in this math.se answer. $$ $$ BTW, i brought this to the attention of the math whiz-bangs again. i dunno if they wanna fuck with it. – robert bristow-johnson Jul 08 '19 at 19:49
  • @robertbristow-johnson From your https://en.wikipedia.org/wiki/Whittaker%E2%80%93Shannon_interpolation_formula reference: "When the sampled function has a bandlimit, B, less than the Nyquist frequency, x(t) is a perfect reconstruction of the original function." Makes no claim at equals. That settles it for me. My takeaway from all this is that the Nyquist bin should be considered as a half and half mix of the positive and negative frequencies. This isn't something I've put a lot of thought into before. – Cedron Dawg Jul 09 '19 at 02:33
  • @CedronDawg Well, if the continuous periodic function and the samples are real, conjugate symmetry prevails and the Nyquist bin must be real and is the sum of the positive and negative frequency components $c_{N/2}$ and $c_{-N/2}$ of the Fourier series. That means the real parts of of these two Fourier series coefficients must be half of the Nyquist bin, but the imaginary parts are negatives of each other. But a time symmetry must prevail and that makes the imaginary parts both zero. – robert bristow-johnson Jul 09 '19 at 03:24
  • @robertbristow-johnson Imagine a periodic bandlimited function defined by:

    $$ z(t) = \sum_{k=-L}^{L} C[k] e^{ikt} $$

    Furthermore, suppose C[L] and C[-L] are both non-zero.

    Now suppose you sample that function at N places on the interval at:

    $$ t = \frac{n}{N}2\pi $$

    Then take the DFT. As long as $L < N/2$ the original function can be reconstructed using the DFT coefficients without any ambiguity. As soon as $L = N/2$, you have an alias overlap at the Nyquist frequency. As soon as you have any overlap, i.e. any possible aliasing, ambiguity is introduced.

    – Cedron Dawg Jul 09 '19 at 22:11
  • it's not any possible aliasing. assuming even $N$, using the most conventional scaling of the DFT, $$ C[-L]+C[L]=\frac1N Z[\tfrac{N}2] $$. since $$C[-L]=C[L]^*$$ that means the real parts of the two must be equal (which is a constraint) but the imaginary parts are negatives of each other. that corresponds to: $$ 2 \Re{C[L]} \cos(\pi t) + A \sin(\pi t) $$ where $A$ can be any real number. – robert bristow-johnson Jul 09 '19 at 23:30
  • The conjugate relationship is only implied by real valued signals. I never so constrained it; $C[L]$ and $C[-L]$ can be any complex values. The point is, as long as $ L < N/2 $ each $C[k]$ falls neatly into its own bin and is recoverable. As soon as $L \ge N/2$, it is possible for more than one $C[k]$ to land in a bin, making them inseparable and unrecoverable and the reconstructed function ambiguous. For $L=N/2$ this happens if $C[L]$ and $C[-L]$ are non-zero as specified. That's when ambiguity gets introduced. Prior to that you have unambiguous reconstruction. – Cedron Dawg Jul 09 '19 at 23:46
  • N/2 and -N/2 are alias frequencies of each other. – Cedron Dawg Jul 09 '19 at 23:51
  • @robertbristow-johnson I rolled back to my last edit because it was not easy to see that the rearrangement of the series would not affect its convergence. – Olli Niemitalo Jul 12 '19 at 07:39
  • When one of the series is finite it isn't a problem. The W-S wiki article does give conditions that ensure absolute convergence. Your stipulation does not meet them. – Cedron Dawg Jul 12 '19 at 16:31

3 Answers

3

I'm top-editing this since it answers the question directly.

The sinc series is fundamentally a $C/x$ series, so you can extract as many absolutely convergent series out of it as you want, but what is left over is still only conditionally convergent. Also, you can rescale $x$ and it is still a $C/x$ series.

Saying you have a summation to or from infinity is an informality. Formally, you have a finite sum to some value, and take the limit as that value goes to infinity.

Therefore, your first and second series should have been done like this:

$$ \lim_{L \to \infty} \sum_{k=-L}^{L} f(k) = \lim_{L \to \infty} \left[ \sum_{k=0}^{L} f(-k) + f(k+1) \right] $$ $$ = \lim_{L \to \infty} \left[ \sum_{k=0}^{L} \left( f(-k) + f(2k+1) + f(2k+2) \right) + \sum_{k=0}^{L+1} f(-k-L-1) \right] $$

Likewise, your third should have added this:

$$ \sum_{k=0}^{L+1} \left( f(-k-L-1) + f(-k-2L-3) \right) $$
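
A quick numeric check of this bookkeeping (a sketch assuming NumPy, using the $f(k)$ of the question with the illustrative values $x_0 = 1$, $x_1 = -1$): the rearranged partial sum plus the leftover tail is exactly the plain symmetric partial sum, term for term.

```python
import numpy as np

x0, x1 = 1.0, -1.0

def f(k):
    return np.sinc(2 * k + 1.5) * x0 + np.sinc(2 * k + 0.5) * x1

L = 1000
k = np.arange(0, L + 1)
rearranged = np.sum(f(-k) + f(2 * k + 1) + f(2 * k + 2))
leftover   = np.sum(f(-np.arange(0, L + 2) - L - 1))        # the correction tail above
symmetric  = np.sum(f(np.arange(-(2 * L + 2), 2 * L + 3)))  # sum from -(2L+2) to 2L+2

print(rearranged + leftover, symmetric)    # agree up to floating-point rounding
```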

Sometimes it takes a while to get around to where you should have been in the first place. I'm deleting the rest. Whoever is curious can find it in the edit history.


Proceeding informally....

First rearrange it:

$$ \begin{aligned} y_m &= \sum_{n=0}^{N-1} x[n] \sum_{k=-\infty}^{\infty} \operatorname{sinc} \left( \frac{Nm}{M} - n - Nk \right) \\ &= \sum_{n=0}^{N-1} x[n] W_m[n] \end{aligned} $$

One way to look at that is that a resampled value is a linear combination (a weighted average) of the sample points.

Another way is that you now have $N$ separate infinite series, all of the form:

$$ \begin{aligned} W_m[n] &= \sum_{k=-\infty}^{\infty} \operatorname{sinc} \left( \frac{Nm}{M} - n - Nk \right) \\ &= \sum_{k=-\infty}^{\infty} \frac{ \sin \left( ( Nm/M - n - Nk ) \pi\right) }{ (Nm/M - n - Nk) \pi } \\ \end{aligned} $$

Even $N$ Case:

$$ W_m[n] = \sin \left( ( Nm/M - n ) \pi\right) \sum_{k=-\infty}^{\infty} \frac{ 1 }{ (Nm/M - n - Nk) \pi } $$

Odd $N$ Case:

$$ W_m[n] = \sin \left( ( Nm/M - n ) \pi\right) \sum_{k=-\infty}^{\infty} \frac{ (-1)^k }{ (Nm/M - n - Nk) \pi } $$

Clearly, both are cases of $C/x$ series and not absolutely convergent. If $Nm/M$ is an integer, all the terms are zero except possibly the single term whose sinc argument is zero.
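
The $C/x$ behaviour is easy to see numerically. In this sketch (assuming NumPy; the values of $N$, $M$, $m$, $n$ are arbitrary, chosen so that $Nm/M$ is not an integer) the partial sums of the absolute values keep growing roughly like $\log K$ instead of levelling off:

```python
import numpy as np

N, M, m, n = 2, 4, 1, 0          # Nm/M = 1/2, not an integer
for K in (10**3, 10**4, 10**5, 10**6):
    k = np.arange(-K, K + 1)
    print(K, np.sum(np.abs(np.sinc(N * m / M - n - N * k))))   # grows roughly like log(K)/pi
```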

As for the second comment, if I remember correctly (and I've already proven I didn't remember well), doing it formally does away with all the rearrangement tricks. And yes, if I remember correctly, absolutely convergent series are immune to rearrangement tricks.

This too:

A series converges if and only if the sequence of partial sums converges.

A sequence converges if and only if for any given $\epsilon$ there exists a $\delta$ such that for every $k > \delta$ the absolute value of the difference between the limit and the sequence value is less than $\epsilon$.

Stamp it on your forehead for formal occasions.

Disclaimer: Been a long time ...


As clearly as I think I can say it:

The only condition for which the series in Olli's Eq. (1) converges absolutely is when all the terms heading towards infinity are zero, since then their absolute values are zero. This happens when all the $x_n$ are zero (the trivial solution) or $Nm/M$ is an integer. Under any other conditions, both the even and odd cases can be rearranged into summations of alternating monotonically decreasing sequences; therefore they converge only conditionally, since the sums of their absolute values diverge.


Epilogue:

There is no need to do the infinite summation at all. Direct closed form expressions exist for the odd and even cases, based on the interpolation functions found when considering an inverse DFT as a continuous function. The derivation of the functions can be found in the epilogue of my answer here:

How to get Fourier coefficients to draw any shape using DFT?

The derivation is based on the definitions of the DFT, the inverse DFT, and a finite geometric summation.

Resampling the continuous function at $M$ evenly spaced (in the cycle domain) points can be done by a simple variable substitution.

$$ t = \frac{m}{M} 2\pi $$

The direct sample set to sample set equations are then as follows.

Odd case:

$$ y_m = \sum_{n=0}^{N-1} x[n] \left[ \frac{ \sin \left( N \left( \frac{m}{M} - \frac{n}{N} \right) \pi \right) } { N \sin \left( \left( \frac{m}{M} - \frac{n}{N} \right) \pi \right) } \right] $$

Even case, evenly split Nyquist bin:

$$ y_m = \sum_{n=0}^{N-1} x[n] \left[ \frac{ \sin \left( N \left( \frac{m}{M} - \frac{n}{N} \right) \pi \right) } { N \sin \left( \left( \frac{m}{M} - \frac{n}{N} \right) \pi \right) } \right] \cos \left( \left( \frac{m}{M} - \frac{n}{N} \right) \pi \right) $$

These are mathematically equivalent to taking the DFT of size $N$, zero padding it at the Nyquist frequency to size $M$ (splitting the Nyquist bin in the even case), then taking the inverse DFT to recover an $M$ point upsampled sequence. All the upsampled points lie on the underlying continuous interpolation function no matter what the point count.
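
Here is a small sketch of that equivalence (assuming NumPy; `resample_closed_form` and `resample_dft_zero_pad` are illustrative helper names). The first routine applies the two weighted-average formulas above, using the ratio `np.sinc(N*theta)/np.sinc(theta)`, which equals $\sin(N\theta\pi)/(N\sin(\theta\pi))$; the second zero-pads the DFT, splitting the Nyquist bin for even $N$. The two should agree up to floating-point error.

```python
import numpy as np

def resample_closed_form(x, M):
    """Weighted-average resampling via the odd/even closed forms above."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    theta = np.arange(M)[:, None] / M - np.arange(N)[None, :] / N    # m/M - n/N, in (-1, 1)
    w = np.sinc(N * theta) / np.sinc(theta)     # = sin(N*theta*pi) / (N*sin(theta*pi))
    if N % 2 == 0:
        w = w * np.cos(np.pi * theta)           # even N: evenly split Nyquist bin
    return (w * x[None, :]).sum(axis=1)

def resample_dft_zero_pad(x, M):
    """Same result via DFT -> zero-pad (split Nyquist bin) -> inverse DFT."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    X = np.fft.fft(x)
    Y = np.zeros(M, dtype=complex)
    h = N // 2
    if N % 2:                                   # odd N: no Nyquist bin to split
        Y[:h + 1] = X[:h + 1]
        Y[M - h:] = X[h + 1:]
    else:                                       # even N: halfsies on the Nyquist bin
        Y[:h] = X[:h]
        Y[M - h + 1:] = X[h + 1:]
        Y[h] = X[h] / 2
        Y[M - h] = X[h] / 2
    return np.fft.ifft(Y) * (M / N)

x = np.array([1.0, -1.0])
print(resample_closed_form(x, 4))               # [ 1, 0, -1, 0 ]
print(resample_dft_zero_pad(x, 4))              # same values
```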

For the $N=2$, $M=4$ case:

$$ \begin{aligned} y_0 &= x_0 ( 1 ) + x_1 ( 0 ) = x_0 \\ y_1 &= x_0 \left( \frac{ \sin( \pi / 2 ) }{ 2 \sin( \pi / 4 ) } \cos( \pi / 4 ) \right) + x_1 \left( \frac{ \sin( -\pi / 2 ) }{ 2 \sin( -\pi / 4 ) } \cos( -\pi / 4 ) \right) \\ &= \frac{1}{2} ( x_0 + x_1 ) \\ y_2 &= x_0 ( 0 ) + x_1 ( 1 ) = x_1 \\ y_3 &= x_0 \left( \frac{ \sin( 3 \pi / 2 ) }{ 2 \sin( 3 \pi / 4 ) } \cos( 3 \pi / 4 ) \right) + x_1 \left( \frac{ \sin( \pi / 2 ) }{ 2 \sin( \pi / 4 ) } \cos( \pi / 4 ) \right) \\ &= \frac{1}{2} ( x_0 + x_1 ) \end{aligned} $$ Which should be the results you are expecting.

An infinite number of sinc functions can now take the day off.


Suppose that instead of doing halfsies on the Nyquist bin we apportioned it as $(1/2+g)$ and $(1/2-g)$; this would alter the continuous interpolation function as follows.

$$ \begin{aligned} D(t_n) &= \left( \frac{1}{2} + g \right) e^{i(N/2) t_n } + \left( \frac{1}{2} - g \right) e^{i(-N/2) t_n } + \sum_{l=0}^{N-2} e^{i ( l - N/2 + 1 ) t_n } \\ &= \cos \left( \frac{N}{2} t_n \right) + i 2 g\sin \left( \frac{N}{2} t_n \right) + \frac{ \sin( t_n N /2 ) } { \sin( t_n / 2 ) } \cos( t_n / 2 ) - \cos( t_n N /2 ) \\ &= \frac{ \sin( N t_n/2 ) }{ \sin( t_n / 2 ) } \cos( t_n / 2 ) + i 2g\sin \left( \frac{N}{2} t_n \right) \end{aligned} $$

The extra term introduced is purely imaginary. That can be folded in, but I prefer to leave it separate when put back into the function definition.

$$ \begin{aligned} z(t) &= \sum_{n=0}^{N-1} x[n] \left[ \frac{ \sin( N (t - \frac{n}{N}2\pi) / 2 ) } { N \tan( (t - \frac{n}{N}2\pi) / 2 ) } + i \frac{2g}{N}\sin \left( N (t - \frac{n}{N}2\pi) / 2 \right) \right] \end{aligned} $$

It is obvious that any non-zero value of $g$ will add "energy" to the signal; thus the $g=0$ solution, corresponding to halfsies on the Nyquist bin, is the most natural, lowest-energy solution out of a whole family of periodic functions bandlimited at $N/2$.

The more significant convincer for me is that a non-zero $g$ also introduces imaginary values into what is otherwise a fully real set of weighting values.
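
A short numeric look at this family (a sketch assuming NumPy; the helper name, the real test data and the evaluation grid are arbitrary, and the grid simply avoids landing exactly on the removable singularities at the sample instants) confirms both observations: the real part of $z(t)$ does not depend on $g$, while the largest imaginary value grows in proportion to $g$.

```python
import numpy as np

def z_of_t(t, x, g):
    """The g-parameterised interpolation family above, for real x of even length N."""
    N = len(x)
    t_n = 2 * np.pi * np.arange(N) / N
    d = t[:, None] - t_n[None, :]
    kern = np.sin(N * d / 2) / (N * np.tan(d / 2)) + 1j * (2 * g / N) * np.sin(N * d / 2)
    return (kern * x[None, :]).sum(axis=1)

x = np.array([0.3, 1.0, -0.7, 0.2])                      # arbitrary real data, N = 4
t = np.linspace(0.01, 2 * np.pi, 500, endpoint=False)    # avoids the sample instants

for g in (0.0, 0.5, 1.0):
    z = z_of_t(t, x, g)
    print(g,
          np.max(np.abs(z.imag)),                             # grows linearly with g
          np.max(np.abs(z.real - z_of_t(t, x, 0.0).real)))    # stays at ~0 for every g
```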

Whether R B-J's series converges to this "natural" solution, and whether the "natural" solution is a unique solution (it is not), are two totally separate issues.


Olli, I hope this makes you smile.

Start with the discrete resampling formula for the odd $N$ case.

$$ y_m = \sum_{n=0}^{N-1} x[n] \left[ \frac{ \sin \left( N \left( \frac{m}{M} - \frac{n}{N} \right) \pi \right) } { N \sin \left( \left( \frac{m}{M} - \frac{n}{N} \right) \pi \right) } \right] $$

Since the sequence of $N$ points is periodic $( x[n] = x[n+N] )$ and all the points are covered, we can shift the summation range to be zero centered.

$$ L = (N-1) / 2 $$

Also, the $m$th point can be located on the $n$ scale.

$$ w = m \frac{N}{M} = \frac{m}{M} N $$

Since the $M$ resampled points are evenly spaced along the cycle, they too can be arbitrarily shifted to be zero centered, though strictly that is not necessary.

Since "$t$" has already been used above, the scale of the domain of the continuous interpolation function, both will get new names. "$z(t)$" and "$Y(\omega)$" describe the same function. Plug all the defined values in.

$$ \begin{aligned} y_m = Y(w) &= \sum_{n=-L}^{L} x[n] \left[ \frac{ \sin \left( \left( w - n \right) \pi \right) } { N \sin \left( \frac{1}{N} \left( w - n \right) \pi \right) } \right] \\ &= \sum_{n=-L}^{L} x[n] \left[ \frac{\frac{\sin \left( \left( w - n \right) \pi \right)}{ \left( w - n \right) \pi }} {\frac{\sin \left( \frac{1}{N} \left( w - n \right) \pi \right)}{\frac{1}{N} \left( w - n \right) \pi }} \right] \\ &= \sum_{n=-L}^{L} x[n] \left[ \frac{\operatorname{sinc} \left( w - n \right) } {\operatorname{sinc} \left( \frac{1}{N} \left( w - n \right) \right)} \right] \\ \end{aligned} $$

Now it's time to take the big step, that is, a big stroll out to infinity. The cycle of $N$ points grows until one cycle spans negative to positive infinity. As it gets bigger, the circular nature gets more remote.

$$ \begin{aligned} \lim_{N \to \infty} y_m &= \lim_{N \to \infty} Y(w) \\ &= \lim_{N \to \infty} \sum_{n=-L}^{L} x[n] \left[ \frac{\operatorname{sinc} \left( w - n \right) } {\operatorname{sinc} \left( \frac{1}{N} \left( w - n \right) \right)} \right] \\ &= \sum_{n=-\infty}^{\infty} x[n] \left[ \frac{\operatorname{sinc} \left( w - n \right) } {1} \right] \\ &= \sum_{n=-\infty}^{\infty} x[n] \operatorname{sinc} \left( w - n \right) \\ &= \sum_{n=-\infty}^{\infty} x[n] \operatorname{sinc} \left( \frac{Nm}{M} - n \right) \end{aligned} $$

Now look at that. The Whittaker–Shannon interpolation formula has been derived from scratch and we are right at your starting point.
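
The limit step is easy to watch numerically. In this sketch (assuming NumPy; the offsets $w - n$ are arbitrary fixed test values) the periodic kernel $\operatorname{sinc}(w-n)\,/\operatorname{sinc}\!\left(\tfrac{1}{N}(w-n)\right)$ settles onto the plain $\operatorname{sinc}(w-n)$ as $N$ grows:

```python
import numpy as np

u = np.array([0.3, 1.7, 4.25, 9.5])        # fixed test values of w - n
for N in (11, 101, 1001, 10001):
    dirichlet = np.sinc(u) / np.sinc(u / N)
    print(N, np.max(np.abs(dirichlet - np.sinc(u))))   # shrinks toward 0 as N grows
```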

The even case can be done similarly and ends up with the same formula.

  1. Definition of DFT of $N$ samples
  2. Inverse DFT used as Fourier Series Coefficients for interpolation function
  3. Dirichlet Kernel form of interpolation function
  4. Interpolation function used for $M$ samples
  5. Even and Odd Discrete Weighted Average Resampling Formulas
  6. N goes to infinity
  7. Whittaker–Shannon emerges
  8. Whittaker–Shannon applied to a repeating sequence of $N$
  9. Convergence questioned

I hope realizing that step 7 is being used to achieve what step 2 has already answered will put a smile on R B-J's face as well. Your proof lies there.

For $ N = 2 $

$$ \begin{aligned} y_m &= \sum_{n=0}^{1} x[n] \left[ \frac{ \sin \left( 2 \left( \frac{m}{M} - \frac{n}{2} \right) \pi \right) } { 2 \sin \left( \left( \frac{m}{M} - \frac{n}{2} \right) \pi \right) } \right] \cos \left( \left( \frac{m}{M} - \frac{n}{2} \right) \pi \right) \\ &= \sum_{n=0}^{1} x[n] \cos^2 \left( \left( \frac{m}{M} - \frac{n}{2} \right) \pi \right) \\ &= x_0 \cos^2 \left( \frac{m}{M} \pi \right) + x_1 \sin^2 \left( \frac{m}{M} \pi \right) \end{aligned} $$

For $ x_0 = 1 $ and $ x_1 = -1 $

$$ \begin{aligned} y_m &= \cos^2 \left( \frac{m}{M} \pi \right) - \sin^2 \left( \frac{m}{M} \pi \right) \\ &= \cos \left( \frac{m}{M} 2 \pi \right) \end{aligned} $$

I'm going to have to be done with this for a while. Neat stuff.


Olli, thanks for the bounty points.

This little exercise has deepened my understanding of W-S considerably. I hope that is true for you and Robert (and others) too.

It is still a precarious foundation though. I wanted to convince myself that it would work for a sinusoid of any frequency. To wit:

$$ x[n] = M \cos( \alpha n + \phi ) $$

$$ \begin{aligned} x(t) &= \sum_{n=-\infty}^{\infty} x[n] \operatorname{sinc}(t-n) \\ &= \sum_{n=-\infty}^{\infty} M \cos( \alpha n + \phi ) \operatorname{sinc}(t-n) \\ &= \sum_{n=-\infty}^{\infty} M \cos( \alpha t + \phi - \alpha( t - n ) ) \operatorname{sinc}(t-n) \\ &= \sum_{n=-\infty}^{\infty} M \left[ \cos( \alpha t + \phi ) \cos( \alpha( t - n ) ) + \sin( \alpha t + \phi ) \sin( \alpha( t - n ) ) \right] \operatorname{sinc}(t-n) \\ &= M \cos( \alpha t + \phi ) \sum_{n=-\infty}^{\infty} \cos( \alpha( t - n ) ) \operatorname{sinc}(t-n) \\ & \qquad \qquad + M \sin( \alpha t + \phi ) \sum_{n=-\infty}^{\infty} \sin( \alpha( t - n ) ) \operatorname{sinc}(t-n) \\ &= M \cos( \alpha t + \phi ) \cos( \alpha( t - t ) ) + M \sin( \alpha t + \phi ) \sin( \alpha( t - t ) ) \\ &= M \cos( \alpha t + \phi ) \cdot 1 + M \sin( \alpha t + \phi ) \cdot 0 \\ &= M \cos( \alpha t + \phi ) \end{aligned} $$

I seem to have accomplished my goal. However, there is nothing in this proof that prohibits $\alpha \ge \pi$, even though $\alpha < \pi$ is a condition for the validity of the theorem. So, knowing that, you are okay. If you didn't know that, the formula itself does not reveal it. To me, that's troubling.
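
That caveat is easy to see numerically. In this sketch (assuming NumPy; the truncation point, test time, phase and frequencies are arbitrary illustrative choices) the truncated Whittaker–Shannon sum tracks the original cosine for $\alpha < \pi$, but for $\alpha > \pi$ it quietly reconstructs the alias folded back below Nyquist instead:

```python
import numpy as np

def ws_reconstruct(t, alpha, phi, K=20000):
    """Symmetrically truncated Whittaker-Shannon sum of the samples cos(alpha*n + phi)."""
    n = np.arange(-K, K + 1)
    return np.sum(np.cos(alpha * n + phi) * np.sinc(t - n))

phi, t = 0.3, 0.37
for alpha in (0.8 * np.pi, 1.6 * np.pi):
    alias = 2 * np.pi - alpha                       # frequency folded back below Nyquist
    print(alpha / np.pi,
          ws_reconstruct(t, alpha, phi),            # what the truncated sum actually gives
          np.cos(alpha * t + phi),                  # matched only when alpha < pi
          np.cos(alias * t - phi))                  # what it matches when alpha > pi
```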


Reply to R B-J:

First off, nowhere is it stipulated that $x[n]$ must be real. Even for a real valued function, you don't have to split the Nyquist bin halfsies to get a real interpolation function. Just pick $g$ to be a multiple of $i$ above.

Suppose you have the function:

$$ z(\tau) = \sum_{k=-L}^{L} c_k e^{ik\tau} $$

Its band limit is $L$ or less. Every $k$ term, except $k=0$, can be paired up with its conjugate bin and the sum can be decomposed into a cosine and a sine term.

let $ A = \frac{c_k + c_{-k}}{2} $ and $ B = \frac{c_k - c_{-k}}{2} $

$$ \begin{aligned} c_k e^{ik\tau} + c_{-k} e^{-ik\tau} &= (A+B) e^{ik\tau} + (A-B) e^{-ik\tau} \\ &= 2A \cos(k\tau) + i 2B \sin(k\tau) \end{aligned} $$

For a regular bin, we can only say $X[k] = c_k$ if $k+N>L$, otherwise I have more than one $k$ in the bin and cannot separate them. At the Nyquist bin, $X[k] = c_k + c_{-k}$.

Think in terms of degrees of freedom. For a complex signal, the pair $c_k, c_{-k}$ has four and the Nyquist bin value constrains two. Therefore there are two free: just enough to put a complex parameter on the sine function at the Nyquist frequency. With a real signal, the pair has two degrees of freedom and the Nyquist bin value restricts one of those, leaving one left over: just enough for a real valued parameter times the sine function to remain a real valued signal.

I showed earlier how not doing halfsies translates into a change in the interpolation function. Nothing prohibits that, and it doesn't increase the bandwidth of the solution one iota.


R B-J asks:

// //"But we do know A will be zero in the halfsies and W-S reconstructions."// how do you know that? //

The halfsies is easy. Without loss of generality, consider the $N=2$ case.

$$ x[n] = [1,-1] $$

$$ \frac{1}{N} X[k] = [0,1] $$

Halfsies on the Nyquist of 1. Doing an unfurled inverse DFT with split Nyquist:

$$ x[n] = \frac{1}{2} e^{i\pi n} + \frac{1}{2} e^{-i\pi n} = \cos(\pi n) $$

Now allow $n$ to be real, call it $t$ to indicate the change. This defines an interpolation function (still called $x$).

$$ x(t) = \cos(\pi t) $$

For every other even $N$, the unnormalized DFT will be all zeros except for a value of $N$ in the Nyquist bin, so the result remains the same.

For the W-S summation, look at the section where "omega" temporarily lived, the "Sinc is the limit of the Dirichlet Kernel" section. The left side $y_m=Y(w)$ is known to be $ \cos( \pi w ) $. I even did the specific $N=2$ case after the dependency list. Just set "M=2" which makes $w = m$. The limit reached at the end of the second chunk gives your summation. Just reverse the order of the equation and you get:

$$ \sum_{n=-\infty}^{\infty} (-1)^n \operatorname{sinc}( w - n ) = \cos( \pi w ) $$

The fact that your summation is the limit of something is why proving it differently has been hard.
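
A quick numeric check of that identity (a sketch assuming NumPy, reading the two-sided sum as a symmetric truncation); the agreement tightens as the truncation point $K$ grows:

```python
import numpy as np

w = 0.37                                   # arbitrary non-integer test point
for K in (10**2, 10**4, 10**6):
    n = np.arange(-K, K + 1)
    sign = 1 - 2 * (n % 2)                 # (-1)**n for positive and negative n
    print(K, np.sum(sign * np.sinc(w - n)), np.cos(np.pi * w))
```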

I think your time reversal argument is good, too. The sampled points are time reversible on the discrete $n$ scale; that does not mean the source $x(t)$ is, but it does mean $Y(w)$ is.

P.S. From now on, when a fresh context can be established, I'm going to use $\tau$ for a $ 0 \to 2\pi $ cycle scale, $t$ to be on the sampling scale ($=n$).

Royi
  • 19,608
  • 4
  • 197
  • 238
Cedron Dawg
  • 7,560
  • 2
  • 9
  • 24
  • about $$ \lim_{x \to 0} \frac{\sin(x)}{x} = 1 $$ i remember this first proved to me with a geometric argument. remember $x$ is radians and radians is arc length. – robert bristow-johnson Jul 06 '19 at 02:17
  • @robertbristow-johnson Yeah, it's my favorite example to use to teach limits, especially to those who "aren't getting it". You can start with a drawing, then ask "Does this sentence make sense to you?" Then say: "The limit as the number of sides approaches infinity of the perimeter of the inscribed polygon is the circumference of the circle." Then write:

    $$ \lim_{n \to \infty } P_n = C $$

    Then you take it from there and end up at the $\sin(\theta)/\theta$ limit

    – Cedron Dawg Jul 06 '19 at 02:36
  • Formally, you should also do the circumscribed polygon formula and sandwich the circumference between them. Then the guy from Real Analysis comes along and says, "Hey, you only proved it on the reciprocals of the integers." – Cedron Dawg Jul 06 '19 at 02:40
  • I don't think i ever said "non-bandlimited", Ced. i have always meant "bandlimited" (as well as real and periodic). if it were mathematically elegant, i would find it convenient that "bandlimited" mean "absolutely no energy at Nyquist or higher", but in general, the real and discrete-time periodic sequence $x[n]$ where $$x[n+N] = x[n] \qquad \forall n\in\mathbb{Z}$$ will not guarantee that there is no energy in the Nyquist bin (or in the DC bin). but if $x[n]$ is real, there is nothing imaginary in either of those DFT bins. – robert bristow-johnson Jul 07 '19 at 03:21
  • But if $N$ is even, then $x[n]$ can have a component at Nyquist that looks like {+1, -1, +1, -1, ... +1, -1}. So we cannot be guaranteed that $X\big[\frac{N}{2}\big]=0$ with an arbitrary real $x[n]$. What should we do with that, in this contingency? I think the only reasonable thing is to insist that the zero-crossing component (at Nyquist) is zero and the real part of $c_{N/2}$ is the same as for $c_{-N/2}$. It's still band-limited, just not quite as much as for odd $N$. – robert bristow-johnson Jul 07 '19 at 03:25
  • //" For a complex signal, the DFT is blind to the proper value. "// .... I certainly agree that the DFT is blind. But not a bandlimited complex signal. With the complex signal, the bandwidth, from left bandedge to right bandedge is the same and you need not center that window on any specific frequency. However, if you split the complex $z(t)$ into real and imaginary parts, $x(t)$ and $y(t)$ and insist that those two real functions are bandlimited to whatever extent you can bandlimit these real functions, then the Nyquist component has to be split for both $x(t)$ and $y(t)$. – robert bristow-johnson Jul 07 '19 at 03:56
  • and, BTW, for a double-sided summation with the DFT, your equation 6 is really only true for odd $N$. for real $x[n]$ and even $N$, the denominator on the left is $\tan(\cdot)$ rather than $\sin(\cdot)$. It's related to this. – robert bristow-johnson Jul 07 '19 at 04:10
  • @robertbristow-johnson You must have missed "(valid for N being odd)" just in front of (1) and "The former is true for any N, the latter varying on N even or odd." which uses that fact to further my larger argument that your conceptualization of the relationship between the continuous FT and the discrete FT is actually mathematically backward (like deriving the $\sin(x)/x$ limit from L'Hopital), which is why you run into the kind of trouble that Olli posted this question about. It can be avoided by going in the right direction. Please reread what I've written until you understand that. – Cedron Dawg Jul 07 '19 at 12:07
  • The rearrangements of the terms of the sum, in my Eq. 4, are intended to demonstrate that the series converges conditionally. For a series that converges absolutely, equivalent rearrangements would have all given the same result. – Olli Niemitalo Jul 08 '19 at 17:28
  • //" *A series converges if and only if the sequence of partial sums converges.

    A sequence converges if and only if for any given $\epsilon$ there exist a delta so for every $k > \delta$ the absolute value of the difference of the limit and the sequence value is less than $\epsilon$ .* "// -----

    Ced we know that. it's the definition. but $\operatorname{sinc}(x)$ converges to 0 with an envelope of $\frac{1}{\pi x}$ and we know a series of $\frac{1}{n}$ does not converge. but there is a sign alternation and we know that $\frac{(-1)^n}{n}$ does converge.

    – robert bristow-johnson Jul 08 '19 at 22:04
  • @OlliNiemitalo The finite discretely derived formulas are clearly easier to deal with. I hope that makes an impression on you. What I've learned from this is that halfsies on the Nyquist bin is the "natural" solution in the even case. – Cedron Dawg Jul 09 '19 at 22:33
  • Well, i am glad that we're in agreement that there is some ambiguity with the Nyquist bin. that is because (assuming the sampling period $T=1$) $$ x(t) = \sum_{n=0}^{N-1} x[n] \, g(t-n) + A \sin(\pi t) $$ where $$ g(t) = \sum_{m=-\infty}^{\infty} \operatorname{sinc}(t - mN) $$ the ambiguity is that $A$ can be any real number and you won't change any of the samples of $x[n]$. But even so $$ x(t) = \sum_{n=-\infty}^{\infty}x[n] \operatorname{sinc}(t - n) \qquad x[n+N] = x[n] \ \forall n $$ adds up to something. i am pretty sure that what it adds to has $A=0$. – robert bristow-johnson Jul 10 '19 at 02:00
  • @robertbristow-johnson $A$ can be complex too.

    Don't know if you've seen these:

    https://dsp.stackexchange.com/questions/59316/,

    https://dsp.stackexchange.com/questions/59305/

    Also, in the Epilogue II section of the https://dsp.stackexchange.com/questions/59068/ I show that using the halfsies on the Nyquist bin in the $N=2$ case leads to

    $$ z(t) = \cos(t) $$

    when

    $$ x[n] = (-1)^n $$ and

    $$ t = n \pi $$

    The proof relies on the definition of the DFT, the inverse DFT, the finite geometric series formula, and of course Euler's magnificent equation.

    – Cedron Dawg Jul 10 '19 at 02:40
  • no, $A$ is not complex. it is the imaginary part of either one of the Nyquist components, $c_{-N/2} = (c_{N/2})^*$. if you can show that:

    $$x(t) = \sum_{n=-\infty}^{\infty} (-1)^n \, \frac{\sin\big(\pi(t-n) \big)}{\pi(t-n)} = \cos(\pi t) + A \sin(\pi t) $$

    with $A=0$, i think you will have to make an argument that time reversing the $x[n]=(-1)^n$ sequence should result in reversing the $x(t)$ function. but since $(-1)^n = (-1)^{-n}$, reversal doesn't change $x[n]$; then $x(t)$ should be unchanged by reversing, and the only way to make that true is with $A=0$.

    – robert bristow-johnson Jul 10 '19 at 03:46
  • 1
    @robertbristow-johnson I have reversed your "omega" for "w" edits. "w" is a stand in for "t" on a different scale. "w" goes from 0 to N, and "t" (used earlier in the answer) goes from 0 to $2\pi$ for each cycle. I didn't want to reuse "t" so I chose "w" (rather than "s") because it is an upside down "m", which it is based on. "omega" is usually used as a frequency variable and that makes it more confusing. – Cedron Dawg Jul 15 '19 at 12:33
  • fine. i wasn't paying close attention. i would have used "$u$" or "$v$" as a substitute for "$t$". – robert bristow-johnson Jul 15 '19 at 19:03
  • @robertbristow-johnson Yeah, but $u$ is so often the unit step function and "v" so often velocity. It's too bad you glanced over that section, I think it's the most significant piece in all of this. It definitely shows the foundation of W-S which is firmly the Dirichlet Kernel. The Dirichlet Kernel being an infinite sum of sinc functions is a byproduct. I hope you paid some attention to the very last point about how the proof of unambiguous reconstruction of a pure tone using W-S doesn't impose the Nyquist limit. – Cedron Dawg Jul 15 '19 at 19:19
  • when $x[n]=(-1)^n$, how do you know it's not

    $$ x(t) = \cos(\pi t) + A \sin(\pi t) \quad ? $$

    how do you know that the sum

    $$ x(t) = \sum\limits_{n=-\infty}^{\infty} x[n] \operatorname{sinc}(t-n) $$

    doesn't add up to the above with a non-zero $A$?

    the only way i know is that if you define

    $$ y[n] \triangleq x[-n] $$

    then $y(t) = x(-t)$. but $y[n]=x[-n]=x[n]$ because $$ (-1)^{-n} = (-1)^n \quad \forall n \in \mathbb{Z} $$ which means that $x(-t)$ must equal $x(t)$ in that case. That's what forces $A=0$.

    – robert bristow-johnson Jul 15 '19 at 23:39
  • @robertbristow-johnson You must have written this comment as I was extending my answer. If it still doesn't answer your question, how about you post it as a separate question as it doesn't have all that much to do with Olli's original question and this is getting cumbersome to edit. Honestly, if you reread my whole answer paying careful attention, it should settle all these issues you are raising. – Cedron Dawg Jul 15 '19 at 23:58
1

Some remarks. The series in Eq. 1 of the question:

$$y_m = \sum_{k=-\infty}^{\infty}\sum_{n=0}^{N-1}\operatorname{sinc}\left(\frac{Nm}{M} - n - Nk\right)x_n$$

explicitly means this (see this answer to the Mathematics Stack Exchange question: Notation of double-sided infinite sum):

$$\begin{align}y_m &= \lim_{K_2\to\infty}\lim_{K_1\to\infty}\sum_{k=-K_1}^{K_2}\sum_{n=0}^{N-1}\operatorname{sinc}\left(\frac{Nm}{M} - n - Nk\right)x_n\\ &= \lim_{K_1\to\infty}\lim_{K_2\to\infty}\sum_{k=-K_1}^{K_2}\sum_{n=0}^{N-1}\operatorname{sinc}\left(\frac{Nm}{M} - n - Nk\right)x_n,\end{align}\tag{1}$$

which is only a valid statement if all of those limits exist and the two definitions (with the limits in different order) are equal.
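
For the $N=2$, $M=4$, $m=1$ case of the question with $x_0 = 1$, $x_1 = -1$, a small numerical sketch (assuming NumPy; the truncation points are arbitrary) shows the failure directly: with the lower truncation point $K_1$ held fixed, the partial sum keeps drifting as $K_2$ grows, so the inner limit does not exist.

```python
import numpy as np

x0, x1 = 1.0, -1.0
K1 = 1000                                  # fixed lower truncation point
for K2 in (10**3, 10**4, 10**5, 10**6):
    k = np.arange(-K1, K2 + 1)
    s = np.sum(np.sinc(2 * k + 1.5) * x0 + np.sinc(2 * k + 0.5) * x1)
    print(K2, s)                           # drifts like (x1 - x0)/(2*pi) * log(K2/K1)
```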

Cedron Dawg
  • 7,560
  • 2
  • 9
  • 24
Olli Niemitalo
  • 13,491
  • 1
  • 33
  • 61
  • So, I wasn't quite formal enough. In your example, for the odd case, it doesn't matter, as the condition of an alternating monotonically decreasing series holds independently on "both arms". For the even case, it's a little trickier as the series has to be rearranged by "folding it" to achieve the alternating monotonically decreasing series. In either case, I think the point is moot because the series is known to converge to a closed form expression as the result of taking the limit of the discrete case. Therefore you know it converges, and you know it doesn't converge absolutely. – Cedron Dawg Jul 13 '19 at 16:33
0

FYI: This was the question I put to the math guys, but here I changed the notation from what might be most conventional to the math guys to one that is more conventional to EEs. (I am using that post as a starting point to sorta exhaustively deal with Olli's question, but in mathematical terms that are easier for me to grok, so i am not exactly following Olli's math. This ain't done yet.)

This has to do with the Nyquist-Shannon sampling and reconstruction theorem and the so-called Whittaker–Shannon interpolation formula. I had previously asked an ancillary question about this here but this is about a specific nagging issue that seems to "periodically" crop up.

Let's begin with a periodic infinite sequence of real numbers, $x[n] \in\mathbb{R}$, having period $N>0\in\mathbb{Z}$. That is:

$$ x[n+N]=x[n] \qquad \forall \ n\in\mathbb{Z}. $$

So there are only $N$ unique values of $x[n]$.

Imagine these discrete (but ordered) samples as equally spaced on the real number line (with a sampling period of 1) and being interpolated (between integer $n$) as

$$x(t) = \sum_{n=-\infty}^{\infty} x[n] \, \operatorname{sinc}(t-n),$$

where

$$ \operatorname{sinc}(u) \triangleq \begin{cases} \dfrac{\sin(\pi u)}{\pi u} & \text{if } u \ne 0, \\ \ 1 & \text{if } u = 0. \end{cases} $$

Clearly $x(t)$ is periodic with the same period $N$:

$$ x(t+N) = x(t) \qquad \forall \ t \in \mathbb{R}. $$

All terms are bandlimited to a maximum frequency of $\frac{1}{2}$, so the summation is bandlimited to the same bandlimit. And, in any case, we have

$$ x(t) \Big|_{t = n} = x[n], $$

so the reconstruction works out exactly at the sampling instances.

$$\begin{align} x(t) &= \sum_{n=-\infty}^{\infty} x[n] \, \operatorname{sinc}(t-n) \\ &= \sum_{m=-\infty}^{\infty} \sum_{n=0}^{N-1} x[n+mN] \, \operatorname{sinc}\big(t - (n+mN)\big) \\ &= \sum_{m=-\infty}^{\infty} \sum_{n=0}^{N-1} x[n] \, \operatorname{sinc}\big(t - (n+mN)\big) \\ &= \sum_{n=0}^{N-1} x[n] \, \sum_{m=-\infty}^{\infty} \operatorname{sinc}\big(t - (n+mN)\big). \\ \end{align}$$

Substituting $u \triangleq t-n$ gives

$$ x(t) = \sum_{n=0}^{N-1} x[n] \, g(t-n), $$

where

$$ g(u) = \sum_{m=-\infty}^{\infty} \operatorname{sinc}(u-mN). $$

Clearly the continuous (and real) $g(u)$ is periodic with period $N$:

$$ g(u+N) = g(u) \qquad \forall u \in \mathbb{R}. $$

What is the closed-form expression for $g(u)$ in terms of $u$ and $N$?

I can extend the Discrete Fourier Transform (DFT) a little and relate it to the continuous Fourier series:

$$ X[k] \triangleq \sum_{n=0}^{N-1} x[n] e^{-j 2 \pi n k/N} $$

and

$$ x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k] e^{+j 2 \pi n k/N} $$

We know that both infinite sequences $x[n]$ and $X[k]$ are periodic with period $N$. This means that the samples of $x[n]$ or $X[k]$ can be any adjacent $N$ samples:

$$ X[k] \triangleq \sum_{n=n_0}^{n_0+N-1} x[n] e^{-j 2 \pi n k/N} \qquad \forall n_0 \in \mathbb{Z}$$

and

$$ x[n] = \frac{1}{N} \sum_{k=k_0}^{k_0+N-1} X[k] e^{+j 2 \pi n k/N} \qquad \forall k_0 \in \mathbb{Z} $$

Now, the continuous Fourier series for $x(t)$ is

$$ x(t) = \sum\limits_{k=-\infty}^{\infty} c_k \, e^{+j 2 \pi (k/N) t}, $$

and, because $x(t) \in \mathbb{R}$, we know we have conjugate symmetry

$$ c_{-k} = (c_k)^* \qquad \forall \ k \in \mathbb{Z}. $$

Being "bandlimited" means that

$$ c_k = 0 \qquad \forall \ |k| > \tfrac{N}{2}. $$

From this we know that

$$\begin{align} x(t) &= \sum\limits_{k=-\infty}^{\infty} c_k \, e^{+j 2 \pi (k/N) t} \\ \\ &= \sum\limits_{k=-\lfloor N/2 \rfloor}^{\lfloor N/2 \rfloor} c_k \, e^{+j 2 \pi (k/N) t} \\ \end{align}$$

where $\lfloor \cdot \rfloor$ is the floor() operator that essentially rounds down to the nearest integer. If $N$ is even $\lfloor \frac{N}{2} \rfloor = \frac{N}{2}$. If $N$ is odd $\lfloor \frac{N}{2} \rfloor = \frac{N-1}{2}$.

We could combine the $-k$ and $+k$ terms in the summation but we may have to subtract an extra $c_0$ because that term gets added twice with both summations.

For $N$ odd,

$$\begin{align} x(t)\bigg|_{t=n} &= \sum\limits_{k=-\infty}^{\infty} c_k \, e^{+j 2 \pi (k/N) n} \\ \\ &= \sum\limits_{k=-(N-1)/2}^{(N-1)/2} c_k \, e^{+j 2 \pi (k/N) n} \\ \\ &= \sum\limits_{k=-(N-1)/2}^{(N-1)/2} \tfrac{1}{N} X[k] \, e^{+j 2 \pi (k/N) n} \\ \\ &= \sum\limits_{k=-(N-1)/2}^{(N-1)/2} \tfrac{1}{N} \sum_{n'=0}^{N-1} x[n'] e^{-j 2 \pi n' k/N} \, e^{+j 2 \pi (k/N) n} \\ \\ &= \tfrac{1}{N} \sum_{n'=0}^{N-1} x[n'] \sum\limits_{k=-(N-1)/2}^{(N-1)/2} e^{+j 2 \pi (k/N) (n - n')} \\ \\ \end{align}$$


For $N$ odd, we get the Dirichlet kernel:

$$ g(u) = \frac{\sin(\pi u)}{N \sin(\pi u/N)}. $$

But when $N$ is even, what should $g(u)$ be? Now there is potentially a non-zero component to the DFT value at what we EEs call the "Nyquist frequency"; namely $X[\tfrac{N}{2}]$ exists and might not be zero.

The expression for $g(u)$ I get when $N$ is even is

$$ g(u) = \frac{\sin(\pi u)}{N \tan(\pi u/N)}. $$
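
A numeric spot check of both closed forms (a sketch assuming NumPy, reading the sum over $m$ as a symmetric truncation; the test point $u$, the truncation $K$ and the two values of $N$ are arbitrary):

```python
import numpy as np

def g_truncated(u, N, K=200000):
    """Symmetrically truncated sum of sinc(u - m*N) over m."""
    m = np.arange(-K, K + 1)
    return np.sum(np.sinc(u - m * N))

u = 1.3                                             # arbitrary non-integer test point
for N in (5, 6):                                    # one odd and one even example
    if N % 2:
        closed = np.sin(np.pi * u) / (N * np.sin(np.pi * u / N))   # odd-N form
    else:
        closed = np.sin(np.pi * u) / (N * np.tan(np.pi * u / N))   # even-N form
    print(N, g_truncated(u, N), closed)
```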

But the question is: can it be, in the case that $N$ is even, that

$$ x(t) = \sum_{n=0}^{N-1} x[n] \, g(t-n) + B \sin(\pi t),$$

where $B$ can be any real and finite number?


So my most concise question is: for $N$ even and $x[n] \in\mathbb{R}$ having period $N>0\in\mathbb{Z}$, namely

$$ x[n+N]=x[n] \qquad \forall \ n\in\mathbb{Z}, $$

is it true that

$$\sum_{n=-\infty}^{\infty} x[n] \, \operatorname{sinc}(t-n) = \sum_{n=0}^{N-1} x[n] \frac{\sin\big(\pi (t-n)\big)}{N \tan\big(\pi (t-n)/N\big)} $$

??


Another way of looking at the question is this special case. Can anyone prove that

$$\sum_{n=-\infty}^{\infty} (-1)^n \, \frac{\sin\big(\pi(t-n) \big)}{\pi(t-n)} = \cos(\pi t) $$

??

robert bristow-johnson
  • 20,661
  • 4
  • 38
  • 76
  • Even N case:

    $$ g( t - n )= \frac{ \sin \left( \left( t - n \right) \pi \right) } { N \sin \left( \frac{1}{N} \left( t - n \right) \pi \right) } $$

    Odd N case:

    $$ g( t - n )= \frac{ \sin \left( \left( t - n \right) \pi \right) } { N \tan \left( \frac{1}{N} \left( t - n \right) \pi \right) } $$

    See my answer for the derivations and more fun observations.

    – Cedron Dawg Jul 11 '19 at 05:33
  • But this is a more informative arrangement in the even case:

    $$ g( t - n )= \frac{ \sin \left( \left( t - n \right) \pi \right) } { N \sin \left( \frac{1}{N} \left( t - n \right) \pi \right) } \cos \left( \frac{1}{N} \left( t - n \right) \pi \right) $$

    Makes it look like a "window function" of the odd case.

    – Cedron Dawg Jul 11 '19 at 05:37
  • 1
    Oop, reverse the even and odd in the first comment. Sigh. – Cedron Dawg Jul 11 '19 at 05:47
  • we agree. that thing about factoring out the cosine is, of course, true. i was aware of that here. dunno exactly how it's useful. well, anyway, i haven't finished this answer (and it will be rearranged; i just wanted to copy what i wrote in the math.se and change it to EE nomenclature). more to come. – robert bristow-johnson Jul 11 '19 at 20:44
  • Did you know that when two tones that are close in frequency are added together there is a beat phenomenon? The sum appears to be a tone at the average of the two source frequencies attenuated by a cosine function envelope. The proof is easy, comes straight from the angle addition formulas. – Cedron Dawg Jul 12 '19 at 03:20
  • "Did you know that when two tones that are close in frequency are added together there is a beat phenomenon?" --- yes. – robert bristow-johnson Jul 12 '19 at 03:33
  • "The sum appears to be a tone at the average of the two source frequencies attenuated by a cosine function envelope." --- the sum of what? – robert bristow-johnson Jul 12 '19 at 03:34
  • olli, you should give the bounty to Ced before it expires. i'm not gonna be done with this in time. – robert bristow-johnson Jul 12 '19 at 03:36
  • i'm not done with this. – robert bristow-johnson Jul 12 '19 at 06:25
  • I wonder if I went through this on comp.dsp because it's not as clean as I remembered it to be. – robert bristow-johnson Jul 12 '19 at 09:16
  • @OlliNiemitalo, The bottom line is what you guys consider foundational (your starting point), the Whittaker–Shannon interpolation formula, is actually the end result and therefore cannot be used to prove any of its premises. Confirm, yes, prove, no. – Cedron Dawg Jul 12 '19 at 12:41
  • On the "unambiguous reconstruction" digression: Since the interpolation function of the Whittaker–Shannon formula is the limit of the interpolation functions you get when you take the DFT, zero pad at the Nyquist (splitting the Nyquist bin in even cases), and taking the inverse DFT as the DFT size goes to infinity, it means that W-S is just as blind to the Sine component at the Nyquist, and just as incapable of reproducing it. – Cedron Dawg Jul 12 '19 at 14:16
  • i will finish this someday. i just had to re-get a grip on the "halfsies" argument. the fact that the real part of the Nyquist component must be split in half for the positive and negative frequency components is clear. but i had a little trouble justifying explicitly why the reconstruction cannot have two opposite imaginary parts of the positive and negative Nyquist component that add to zero. but with reversing the discrete samples in time, i think i can make a case. – robert bristow-johnson Jul 15 '19 at 05:43
  • Your use of the word "must" means you haven't grasped it yet. There is nothing mathematically mandatory saying that you have to use halfsies to construct an interpolation function that is bandlimited at Nyquist. The halfsies selection is a special case (it could even be called optimal or "natural") out of a family of possiblities. It also yields the same interpolation function that the W-S infinite series formula does. That is all there is to it. – Cedron Dawg Jul 15 '19 at 12:37
  • no Ced, i have grasped it. the real parts of $c_{N/2}$ and $c_{-N/2}$ *must* be equal (because $x(t)$ is real) and they *must* add to the single and real $\frac1N X[\frac{N}2]$ DFT value. and the imaginary parts of $c_{N/2}$ and $c_{-N/2}$ *must* be negatives of each other (again because $x(t)$ is real). but the only reason i can think of for why the imaginary parts of $c_{N/2}$ and $c_{-N/2}$ must be zero is that if you time reverse $x[n]$, that should result in the time reversal of $x(t)$. Ced, i ain't gonna patronize you, i might ask for the same. – robert bristow-johnson Jul 15 '19 at 19:24
  • My reply got longer than intended, so I added it to my answer. What you are saying is still wrong. Your summation works, even though it doesn't meet W-S criteria, precisely because of the section you didn't really pay attention to. There is nothing that says halfsies are required either in the real or complex case. Doing halfsies, and only that, will get you the same function as the W-S interpolation, but it is not a unique solution at the Nyquist bandwidth. Yes, a time reversal argument forces the sine coefficient to be zero, but that doesn't tell you anything new.. – Cedron Dawg Jul 15 '19 at 20:07
  • //" There is nothing that says halfsies are required either in the real or complex case. "// ----- that's a falsehood. for real $x(t)$, then the samples $x[n]$ are also real. for real $x(t)$ and $x[n]$, Hermitian symmetry applies. that means $c_{-k}=c_k^$ and $X[N-k] = X[k]^$. the consequence of sampling means the real parts of $c_{-N/2}$ and $c_{N/2}$ must add to the real part of $\frac1N X[\frac{N}2]$ which might not be zero and the imaginary parts of parts of $c_{-N/2}$ and $c_{N/2}$ must add to the imaginary part of $\frac1N X[\frac{N}2]$ which must be zero. – robert bristow-johnson Jul 15 '19 at 20:19
  • I thought we were talking about having an unambiguous reconstruction of $x(t)$, not bin symmetry. The W-S interpolation will give a unique reconstruction, but there is no guarantee that it matches the original $x(t)$, that is, you can't say it is unambiguous. The W-S interpolation matches the member of the family of possible solutions corresponding to the halfsies. It is the limit of the halfsies solution as N goes to infinity. This is true for the complex case too. – Cedron Dawg Jul 15 '19 at 20:34
  • yes, we are talking about having an unambiguous reconstruction of $x(t)$. so when $x[n]=(-1)^n$ then $$x(t) = \cos(\pi t) + A \sin(\pi t) $$ so how do we know that $A=0$? how do we know that the imaginary parts of $c_{-N/2}$ and $c_{N/2}$ must be zero? we know they have to add to be zero, but how do we know that they are both zero? – robert bristow-johnson Jul 15 '19 at 20:41
  • We don't, that's the point. But we do know A will be zero in the halfsies and W-S reconstructions. Thus the original function can not be unambiguously reconstructed. (No thanks to the chat move). Yet all the functions in the family of possible solutions have the same bandwidth limit. – Cedron Dawg Jul 15 '19 at 20:48
  • //"But we do know $A$ will be zero in the halfsies and W-S reconstructions."// how do you know that? all we know is that $$c_{-k}=c_k^* \qquad \forall k \in \mathbb{Z}$$ and $$ c_{-N/2}+c_{N/2}=\tfrac1N X[\tfrac{N}2] \ .$$ without making a time-reverse argument, you do not know that the imaginary part of $c_{N/2}$ is zero. but we do know the real parts of $c_{-N/2}$ and $c_{N/2}$ are equal and add to $\tfrac1N X[\tfrac{N}2] $. – robert bristow-johnson Jul 15 '19 at 21:37