
Has anyone seen this trick before?

Let's say I'm working with a real pure tone signal that's dangerously close to Nyquist. So, I want to upsample it by a factor of two to move it near the four samples per cycle range. I happen to know the frequency to about 4 or 5 significant digits, so I figured there must be a way to interpolate this without having to do a huge sinc interpolation.

Here's what I came up with. The code explains the math the best (procedurally, but not conceptually or contextually):

import numpy as np

#=============================================================================
def main():

    N = 7

    M     = 1.234
    alpha = 2.95
    phi   = 2.345

    print( " Samples per cycle:",   2.0 * np.pi / alpha )
    print( "Percent of Nyquist:", 100.0 * alpha / np.pi )
    print()

    whoops = 1.001

    factor1 = np.cos( 0.5 * ( alpha * whoops ) )
    factor2 = 0.25 / factor1

    x  = np.zeros( N )
    y  = np.zeros( 2 * N )

    for n in range( N ):
        x[n] = M * np.cos( alpha * n + phi )

    d = 2
    for n in range( 1, N-1 ):
        y[d]   = x[n]
        y[d+1] = x[n] * factor1 + ( x[n+1] - x[n-1] ) * factor2
        d += 2

    for d in range( 2 * N ):
        s = M * np.cos( alpha/2.0 * d + phi )
        print( " %2d %10.6f  %10.6f  %10.6f" % ( d, y[d], s, y[d] - s ) )

#=============================================================================
main()

Now, generally my whoops is a lot smaller than the one in the code, but I put it in there to get a feel for the behavior of this approach. Starting in the four samples per cycle range makes the impact of the "whoops" a lot smaller.

 Samples per cycle: 2.12989332447
Percent of Nyquist: 93.9014164242

  0   0.000000   -0.862747    0.862747
  1   0.000000   -0.960759    0.960759
  2   0.678954    0.678954    0.000000
  3   1.105637    1.090643    0.014994
  4  -0.470315   -0.470315    0.000000
  5  -1.197629   -1.180614   -0.017014
  6   0.244463    0.244463    0.000000
  7   1.245792    1.227380    0.018412
  8  -0.009666   -0.009666    0.000000
  9  -1.248365   -1.229229   -0.019136
 10  -0.225485   -0.225485    0.000000
 11   1.205253    1.186094    0.019159
 12   0.000000    0.452385   -0.452385
 13   0.000000   -1.099553    1.099553

Works well enough for me. I don't think I've ever seen anything like it, but that doesn't necessarily mean a lot.


I've extended the same technique to the triple case, which means 3/2 upsampling can be done really cheaply too; I have no use for tripling itself.

Yes, it does look sort of like a Taylor approximation, but it is indeed exact when whoops is one.


Update:

If I am the first to find this trick, I want to claim that and write it up properly. Otherwise, it's move along folks, nothing to see here. This will work at all frequencies up to Nyquist.

Near Nyquist, or for other upsampling factors (U), use:

    fudge = alpha * whoops
    slice = fudge / U

    factor1 = np.cos( slice )
    factor2 = np.sin( slice ) / ( 2.0 * np.sin( fudge ) )

Note that the double angle formula for sines lets me save a $\cos^{-1}$ and a $\sin$ calculation, as is done in the code above. I get $\cos(\alpha)$ as a result of my frequency formulas, which is very convenient.
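As a quick numeric check of that saving (a sketch with example values, not from the original post): for $U=2$, $\sin(s)/(2\sin(2s)) = 1/(4\cos(s))$, so factor2 comes free once factor1 is known.

import numpy as np

# double angle check: sin(s) / (2 sin(2s)) == 0.25 / cos(s)
fudge = 2.95 * 1.001          # alpha * whoops, example value
s     = fudge / 2.0           # the "slice" for U = 2

general  = np.sin( s ) / ( 2.0 * np.sin( fudge ) )
shortcut = 0.25 / np.cos( s )

print( np.isclose( general, shortcut ) )   # True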

For the $U=3$ case:

d = 3
for n in range( 1, N-1 ):
    stunted =   x[n]              * factor1
    differ  = ( x[n+1] - x[n-1] ) * factor2
    y[d-1] = stunted - differ
    y[d]   = x[n]
    y[d+1] = stunted + differ
    d += 3

You can unroll this loop by a factor of 2 for efficient 3/2 upsampling.
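For illustration, a minimal sketch of that unrolling (not from the original post; it assumes x, N, and the $U=3$ factor1/factor2 from above are in scope, and z is a new output array holding the 3/2-rate result):

z = np.zeros( 3 * N // 2 )        # output at 3/2 the input rate

e = 0
for n in range( 1, N-1, 2 ):      # step 2: only every other input neighborhood is needed
    stunted =   x[n]              * factor1
    differ  = ( x[n+1] - x[n-1] ) * factor2
    z[e]   = x[n-1]               # kept original (index 3(n-1) of the 3x stream)
    z[e+1] = stunted - differ     # kept interpolant (index 3n-1 of the 3x stream)
    z[e+2] = stunted + differ     # kept interpolant (index 3n+1 of the 3x stream)
    e += 3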

I can't think of a better forum to reach the experts in this field to tell me if this has been done before. Clearly it is not well known or somebody would have answered already.


As per RB-J's implicit request, the conceptual math version:

$$ x[n] = M \cos( \alpha n + \phi ) $$

$$ \begin{aligned} x[n+d] &= M \cos( \alpha ( n + d ) + \phi ) \\ &= x[n] \cos( \alpha d ) - M \sin( \alpha n + \phi ) \sin( \alpha d ) \\ \end{aligned} $$

$$ \begin{aligned} x[n-1] &= x[n] \cos( \alpha ) + M \sin( \alpha n + \phi ) \sin( \alpha ) \\ x[n+1] &= x[n] \cos( \alpha ) - M \sin( \alpha n + \phi ) \sin( \alpha ) \\ \end{aligned} $$

$$ \frac{ x[n+1] - x[n-1] }{ 2 \sin( \alpha ) } = -M \sin( \alpha n + \phi ) $$

$$ \begin{aligned} x[n+1/U] &= x[n] \cos\left( \alpha \frac{1}{U} \right) - M \sin( \alpha n + \phi ) \sin\left( \alpha \frac{1}{U} \right) \\ &= x[n] \cos\left( \alpha \frac{1}{U} \right) + \frac{ x[n+1] - x[n-1] }{ 2 \sin( \alpha ) } \sin\left( \alpha \frac{1}{U} \right) \\ &= x[n] \left[ \cos\left( \alpha \frac{1}{U} \right) \right] + ( x[n+1] - x[n-1] ) \left[ \frac{ \sin\left( \alpha \frac{1}{U} \right) }{ 2 \sin( \alpha ) } \right] \\ \end{aligned} $$

$$ \frac{ \sin\left( \alpha \frac{1}{2} \right) }{ 2 \sin( \alpha ) } = \frac{ \sin\left( \alpha \frac{1}{2} \right) }{ 4 \sin( \alpha \frac{1}{2} ) \cos( \alpha \frac{1}{2} ) } = \frac{ 1 }{ 4 \cos( \alpha \frac{1}{2} ) } $$

Quite Easily Done.
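A numeric sanity check of the general-$U$ formula (a sketch with example values):

import numpy as np

M, alpha, phi, U = 1.234, 2.95, 2.345, 4
x = lambda t: M * np.cos( alpha * t + phi )    # the tone at (possibly fractional) index t

n = 5
interp = x(n) * np.cos( alpha / U ) \
       + ( x(n+1) - x(n-1) ) * np.sin( alpha / U ) / ( 2.0 * np.sin( alpha ) )

print( np.isclose( interp, x( n + 1.0/U ) ) )  # True: exact for a pure tone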

I stand by the post title.

Cedron Dawg
  • What is the use case? Seems kind of unusual to have a perfect sine of nearly known frequency at close to Nyquist and want to upsample it? – Knut Inge Jul 22 '20 at 08:32
  • ...or, set up an oscillator in the "fast frame of reference" (at the higher Fs) and "lock it" to the frequency of the "slow frame of reference" (at the lower Fs). Each runs at its own factor, but the higher Fs one gets an extra one depending on its phase difference. It doesn't sound like you want to do tracking, so once the phase difference has stabilised you take that error factor and run the faster oscillator from $n=0$ (?) – A_A Jul 22 '20 at 08:34
  • @KnutInge C'mon Knut, how often do we find a cool formula in theoretical Math, and an application later? This is based on the angle addition formula like my solution here https://dsp.stackexchange.com/questions/69285/optimization-of-harmonics-calculation/69287#69287. Very similar. However, that isn't the case here though, I have a very special need for this which is why I went looking. I'm keeping it tight to my chest for now. I just hadn't seen it before, and not being an expert in filters, (this is like a FIR), wondered if it was known and published. – Cedron Dawg Jul 22 '20 at 11:55
  • @A_A It is important for me to preserve the local phase and amplitude envelope. This gives me a linear interpolation of each in case they are slightly varying. A lot more efficient than any kind of tracking. It's on a second pass of the data, the first is one of these https://www.dsprelated.com/showarticle/1051.php, or those from the next two articles. – Cedron Dawg Jul 22 '20 at 11:59
  • @A_A I might have misunderstood your comment. Yes, I have multiplied my signal by $e^{i(\pi-\alpha)n}$ (oscillator) which pushes one of the components to Nyquist where I kill it with a (1,2,1) smoothing. My ultimate goal is to get the signal as a pure complex tone with the same parameters locally. I believe this approach, combined with the FIR I am using is mathematically equivalent yet significantly computationally more efficient. – Cedron Dawg Jul 22 '20 at 13:49
  • @KnutInge You can also think of this sort of as building a band pass filter into the upsampling. This is why I think this should already be known. – Cedron Dawg Jul 22 '20 at 13:51
  • //"The code explains the math the best"// No it doesn't. – robert bristow-johnson Jul 22 '20 at 18:32
  • @robertbristow-johnson I modified the offending phrase. Been anticipating your reply. Dan has done a good job of putting this into a "Filter Framework" and convinced me I have found an optimized solution to a special case. So, R B-J, ever seen this optimized solution in an application and do you understand that it is derived solely based on a trig identity? – Cedron Dawg Jul 22 '20 at 18:45
  • Ced, first you upsample $x[n]$ by a factor of 2x by zero inserting between every original sample of $x[n]$. that doubles the length, but you have an image to whack. then you filter with an FIR filter that has coefficients of what Dan sez. even though it's not symmetrical (so it will have phase shift), it can be shown to be a windowed sinc. back in the day, we (at Wave Mechanics, now SoundToys) had little synchronous sample rate conversion widgets. i had found that if the upsample ratio was 2x, it was easier to just copy one sample over and compute the "in-between" sample. but not so for 4x. – robert bristow-johnson Jul 22 '20 at 19:14
  • What if I don't want to "whack the image", but keep the interpolation strictly real? (I do later, but that's beside the point). Dan's coefficients can be slid one sample, that doesn't phase me (pun intended). I am keeping each original and only calculating new points, that's part of the accuracy/efficiency. Dan's coefficients are familiar to me in different contexts. – Cedron Dawg Jul 22 '20 at 19:22
  • i'm just saying that, if your SRC ratio is exactly 2x, and if you are copying one input sample over exactly and computing the in-between samples with a linear combination of the input samples, then what you're doing is equivalent to a windowed-sinc. that is true even if what you are doing is polynomial interpolation such as Lagrange or Hermite or B-spline. but if you compute both output samples (per input sample) instead of just copying one over, then you can use a better LPF designed using Parks-McC. but then you lose the "efficiency" of just copying over an input sample. – robert bristow-johnson Jul 22 '20 at 19:36
  • Right, but it is the specific linear combination that is mathematically exact for the targeted pure tone. In the application that prompted this, efficiency is key. In general, this moves my real valued signal into the ideal samples per cycle (4) range for my exact real valued DFT frequency formulas. Dan's filter in this range will not have the high side values as shown in his diagram. Those are due to being near Nyquist, which is why I called it dangerous. This is not general interpolation (band limited sinc style), but targeted for a given frequency. – Cedron Dawg Jul 22 '20 at 19:40

2 Answers


This appears functionally identical to the traditional interpolation approach of zero-insert and filtering (and in the form of a polyphase interpolator it would be identical in processing, as detailed further below); in this case the OP's filter is a 5-tap filter with one coefficient zeroed:

$$coeff = [factor2, 1, factor1, 0, -factor2]$$

Below is the most straightforward simulation of this using Python and Numpy, as the OP has done, but showing more directly how the same mathematics derives from the zero-insert-and-filter approach (this isn't yet the polyphase form, which would be essentially the OP's processing, but is given to show more clearly that the traditional interpolation is exactly what is happening):

Note the OP has chosen to do convolution (the np.convolve line below) using a for loop, while here we take advantage of the vector processing Numpy offers. The convolve line below could just as well be the same for loop; the point is to show that functionally the OP is doing traditional interpolation, and this is not a new approach or more efficient than what is typically done, in direct answer to the OP's question. With such an application limited to a single tone, the interpolation filter is greatly simplified (and thus can be done with very few coefficients) since the null is at a narrow location.

import numpy as np

N      = 7
M      = 1.234
alpha  = 2.95
phi    = 2.345
whoops = 1.001

factor1 = np.cos(0.5 * (alpha * whoops))
factor2 = 0.25 / factor1

x  = M * np.cos(alpha * np.arange(N) + phi)
xo = np.zeros(2 * N)
xo[1::2] = x                                 # zero insert
coeff = [factor2, 1, factor1, 0, -factor2]
y = np.convolve(coeff, xo)                   # interpolation filter

# print results
result = zip(xo, y)
print(f"{'d':<10}{'xo':>10}{'y':>10}")
for d, item in enumerate(result):
    print("{:<10}{:>10.6f}{:>10.6f}".format(d, *item))

With the following results, where the OP's samples starting at d=2 match the results below starting at d=4:

d                 xo         y
0           0.000000  0.000000
1          -0.862747 -2.290118
2           0.000000 -0.862747
3           0.678954  1.720994
4           0.000000  0.678954
5          -0.470315  1.105637
6           0.000000 -0.470315
7           0.244463 -1.197629
8           0.000000  0.244463
9          -0.009666  1.245792
10          0.000000 -0.009666
11         -0.225485 -1.248365
12          0.000000 -0.225485
13          0.452385  1.205253

Further we see that the efficient implementation of this same filter as a polyphase filter (as is typically done in interpolation structures - see How to implement Polyphase filter?) results in the exact same process done by the OP:

The polyphase structure maps filter rows to columns such that the original filter [f2, 1, f1, 0, -f2] maps to the polyphase filters with coefficients [f2, f1, -f2] and [1, 0, 0] as shown in the filter diagrams below (the slashed lines represent multiplications by the fixed coefficient shown):

[Figure: polyphase interpolator]
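As an equivalence check of that split (a sketch with placeholder values, not part of the original answer), interleaving the two branch outputs reproduces the zero-insert-and-convolve result:

import numpy as np

f1, f2 = 0.3, 0.2                 # placeholder coefficient values
x = np.arange( 8.0 )              # placeholder low-rate input

xo = np.zeros( 2 * len( x ) )
xo[1::2] = x                                  # zero insert
ref = np.convolve( [f2, 1, f1, 0, -f2], xo )  # full-rate filtering

yA = np.convolve( [f2, f1, -f2], x )          # branch from taps 0, 2, 4
yB = x                                        # branch from tap 1 (a pure copy)

print( np.allclose( ref[1::2][:len(yA)], yA ) )   # True: odd output phase
print( np.allclose( ref[2::2][:len(yB)], yB ) )   # True: even output phase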

The upper 3-tap polyphase filter as an asymmetric linear phase FIR is usually implemented efficiently as shown below, eliminating one of the multipliers:

[Figure: efficient structure]

The above block diagram is precisely the approach in the OP's code in the for loop.

Knowing this, we can look at the frequency response of this filter to assess the overall quality of the interpolator, given that the ideal interpolator would pass the signal of interest and completely reject the images due to the zero-insert (for more details on that specifically see: Choosing right cut-off frequency for a LP filter in upsampler).

Below is the frequency response for the filter coefficients chosen by the OP, showing that the image is at the null of the filter, rejecting it completely. However we also see other qualities that make this filter less than ideal, specifically the high amplitude sensitivity around the frequency of interest, suggesting high sensitivity to any variation in the frequency and making this use very limited to one specific tone. Additionally, the gain at other locations is higher than at the signal of interest (in this case by +15 dB), which is often not desired in practical applications due to noise enhancement at those frequencies. Also observe that as we approach Nyquist, the image frequency being rejected gets arbitrarily close, and the relatively wide resonance of this filter would result in significant signal attenuation (no issue if there is no noise involved and sufficient floating-point precision). Using traditional filter design techniques targeting flatness over the desired operating frequency range and maximum rejection, a better filter could likely be achieved with the same processing resources.

[Figure: Frequency Response]

If our signal is essentially a noise-free tone such that we aren't concerned with the amplitude slope versus frequency variation, as in the OP's case, then we can ask if there is an even more efficient filter to provide a null at any given frequency with fewer taps. Assuming we want to limit this to real taps, and we're equally unconcerned with amplitude variation versus frequency and higher gain at other frequency locations, we can design this directly and simply by placing complex conjugate zeros on the unit circle in the z-plane (which is the frequency axis) at any frequency of choice. This results in an even simpler 3-tap second-order FIR response. For example, in the OP's case the signal of interest is at radian frequency $\alpha/2$, and the null therefore needs to be at $\pi-\alpha/2$.

The optimized 3 tap filter would then have zeros on the unit circle at $ e^{\pm j(\pi-\alpha/2)}$:

Resulting in the following 3-tap filter solution. Given that the first and third coefficients are 1, only one real multiplier and two adds are required! The gain can be adjusted with simple bit shifting in increments of 6 dB. (This is still nothing new, just the filter design selection for the interpolation filter, in this case showing what can be done with a 3-tap symmetric FIR filter.)

$$H(z) = (z-e^{- j(\pi-\alpha/2)})(z-e^{+ j(\pi-\alpha/2)}) $$ $$ = z^2 - 2\cos(\pi-\alpha/2)z + 1$$

Which for the OP's frequency results in coefficients [1, 0.1913, 1].

This too could be done in the polyphase approach, or more specifically in the same for loop structure as the OP for direct comparison, with factor1 as 0.1913 and more generically $-2\cos(\pi-\alpha/2) = 2\cos(\alpha/2)$:

d = 2
for n in range( 1, N-1 ):
   y[d]   = x[n] + x[n-1]     # polyphase branch [1, 1]
   y[d+1] = x[n] * factor1    # polyphase branch [factor1]
   d += 2
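A quick numeric check of the zero placement (a sketch, not part of the original answer):

import numpy as np

alpha = 2.95
mid = -2.0 * np.cos( np.pi - alpha/2 )     # = 2 cos(alpha/2), about 0.1913
h = np.array( [1.0, mid, 1.0] )

def H( w ):                                # FIR frequency response at radian frequency w
    return np.sum( h * np.exp( -1j * w * np.arange( 3 ) ) )

print( abs( H( np.pi - alpha/2 ) ) )       # ~0:    the image is nulled
print( abs( H( alpha/2 ) ) )               # ~0.38: the tone passes (with gain droop)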

The following shows the frequency response with the signal scaled by 5 for direct comparison to the OP's filter:

[Figure: frequency response of 3 tap solution]

The above with the 3-tap approach demonstrates what would be a highly efficient up-sampling approach, given it can be done with one multiplier. For applications where there is concern with the effects of noise enhancement, I would also consider, at the expense of additional multipliers, a simple 2nd order IIR notch filter as the interpolation filter. This requires three real multiplications, and with the tighter filtering it will have a longer start-up transient, but it offers a very flat response over the entire frequency band other than the area being rejected, and a very tight notch up to the amount of precision available by adjusting parameter $\gamma$. (See Transfer function of second order notch filter, where in that post $\alpha$ was used, but here I change it to $\gamma$ since the OP used $\alpha$ to denote frequency.) This results in the following implementation:

[Figure: 2nd order IIR implementation]

The frequency response where a $\gamma$ of 0.95 was used (higher $\gamma$ means a tighter notch) is shown below. This filter will have superior gain flatness compared to any of the previous filters as the signal gets closer to Nyquist, which may be of interest if there is any concern related to dynamic range and the noise floor. Note how the signal level falls drastically for the other filters as the signal gets closer to Nyquist, while with this filter we can achieve tighter notches up to the allowable precision used (the notch shown was with a $\gamma$ of only 0.95; we could easily do 0.999 with no issue!)

[Figure: 2nd order notch filter response]

Even better for this application specifically, if one is going to go down the 2nd order IIR path, is to place the pole close to the location of the tone of interest instead of close to the zero as done in the linked post. This provides peaking selectively at the interpolated tone. It is easily derived as follows:

zero locations: $ e^{\pm j(\pi-\alpha/2)}$

pole locations: $ \gamma e^{\pm j(\alpha/2)}$

Resulting in the filter as follows:

$$H(z) = \frac{(z-e^{j(\pi-\alpha/2)})(z-e^{-j(\pi-\alpha/2)})}{(z-\gamma e^{j(\alpha/2)})(z-\gamma e^{-j(\alpha/2)})}$$

Which readily reduces to the following by multiplying out the terms and using Euler's formula:

$$H(z) = \frac{z^2 - 2\cos(\pi-\alpha/2)z + 1}{z^2 - 2\gamma\cos(\alpha/2)z + \gamma^2}$$

This maps to the following implementation:

[Figure: 2nd order peaking implementation]

With the following response using $\gamma$ = 0.95 here as well:

[Figure: 2nd order with peaking frequency response]
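For reference, a hedged sketch of this peaking biquad as a direct-form difference equation (example $\alpha$ and $\gamma$ values; scipy.signal.lfilter is used only for brevity):

import numpy as np
from scipy.signal import lfilter

alpha, gamma = 2.95, 0.95
b = [ 1.0,  2.0 * np.cos( alpha/2 ),         1.0      ]  # zeros at exp(+-j(pi - alpha/2))
a = [ 1.0, -2.0 * gamma * np.cos( alpha/2 ), gamma**2 ]  # poles at gamma exp(+-j alpha/2)

n  = np.arange( 32 )                          # placeholder tone input
x  = 1.234 * np.cos( alpha * n + 2.345 )
xo = np.zeros( 2 * len( x ) )
xo[1::2] = x                                  # zero insert
y = lfilter( b, a, xo )                       # interpolated output at twice the rate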

Dan Boschen
  • Hey Dan, thanks. I'll look at this more carefully, but my initial assessment is similar, not identical. My interpolation values are strictly dependent on the two neighbors and not beyond that. I would also estimate that your computation load is higher by a significant factor. – Cedron Dawg Jul 22 '20 at 14:27
  • Yes that is the same since the coefficients are zero. It is identical. The result of each processing in your for loop is simply the convolution I am doing in the filter. There is no increase in computational burden; it's the same math and same functional operation, you can get the result many different ways. So it is just coming down to having a more efficient way of doing convolution; it is still convolution (interpolation filtering). (But why do for loops when you are using Numpy?) – Dan Boschen Jul 22 '20 at 14:28
  • You're right. Looking closer, the interpolated numbers do match. So now you know a much quicker way to do the same thing. Much quicker. Old habits. For me the code is a documentation of the process as much as it is a computational tool. – Cedron Dawg Jul 22 '20 at 14:32
  • I don't see it as being much quicker than using conv? But doing convolution with a for loop isn't anything new is it? Essentially the task is to come up with the optimum coefficients to pass the signal of interest and reject the images (for any waveform and any interpolation factor). So in this case if you wanted to limit it to 4 non-zero coefficients, you can optimize those for any specific case using traditional filter design approaches - once you have the coefficients you simply convolve them with your zero-inserted waveform. – Dan Boschen Jul 22 '20 at 14:34
  • Does not your convolution require four multiplies per point and then a sum? And you are doing a function call. (I'm thinking of a low level C implementation, Python for prototyping.) Since you gave me code, I can time it. Wager? – Cedron Dawg Jul 22 '20 at 14:38
  • Well, the FFT does what a DFT does functionally too. Nah, I don't like the chat rooms here. You can email me if you like. – Cedron Dawg Jul 22 '20 at 14:41
  • That comes up automatically as we're not supposed to chit chat in the comments. I thought your question was if there was anything novel about the interpolation you were doing or if this has been done before and my response was this is exactly how interpolation is usually done-- at the mathematical level (not coding level) it is the same operations as zero-insert and filter. One can do filtering with a for loop or direct convolution depending on application--- if simulating in Python and using Numpy the vector operations are usually preferred but didn't see that as the main point. – Dan Boschen Jul 22 '20 at 14:44
  • My numpy reading skills are not that good (+ I am lazy). But are we converging on you having implemented the mathematical equivalent of zero insertion -> convolution, implemented using multirate techniques that avoid redundant arithmetic? If so, that is a known approach. Optimizing the filter response when you know that your input signal occupies a very narrow band, and that aliasing will mirror to a nearby frequency, is an interesting problem. – Knut Inge Jul 22 '20 at 14:48
  • @DanBoschen , et al, I need to study this more before I should say anything more. I would love an Olli analysis. – Cedron Dawg Jul 22 '20 at 14:50
  • @CedronDawg if you think it will improve the answer I can change the convolve line to be the same for loop for convolution as you have. Just to show it isn't anything different. I am not suggesting a faster approach, all I am saying is that is exactly what you are doing. – Dan Boschen Jul 22 '20 at 14:50
  • What happens if you frequency shift that «fs/2 - delta» to approximately 0 Hz? Then you would have a very slowly varying waveform that might be interpolated using linear interpolation for good results. – Knut Inge Jul 22 '20 at 14:51
  • @DanBoschen Inlining code is an optimization. Could you get it down to two multiplies, and two adds(add/sub) per point? – Cedron Dawg Jul 22 '20 at 14:52
  • @KnutInge This is a real valued signal, so rotating one component to 0 Hz still leaves the other component so linear interpolation won't work. For a complex tone it is a trivial problem as I see it. I did the equivalent to Nyquist (see my comment to A_A). This seems to be a more efficient pathway. – Cedron Dawg Jul 22 '20 at 14:56
  • I am working on understanding everything you are saying, and your approach to them. Just as here https://dsp.stackexchange.com/questions/69285/optimization-of-harmonics-calculation/69287#69287, the solution I gave was mechanically equivalent to the OPs method, the conceptualization and derivations are different. As far as I can tell, I have implemented the most optimized version of this algorithm possible. That it is well understood in a larger equivalent context is a comfort, and a learning opportunity. But my real question is has anybody seen this trick (optimized implementation) before? – Cedron Dawg Jul 22 '20 at 16:48
  • @CedronDawg My answer is yes- this is simply a polyphase interpolator. What you show is identical to that. I updated my answer to make that clearer. Plus I gave you a solution that only requires one multiplier! See my update. – Dan Boschen Jul 22 '20 at 16:51
  • Thank you, you have given me a lot to chew over. So, can you see this as a simple application of the cosine angle addition formula? – Cedron Dawg Jul 22 '20 at 16:57
  • @CedronDawg You're welcome, I hope I properly answered your question and you found it helpful / useful. – Dan Boschen Jul 22 '20 at 16:59

Error analysis:

Suppose you get the frequency wrong by a little bit. What is the impact?

Every other point is the original signal, so no error introduced there. Therefore, the errors will occur at the newly inserted points.

Define the signal being upsampled as:

$$ y[n] = A \cos( \theta n + \rho ) $$

This relation still holds:

$$ y[n+1] - y[n-1] = - 2 A \sin( \theta n + \rho ) \sin( \theta ) $$

First, calculate the new value using the interpolation formula.

$$ \begin{aligned} y_x[n+1/2] &= A \cos( \theta n + \rho ) \left[ \cos\left( \alpha \frac{1}{2} \right) \right] \\ &+ \left( - 2 A \sin( \theta n + \rho ) \sin( \theta ) \right) \left[ \frac{ \sin\left( \alpha \frac{1}{2} \right) }{ 2 \sin( \alpha ) } \right] \\ \end{aligned} $$

Second, calculate what the value really should be.

$$ \begin{aligned} y[n+1/2] &= A \cos( \theta n + \rho ) \cos\left( \theta \frac{1}{2} \right) \\ & - A \sin( \theta n + \rho ) \sin\left( \theta \frac{1}{2} \right) \\ \end{aligned} $$

Subtract one from the other to get the error. It's messy, so introduce two new constants (relative to $n$).

$$ y_x[n+1/2] - y[n+1/2] = A \left[ C_1 \cos( \theta n + \rho ) - C_2 \sin( \theta n + \rho ) \right] $$

The first thing to notice is that the error value is a possibly shifted, possibly resized version of the input signal, but it only occurs at the halfway points, zero in between. (Kind of inherent zero padding, in a sense.)

The values of the constants are as follows.

$$ \begin{aligned} C_1 &= \cos\left( \frac{\alpha}{2} \right) - \cos\left( \frac{\theta}{2} \right) \\ C_2 &= \frac{ \sin( \theta ) }{ \sin( \alpha ) } \sin\left( \frac{\alpha}{2} \right) - \sin\left( \frac{\theta}{2} \right) \\ &= \sin( \theta ) \left[\frac{ \sin\left( \frac{\alpha}{2} \right) }{ \sin( \alpha ) } - \frac{ \sin\left( \frac{\theta}{2} \right) }{\sin( \theta )} \right] \\ &= \frac{\sin( \theta )}{2} \left[ \frac{ 1 }{ \cos( \frac{\alpha}{2} ) } - \frac{ 1 }{ \cos( \frac{\theta}{2} ) } \right] \\ \end{aligned} $$

The second thing to notice is that if $\alpha = \theta$, the $C$ values will be zero and there is no error.

When $\alpha$ and $\theta$ are close to Nyquist, the cosine of half their angle is near zero. Therefore $C_1$ is nothing to worry about, but $C_2$ can be costly. The closer to Nyquist, the costlier it can be.

On the other hand, if $\alpha$ and $\theta$ are small, the cosines of their half angles approach one. This makes both $C_1$ and $C_2$ small, so there isn't a large cost on being inaccurate.
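To put numbers on this, a small sketch evaluating $C_1$ and $C_2$ for a 0.1% frequency error, near Nyquist versus at four samples per cycle (example values):

import numpy as np

def error_constants( alpha, theta ):
    C1 = np.cos( alpha/2 ) - np.cos( theta/2 )
    C2 = 0.5 * np.sin( theta ) * ( 1.0/np.cos( alpha/2 ) - 1.0/np.cos( theta/2 ) )
    return C1, C2

for alpha in ( 2.95, np.pi/2 ):        # near Nyquist, then 4 samples per cycle
    theta = alpha * 1.001              # a 0.1% "whoops"
    print( alpha, error_constants( alpha, theta ) )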

Looking through the comments again, which I really appreciate, I think Knut and A_A touched on what seems to be a better solution. Most of you will probably think, "Duh, you should have done that in the first place", which, being blinded by finding a new technique, I didn't see.

If we spin up the signal by Nyquist, aka $(-1)^n$, aka $ e^{i\pi n}$, aka flipping the sign of every other sample, the signal becomes (because of aliasing)

$$ x[n] = M \cos( (\pi - \alpha) n - \phi ) $$

Which is now close to DC, meaning a lot more samples per cycle. Too many, actually. It is just as difficult to read phase values near DC as near Nyquist. But, it is a lot easier (for me at least) to downsample with techniques that reduce any noise than to upsample and be possibly introducing error. Downsampling will also reduce the number of downstream calculations. Bonus.
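A quick check of that aliasing identity (a sketch with example parameter values):

import numpy as np

M, alpha, phi = 1.234, 2.95, 2.345
n = np.arange( 32 )

x       = M * np.cos( alpha * n + phi )
flipped = x * (-1.0)**n                # spin up by Nyquist

print( np.allclose( flipped, M * np.cos( (np.pi - alpha) * n - phi ) ) )   # True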

The ultimate purpose is to read the phase and magnitude locally very accurately, which are both preserved (accounting for the sign flip).

So, this obviates the need for upsampling completely for this application. I still think this upsampling technique is really neat. Dan's full answer is going to take me a while to digest, and some of his solutions look superior to what I was planning on employing.

Thanks everyone, especially Dan.

Cedron Dawg
  • Nice analysis, and it should be consistent with the frequency domain plots, especially if we plotted the phase as well. For accurate phase processing independent of the frequency location consider complex processing instead of real. This would maintain accurate phase at every sample, even at DC (basically eliminates all the image issues that plagued this approach). Not sure if that is helpful or applies to the specifics you are doing, and of course increases computations (one complex multiplier requires 4 real multipliers and two adds) but provides such beautiful, clean results. – Dan Boschen Jul 23 '20 at 12:13
  • @DanBoschen That was my goal for this: $$y[n]=e^{i(\omega n+\phi)}=\text{FIR}\left( \frac{e^{i(\omega n + \phi)} + e^{-i(\omega n + \phi)}}{2}\right)$$ The simple 3 tap FIR I was going to use works best at 4 samples per cycle, and terribly (meaning very noise sensitive) near Nyquist, so I was trying to move my frequency away from Nyquist. Now with having a lot of samples, I can find the derivative easily, so going to complex will be: $$ y = D^{d}(x) - i D^{d+1}(x) $$ Where $D$ is the normalized Differ approach in https://www.dsprelated.com/showarticle/896.php and $d$ the repeat count. – Cedron Dawg Jul 23 '20 at 13:09
  • If you look at Figure 6, I can set $a$ so that the frequency response peaks at my $\alpha$, so Differ works as a very smooth band pass filter. The smaller the $\alpha$ the narrower the peak. Then I can read the amplitude and phase at every sample. Yay! – Cedron Dawg Jul 23 '20 at 13:14
  • BTW, check out the denominators in (32) and (39), coincidence or conspiracy? ;-) And the law of cosines declares: $$ C^2 = A^2 + B^2 - 2AB \cos(\theta) = B^2\left[ 1 - 2\left(\frac{A}{B}\right) \cos(\theta) + \left(\frac{A}{B}\right)^2 \right] $$ Your $H(z)$ denominator represents the length of the side of a triangle with the other sides having lengths of $z$ and $\gamma$ subtended by an angle of $\alpha/2$: $$ z^2 - 2\gamma\cos(\alpha/2)z + \gamma^2 $$ It's everywhere! Bad humor, but I'm tired. Thanks again for all your help in this. – Cedron Dawg Jul 23 '20 at 13:39