
I have read many of the posts here on the stack regarding sample rate conversion, but have yet to find one that does not rely upon either asynchronous or synchronous clocks. It occurs to me that the simplest and best approach to sample rate conversion is to first convert the digital signal back to the analog domain using a DAC, and then resample the resulting analog signal at whatever sample rate is needed. Correction: I should have said neither asynchronous nor synchronous; by that I mean complete independence of clocks.

Ken C
  • That's the same thing as having asynchronous clocks. The DAC has one clock and the ADC has another clock. This can be implemented entirely digitally, and is, in chips like the AD1890. It's the polyphase thingie with linear (or a better polynomial) interpolation in between adjacent polyphase fractional delays. To deal with asynchronous clocks, there's a controller (as in control systems) that gently adjusts the sample-rate ratio between input and output. It keeps the output pointer at a fixed delay behind the input pointer. – robert bristow-johnson May 28 '23 at 23:13
  • Hello Ken C. I am not sure what your question is. Would you care to describe it a bit better? Are you asking if this is possible, if this is a valid approach, if this is the best approach, or something else? – ZaellixA May 28 '23 at 23:46
  • Rather than move to analog (a form of interpolation), you could interpolate to a rate that is a multiple of the target rate and then decimate to the desired rate. – Moti May 29 '23 at 04:46
  • @Moti that's only an option for synchronous clocks, which on top of that need to be rationally related with small numerator. – Marcus Müller May 29 '23 at 16:09
  • @MarcusMüller, what you said is strictly true, but polyphase interpolation can be used in combination with some kinda continuous-time interpolation (I use linear interpolation) to efficiently convert with an arbitrary ratio. – robert bristow-johnson May 29 '23 at 17:00
  • definitely, @robertbristow-johnson! I mean, often, software radio folks tend to rationally resample, because that's so efficient esp. in polyphase resampler implementations, to a rate from which arbitrary resampling through "algebraic" interpolation becomes "good enough" and "cheap enough". We're not even restricting ourselves to linear interpolation; approximations to $\sin(x)/x$ are sometimes affordable. – Marcus Müller May 29 '23 at 17:07
  • Well, @MarcusMüller, it can be arbitrarily good. At least if you're willing to spend a little bit on memory for a big lookup table. But memory is cheap. For decades, I have been doing *extremely* high-quality audio resampling with a 512x polyphase filter with 32 taps per phase. I do two adjacent phases and then linearly interpolate between them. Arbitrary precision continuous-time delay. – robert bristow-johnson May 29 '23 at 17:35
  • @robertbristow-johnson yep, in practice that is similar to what I do "by default" if I have enough CPU to spare but need arbitrary resampling; the GNU Radio Polyphase Arbitrary Resampler actually defaults to 32 individual "fractional delay" filters. We could have implemented these individual filters as fast-convolution FFT filters themselves, but it turns out that this doesn't pay for realistic lengths; when calculating many filters at once, you want to keep memory access straight, so that not every memory access is a cache miss. – Marcus Müller May 29 '23 at 18:02
  • (obviously, that only matters on "application processor" style CPU computation. If you have a DSP with nice AGU and no need for multiple layers of caching, well, you win, anyways; a 512-component filter still inspires a bit of awe on my end.) – Marcus Müller May 29 '23 at 18:06
  • 16K words? Or 8K words if you modify the code just a little to take advantage of the even symmetry. Even for a DSP, 8K isn't so bad. And, in creating the table, I used MATLAB reshape() to put the 32 tap coefs for a single phase together in the same place in memory. Then it's a real quick 32-tap FIR filter (actually two of them, and I linearly interpolate between the two). I did this on a SHArC a good quarter-century ago (now SHArCs have it built in). – robert bristow-johnson May 29 '23 at 18:27
  • And for 32 fractional delays, you might need to do better than linearly interpolate between adjacent phases; you might have to do a cubic spline. – robert bristow-johnson May 29 '23 at 18:33

1 Answer


I will refer to the OP's resampling as the "analog approach" or "analog resampling", and the alternate all-digital approach as "digital resampling". Analog resampling is indeed a valid sample rate conversion approach, and it was the primary approach before low-cost digital signal processing became widely available. The bottom line is that when we already have the digital resources for other reasons, digital resampling is often lower cost, offers greater flexibility, and presents less opportunity for noise contamination; with digital resampling, all noise is predictable and can be traded against complexity, driven down to any level up to the limits of the analog noise floor. Converting to analog and resampling still has merit when there is already a conversion to analog for other reasons. Analog resampling as the OP depicts it is not a "bad" approach, but all-digital resampling has compelling advantages. Ultimately, either approach should be considered within the constraints of any specific implementation.

One interesting example application I am familiar with that demonstrates the significant advantages of all-digital resampling over the analog approach is timing recovery in a radio receiver. This is not resampling from one rate to another, but resampling from one time offset to another, and it uses all the same resampling techniques, whether analog or digital. I will first show the classic "analog" timing recovery approach, which I believe is consistent with the OP's desire not to rely on any particular local sampling clock; through timing recovery, the waveform is resampled to an arbitrary rate, in this case the rate that would be synchronous with a remote transmitter. Yet, as I will show, the same functionality can be completely replaced with all-digital resampling, where the implementation is driven from a local clock but still remains synchronized to a remote transmitter using an arbitrarily different clock.

Analog Resampling

The "analog approach" that is consistent (as I will show) with the OP's approach to resampling is the timing recovery implementation detailed in the graphic below. Prior to the use of digital resampling, timing recovery was done (and can still be done) similar to this where the sampling rate of the Analog to Digital Converter (ADC) was adjusted directly to correct for time offsets. This is a baseband sampled implementation where analog hardware prior to the ADC's shown does the job of translating the Radio Frequency (RF) signal from a higher carrier frequency to complex baseband (centered on DC). Similar operations can be done with other receiver architectures (IF sampling, digital down-conversion etc) but this provides a simple example for the timing recovery correction.

Analog Resampling
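
As an aside on that front end: the translation to complex baseband is, mathematically, quadrature downconversion. A minimal simulation sketch of the equivalent operation (the rates, carrier, and filter here are hypothetical stand-ins, not taken from the diagram):

```python
import numpy as np
from scipy import signal

fs, fc = 1e6, 200e3                     # hypothetical sample rate and carrier, Hz
t = np.arange(10000) / fs
rf = np.cos(2 * np.pi * fc * t + 0.3)   # stand-in received carrier

# Quadrature downconversion: multiply by a complex local oscillator,
# then low-pass filter to remove the image at 2*fc. The analog front
# end does this with mixers and low-pass filters before the ADCs.
lo = np.exp(-2j * np.pi * fc * t)
lpf = signal.firwin(101, cutoff=0.1)    # cutoff in units of Nyquist
iq = signal.lfilter(lpf, 1.0, rf * lo)  # I = iq.real, Q = iq.imag
```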

This is consistent with the approach described by the OP, in that the sampling clock itself is adjusted so that the sampling instants land at the correct decision locations within the waveform for optimum demodulation. As will be made clear, the functional operation is identical to sampling the waveform at an arbitrary rate, converting that waveform back to analog, and then resampling it at another arbitrary rate (in this case an arbitrary time offset, but the sampling rate itself can also be different: a time offset that changes with time is a frequency offset, and the correction will continuously update to correct such an offset in the sampling rate).

To explain the graphic: the baseband analog waveform (as two streams, typically labeled "I" and "Q" for "In-phase" and "Quadrature") is sampled at multiple samples per symbol. (A symbol is a unique transmission that encodes a series of bits; the more symbol choices we have for a given waveform type, the more bits we can transmit at once. One example that will work with this receiver is 1024-QAM, where there are 1024 symbol choices and thus 10 bits sent with each unique transmission.) Given no other synchronization between transmitter and receiver, the initial sampling locations are arbitrary, and thus multiple samples per symbol are required in order to resolve timing offsets. (For the same reason there will be frequency offsets from the actual carrier frequency transmitted, which are resolved with "carrier recovery" implementations elsewhere in the receiver; this is not the same as the sampling rate offset in the ADC clock mentioned earlier.)

Not critical to the timing adjustment, but to explain the other receiver operations shown: filtering ("Filt"), gain adjustment ("AGC"), equalization ("Equal"), and calibration ("Cal") all serve to select and correct the waveform of interest for demodulation. A Matched Filter provides the optimum Signal-to-Noise Ratio (SNR) in the presence of white noise, and finally the down-arrow represents down-sampling to one sample per symbol: the sample that we desire to be at the exact sampling instant, where we have the best chance of correctly deciding which unique symbol was transmitted.
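
A minimal sketch of those last two stages (the samples-per-symbol value and the filter are hypothetical stand-ins; a real matched filter is matched to the transmit pulse, e.g. root-raised-cosine):

```python
import numpy as np
from scipy import signal

sps = 4                                  # samples per symbol (hypothetical)
rng = np.random.default_rng(0)
iq = rng.standard_normal(4000) + 1j * rng.standard_normal(4000)  # stand-in I/Q

# Matched filter: a windowed-sinc low-pass stands in for the pulse-matched
# filter here, just to show where it sits in the chain.
mf = signal.firwin(8 * sps + 1, cutoff=1.0 / sps)
filtered = signal.lfilter(mf, 1.0, iq)

# The down-arrow in the diagram: keep one sample per symbol. The timing
# loop's whole job is to make this the best decision instant; here we
# naively take every sps-th sample.
symbols = filtered[::sps]
```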

A Timing Error Detector (TED) creates an error signal from samples either before or after the Matched Filter (depending on which TED is used; some work better before and some after). As in any typical control loop, the error signal is integrated and adjusted in a loop filter, which in this case creates an analog control voltage through a Digital-to-Analog Converter (DAC); this voltage is low-pass filtered (LPF) and applied to a Voltage-Controlled Crystal Oscillator (VCXO), adjusting the sampling rate dynamically such that the error signal is driven to zero. (If the sampling rate is lagging, the VCXO frequency is temporarily reduced; if it is leading, the VCXO frequency is temporarily increased, all autonomously under the control of the loop.) The loop filter output will "float" to whatever value is necessary to drive the TED output closest to zero (on average) and continuously change to keep it there.
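
A sketch of such a loop, assuming a Gardner-style TED (one of several possible detectors; the diagram does not specify which) and a proportional-plus-integrator loop filter with hypothetical gains:

```python
import numpy as np

class TimingLoop:
    """PI loop filter driven by a Gardner TED. Gains are hypothetical;
    real values come from the desired loop bandwidth and damping."""
    def __init__(self, kp=0.01, ki=1e-4):
        self.kp, self.ki = kp, ki
        self.integrator = 0.0   # "floats" to cancel the average timing error

    def gardner_ted(self, prev_sym, midpoint, curr_sym):
        # Gardner error from two symbol-spaced samples and the
        # half-symbol sample between them (needs 2 samples/symbol).
        return np.real(midpoint * np.conj(prev_sym - curr_sym))

    def update(self, prev_sym, midpoint, curr_sym):
        e = self.gardner_ted(prev_sym, midpoint, curr_sym)
        self.integrator += self.ki * e
        return self.kp * e + self.integrator   # control value

# In the analog loop, the control value drives DAC -> LPF -> VCXO;
# in the digital loop below, it steers the interpolator's delay.
loop = TimingLoop()
v = loop.update(prev_sym=1.0 + 0j, midpoint=0.1 + 0j, curr_sym=-1.0 + 0j)
```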

What we see is that this is equivalent to the OP's approach, in that no clock need be related to any other clock in the local receiver: the VCXO adjusts to arbitrary sampling rates and does not depend on any other fixed sampling rate. Functionally this is identical to the OP's approach of sampling the received waveform with a local clock at an arbitrary timing offset, converting that sampled waveform back to analog, and resampling again with another ADC at the correct timing offset. The correction loop, and our ability to allow for a convergence time, simplifies this to sampling once while still using the same "analog" approach of resampling to provide the timing offset correction.

Digital Resampling

With digital resampling, the DAC and LPF are eliminated and replaced with the "Interp/Resample" block shown (interpolation and resampling), providing an all-digital equivalent of the dynamic time offset adjustment. This allows the ADC to be driven from a fixed master clock, which can conveniently be the same clock used for other operations across the local hardware. If the digital processing resources are already in the hardware for other reasons, the elimination of the DAC and related analog circuitry is an attractive consideration. I will refer to another post providing the implementation detail for the interpolation and resampling, for consideration of the replacement of the DAC and the increase in flexibility that digital resampling brings.

Digital Resampling

Functionally, the interpolation and resampling block is equivalent to the diagram below, for each "I" and "Q" path. As done in digital resampling, the waveform is upsampled by $N$ by inserting $N-1$ zeros in between each sample, and a subsequent interpolation FIR filter does the job of bringing those zeros up to the correct interpolated values. A shift register can then shift the samples forward or backward in time as needed prior to down-sampling by $N$ (selecting every $N$th sample) such that the waveform is properly resampled. $N$ is chosen based on our requirement for allowable timing offset. The "timing offset" is the control value coming from the Loop Filter, which will float to whatever value is necessary to drive the timing error out of the TED closest to zero (on average). What is beautiful is that this same functionality can be achieved without ever actually increasing the sampling rate! There are several techniques for all-digital time or delay adjustment: Farrow filters are one that allows for continuous time adjustment, and polyphase filter banks are another that is functionally equivalent to the block diagram shown (again, without increasing the sampling rate!).

interpolation and resampling
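
A deliberately literal rendering of that diagram, useful as a reference to check an efficient implementation against ($N$ and the filter here are hypothetical choices; this is inefficient by design, since a real implementation never forms the rate-$N$ stream):

```python
import numpy as np
from scipy import signal

def fractional_delay_conceptual(x, shift, N=32):
    """Delay x by shift/N of a sample via the literal block diagram:
    upsample by N, interpolate, shift, downsample by N."""
    # Interpolation filter: low-pass at the original Nyquist, with
    # gain N to restore amplitude after zero-stuffing.
    taps = N * signal.firwin(8 * N + 1, cutoff=1.0 / N)
    up = np.zeros(len(x) * N)
    up[::N] = x                              # insert N-1 zeros per sample
    interp = signal.lfilter(taps, 1.0, up)   # bring zeros to interpolated values
    shifted = np.roll(interp, shift)         # the "shift register" (np.roll
                                             # wraps at the edges; fine for a sketch)
    return shifted[::N]                      # select every Nth sample

x = np.sin(2 * np.pi * 0.05 * np.arange(64))
y = fractional_delay_conceptual(x, shift=7)  # delay by 7/32 of a sample
```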

The equivalent interpolation and resampling implementation as provided by the block diagram above, but using polyphase filters, is further detailed at this post and shown in the diagram below. This uses a bank of filters, where the number of filters needed is based on the required time resolution (or the count can be reduced by combining with subsequent polynomial interpolation between adjacent filter outputs). A polyphase interpolator works by creating $N$ delayed copies of the original waveform, resulting in $N-1$ additional copies of the waveform, one for each of the time slots between the original input samples. Each filter is approximately an "all-pass" delay line. To create a digital interpolator to a higher sampling rate, a commutator at the output would visit each filter output in turn, creating the higher-rate output. For use in timing recovery, only one output is needed at any given time, and thus the whole implementation reduces to a single filter structure with coefficients that can be loaded from memory as needed for the desired delay. Typical parameters are 6 or 7 taps for the FIR filter itself (the specific number of taps is driven by a distortion requirement), and the memory requirement is based on the required time precision. This is often a valuable alternative, eliminating the DAC and fixed analog low-pass filter, simplifying the sampling clock (or reusing the master clock used elsewhere in the local hardware), and gaining the further flexibility that all-digital implementations bring. This also avoids degrading SNR through the introduction of analog noise sources. Further, for the cases where the TED can be determined from the output of the matched filter, the interpolation filter and matched filter can be combined, such that the matched filter coefficients are used to create the interpolation filter!

Polyphase Filter bank
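
A sketch of that single-filter-structure form, with hypothetical sizes (32 phases of 7 taps each, in the spirit of the 6-to-7-tap figure above); for timing recovery, only one short FIR runs per output sample:

```python
import numpy as np
from scipy import signal

N, taps_per_phase = 32, 7          # hypothetical: 32 phases, 7 taps each
proto = N * signal.firwin(N * taps_per_phase, cutoff=1.0 / N)
bank = proto.reshape(taps_per_phase, N).T   # bank[k]: the phase-k sub-filter

def delayed_sample(x, n, k):
    """Output sample at index n, delayed by k/N of an input sample:
    a single short FIR with coefficients fetched from the bank."""
    window = x[n - taps_per_phase + 1 : n + 1][::-1]   # newest sample first
    return np.dot(bank[k], window)

x = np.sin(2 * np.pi * 0.05 * np.arange(128))
y = delayed_sample(x, n=64, k=11)   # sample 64, delayed by 11/32 of a sample
# The timing loop's control value selects k (and advances n) each symbol,
# so only one 7-tap dot product runs per output sample.
```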

Note that this will converge to any arbitrary sampling rate, with a "quantization in time" based on the closest delay increment needed to achieve the target rate. The time quantization appears as phase noise or jitter that is bounded by the number of filter banks used.
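
To put a number on that bound (a standard uniform-error approximation, in the same spirit as the $q^2/12$ parallel discussed in the comments below): with $N$ phases and input sample period $T_s$, the delay step is $T_s/N$, so

$$|\Delta t|_{\max} = \frac{T_s}{2N}, \qquad \sigma_t = \frac{T_s/N}{\sqrt{12}}$$

where $\sigma_t$ is the RMS timing jitter if the residual error is modeled as uniform over one delay step.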

There are many cases where either sampling approach can be used, but there are compelling advantages in going to an all-digital approach with no loss in functionality, including the OP's desire for an independently sampled signal. I hope this detailed example in timing recovery provides further insight into the more generalized considerations one may make when facing such architectural choices. That said, there can be advantages to the OP's analog approach when the digital resources are not otherwise there and/or when there are other reasons for such a conversion back to analog, but in most cases it is not "simplest and best".

Dan Boschen
  • The conversion to an analog signal is like interpolation to an "infinite sample rate". This means that "interpolation noise" must be properly taken into consideration, as in any digital conversion. The specific application is important. – Moti May 30 '23 at 19:04
  • @Moti I agree, and nice how it is similarly predictable. There are a lot of parallels such as the useful approximation of quantization noise as a uniform white noise with a total power density (or variance) derived from $q^2/12$ with $q$ as the quant level. We can similarly approximate the quantization in time of the interpolation noise, as the time error for an independent and incommensurate waveform is reasonably approximated as a uniform distribution in time over the time range given by the precision we set. Thus we can choose the precision based on the allowable additive phase noise. – Dan Boschen May 30 '23 at 23:23
  • Meaning given any SNR or distortion requirement, we can ensure enough precision to keep our added distortion sufficiently below that of the distortion or noise in the sampled waveform itself (just as we do when picking how many bits we need when converting from analog to digital). I can't think of a case where we can't get there with either approach (analog resampling or digital resampling). – Dan Boschen May 30 '23 at 23:33
  • Since digital resampling is more predictable, it is preferred. Shannon is alive - meaning the limit is the SNR (distortion could be factored in as colored noise). What is important to understand is that quantization noise refers only to the signal and not the digital filters; particularly for FIR, the "quantization" of the coefficients instead affects the filter shape. – Moti May 31 '23 at 04:21
  • I am at a disadvantage here. I was brought up in an analog world and I tend to think along those lines, so forgive me if I am unable to follow all of the details pertaining to the strictly digital domain. – Ken C May 31 '23 at 14:50
  • @KenC you are actually at an advantage as you start to learn about more and more digital techniques, in that you will have a cross-domain perspective. I think the take-away at this point is that there are very compelling advantages, and (I believe) digital can always meet or exceed the performance of the equivalent analog approach at the cost of complexity. That cost continues to get lower and lower as technology advances for digital techniques. This is hopefully motivation to learn more about it. – Dan Boschen May 31 '23 at 23:57
  • And I can help you with that! Go to https://ieeeboston.org/2022-courses/. The “DSP for Wireless Communications” course starting in July goes through all these resampling techniques starting from the “analog perspective” – Dan Boschen Jun 01 '23 at 00:01