
I know that with three steps:

  1. complex demodulation
  2. lowpass filtering
  3. decimation

we are able to select the frequency content in a band at a reduced sampling rate. But this method only preserves frequency content that lies within half the decimated sampling frequency.
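For concreteness, the three steps above can be sketched in Python/NumPy (the sampling rate, band center, decimation factor, and test tone below are all assumed purely for illustration):

```python
# Sketch of the three steps: 1) complex demodulation,
# 2) lowpass filtering, 3) decimation. All parameters are hypothetical.
import numpy as np
from scipy import signal

fs = 1000.0          # original sampling rate, Hz (assumed)
f_center = 200.0     # center of the band of interest, Hz (assumed)
D = 10               # decimation factor -> new rate fs/D = 100 Hz

t = np.arange(2000) / fs
x = np.cos(2 * np.pi * 205.0 * t)       # test tone inside the band

# 1) complex demodulation: shift f_center down to DC
x_bb = x * np.exp(-2j * np.pi * f_center * t)

# 2) lowpass filter to fs/(2*D) to prevent aliasing after decimation
b = signal.firwin(129, (fs / D) / 2, fs=fs)
x_lp = signal.lfilter(b, 1.0, x_bb)

# 3) decimation: keep every D-th sample
y = x_lp[::D]        # complex baseband at the reduced rate fs/D
```

The 205 Hz tone lands at +5 Hz in the decimated complex baseband, which is the behavior the question describes.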

In a paper (http://pubs.drdc.gc.ca/BASIS/pcandid/www/engpub/DDW?W%3DSYSNUM=504198),

it suggests an algorithm that solves this problem, but I don't know the mathematics behind the algorithm. Can anyone help?

For ease of reference, I have taken a snapshot of the algorithm description. Thank you.

ecook
    Please don't post camera pictures of text. The optimal way would actually be inserting quoted text (there's a button for quotation formatting in the editor), but even screenshots made with the built-in screenshot tool of your operating system would be better. – mmmm Jul 07 '21 at 10:24
  • I am sorry for this matter. Actually I edit the post on my cellphone. I will do as required when I can have access to my computer. – ecook Jul 08 '21 at 08:40
  • I believe the opening premise that complex demodulation won’t preserve the entire spectrum is incorrect. – Dan Boschen Jul 10 '21 at 11:44

2 Answers


I don't believe the second statement given in the copied text and repeated below is accurate, which gives me less confidence that the developer of the new algorithm had a complete understanding of signal processing:

Simple decimation is not good enough since the frequency band of interest may lie above the frequency range allowed by the sampling frequency.

And then the article further suggests that a special new technique that didn't otherwise exist is needed in order to reduce the sampling frequency while retaining the spectral information in a band of interest that lies at frequencies higher than half the new sampling frequency.

No such special new technique is needed. Simple decimation is the combination of a frequency-selective filter and down-sampling (selecting every Dth sample for a decimate-by-D and discarding the rest). This does not preclude the use of a bandpass filter as the filtering solution, which for a real signal retains all spectral information in any band extending over half the decimated sampling rate. To preserve all spectral information in a band extending over the full decimated sampling rate, a complex signal is needed.
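As a sketch of this point, a bandpass filter followed by plain down-sampling aliases the band of interest down to baseband on its own (the 200-250 Hz band, 1 kHz rate, and decimate-by-10 here are hypothetical example numbers):

```python
# "Simple decimation" with a bandpass filter: a real band at 200-250 Hz
# is kept, and taking every D-th sample intentionally aliases it down
# to 0-50 Hz at the new rate. All parameters are assumed.
import numpy as np
from scipy import signal

fs, D = 1000.0, 10                      # new rate fs/D = 100 Hz
t = np.arange(4000) / fs
x = np.cos(2 * np.pi * 205.0 * t)       # tone inside the 200-250 Hz band

# bandpass filter selecting the band of interest (a Nyquist zone
# of the decimated rate, so it folds cleanly to baseband)
b = signal.firwin(201, [200.0, 250.0], pass_zero=False, fs=fs)
x_bp = signal.lfilter(b, 1.0, x)

y = x_bp[::D]                           # 205 Hz aliases to 5 Hz at 100 Hz
```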

The article describes using complex demodulation, meaning shifting a passband of interest to baseband where a low pass filter can be used prior to decimation. This process is referred to as homodyning the signal: multiplying it by a complex tone at the center frequency of the band so as to shift the signal to baseband. Alternatively, the coefficients of the low pass filter can themselves be homodyned with the same process, transforming the low pass filter into a band pass filter (moving the filter to the signal instead of moving the signal to the filter). The decimation process itself will then directly create the spectrum at baseband at the lower sampling rate.
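A minimal sketch of homodyning the filter coefficients, with hypothetical parameters (note that the decimation lands the band exactly at DC here because the assumed center frequency is a multiple of the decimated rate):

```python
# Move the filter to the signal instead of the signal to the filter:
# the lowpass prototype is homodyned to f_center, giving a complex
# bandpass filter; decimation then aliases the band down to baseband.
# All parameters are assumed for illustration.
import numpy as np
from scipy import signal

fs, D, f_center = 1000.0, 10, 200.0     # f_center is a multiple of fs/D
n = np.arange(129)
lp = signal.firwin(129, (fs / D) / 2, fs=fs)        # lowpass prototype
bp = lp * np.exp(2j * np.pi * f_center * n / fs)    # homodyned bandpass

t = np.arange(2000) / fs
x = np.cos(2 * np.pi * 205.0 * t)
x_bp = signal.lfilter(bp, 1.0, x)       # complex bandpass output at 205 Hz
y = x_bp[::D]                           # decimation aliases 205 Hz -> +5 Hz
```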

A further note regarding the unique spectrum that can be preserved when a complex signal is used: complex demodulation preserves the entire frequency range from DC to the sampling rate, or equivalently from $-F_s/2$ to $+F_s/2$ (where $F_s$ is the sampling rate). Any real signal is complex-conjugate symmetric in frequency, so the negative half spectrum ($-F_s/2$ to $0$) is equivalent to the positive half spectrum ($0$ to $+F_s/2$) with a conjugation of the phase, and provides no further information; the unique spectrum of any real signal can therefore only occupy a frequency band extending over half the sampling rate.

There is no such restriction when we work with a complex signal, so the entire frequency range from $-F_s/2$ to $+F_s/2$ (or any band spanning $F_s$) is unique. By multiplying the real passband signal with a complex local oscillator (done with a sine and a cosine and two multipliers), we get the complex output representing the complex down-converted spectrum. This process too can be transformed into quadrature bandpass filters, which apply the Hilbert transform to the real passband signal, converting it to a complex passband signal (when the interest is in preserving the full band over the decimated sampling rate).
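The sine/cosine local-oscillator structure can be sketched as follows; two assumed test tones at $f_c \pm 5$ Hz show that both halves of the band survive as distinct $\pm 5$ Hz lines in the complex baseband (all parameters are hypothetical):

```python
# Complex LO (cosine on I, negated sine on Q) preserves the full band
# around f_center: tones at f_center - 5 and f_center + 5 Hz remain
# distinguishable at -5 and +5 Hz after decimation. Parameters assumed.
import numpy as np
from scipy import signal

fs, D, f_center = 1000.0, 10, 200.0
t = np.arange(4000) / fs
x = np.cos(2 * np.pi * 195.0 * t) + np.cos(2 * np.pi * 205.0 * t)

i = x * np.cos(2 * np.pi * f_center * t)      # in-phase mixer
q = -x * np.sin(2 * np.pi * f_center * t)     # quadrature mixer

b = signal.firwin(129, (fs / D) / 2, fs=fs)   # lowpass on both datapaths
y = (signal.lfilter(b, 1.0, i) + 1j * signal.lfilter(b, 1.0, q))[::D]
```

A single real lowpass path would fold the 195 Hz and 205 Hz tones onto each other; the complex (I/Q) output keeps them apart, which is the point being made above.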

This is explained in more detail at this post.

So if there is interest in preserving a unique spectrum that extends over a frequency span equal to the new decimated sampling rate, the common techniques are complex demodulation followed by two low pass filters (on the I and Q datapaths of the complex output signal) and then down-sampling by selecting every Dth sample, or quadrature bandpass filters followed by the same down-sampling.

Dan Boschen
  • I am sorry, but I think I have not described my question clearly. Referring to the figure, it says that the shifting center frequency is limited to multiples of $\frac{F_{sh}}{M_f}$. Why is there such a restriction? Another question is why the decimation is implemented in the frequency domain by an inverse FFT of the $M/D$ bins of interest? – ecook Jul 08 '21 at 08:36
  • @ecook You can decimate in time by taking every Dth sample, or decimate in frequency by selecting the lowest frequency samples (as if the sampling rate were lower); it is the same result. But I think the approach is flawed in that, given complex demodulation is used, the positive and negative frequency components of the FFT should be included (the negative frequency components would be the upper bins of the FFT). And then why not just select every Dth sample in time, which doesn't even need the IFFT computation? I don't have a lot of faith in that article for the reasons I added. – Dan Boschen Jul 08 '21 at 10:29
  • From the OP's copied text, it refers to "overlap-truncate", which is probably either overlap-and-save or overlap-and-add. So it sounds like they are using FFTs to implement the filter; this may place some limitations on the design, i.e. available decimation factors and center frequencies. I'd have to read the whole article to be sure. FFT implementations of filtering become more efficient when longer filters are involved. If you are also doing decimation, polyphase implementations may become more efficient. – David Jul 08 '21 at 13:13
  • I have updated the figure. This new figure contains the diagram of the algorithm. From the figure, it can be seen that it is implemented in the frequency domain.@Dan Boschen @David – ecook Jul 09 '21 at 06:40
  • @ecook Thanks I agree with David that this would be much more efficient using polyphase decimation and still have my initial impression with no intended disrespect to the authors that their understanding of signal processing at the time of creating this may have been limited resulting in a much more complicated algorithm to achieve a result (or perhaps it is me that is limited and I just don't see the necessity of all this!- I'm always open to that). That said, do you have a specific signal processing question about a particular detail in their algorithm that I didn't yet answer? – Dan Boschen Jul 09 '21 at 12:31
  • I think it's my fault of not describing my question clearly, especially of not having included the implementation diagram in the figure when I submitted my post. Referring back to the question, I am still not clear with the mathematics of the algorithm, and the details such as limitations on decimation factor and center frequency, as pointed out by David. Hope you can help. – ecook Jul 10 '21 at 03:14
  • @ecook your first statements in your question are incorrect in that the three steps you give will indeed give you a spectrum over the full sampling rate. Is that really your goal for understanding the presented algorithm? (as I think you would then prefer to understand how your first three steps would give you the full sampling rate?) – Dan Boschen Jul 10 '21 at 11:18
  • Actually, what I want to ask is the mathematics of the algorithm in the paper. – ecook Jul 10 '21 at 11:25
  • Fair enough— can you clarify where you are stuck, or at least the first thing (just so we don’t have to rewrite the paper) – Dan Boschen Jul 10 '21 at 11:42
  • Sorry for not explaining the question clearly. Do you suggest I ask it in a new post? – ecook Jul 10 '21 at 12:05
  • I suggest that you ask, similar to your comments, about the math in this algorithm rather than the typical way to do this, regardless of whether the premise of the algorithm is flawed. You should delete your incorrect intro, as that isn't the point of your question. It is also very helpful to specify exactly what math you are having trouble with, which could then be answered with a short, concise answer. If you have multiple questions, those may be best as separate posts. – Dan Boschen Jul 10 '21 at 12:24

From the diagram you added, there are a couple of things to note. First, it looks like they are already starting with complex (basebanded) data. Second, they are doing the vernier basebanding (frequency shifting) in the frequency domain. Doing this when starting with overlapped FFTs is going to put some constraints on the frequencies you can shift to. It would take some time and effort to analyze.

David
  • Can you give some references, links, or papers that I can follow? The reference (Mohammed, a high-resolution spectral analysis technique, DREA Memorandum 83/D) given by the paper cannot be accessed. – ecook Jul 13 '21 at 00:55
  • I don't really have any references. Frequency shifting (basebanding) is easily done by multiplying the time series by a complex exponential. You basically have to equate that to the overlapped FFTs and take into account the overlapped portion. – David Jul 13 '21 at 12:33
  • You may want to do a search for the term "Zoom FFT" - there are several papers that use that terminology. It is really the same thing - frequency shifting, low-pass filtering, decimation and FFTs. Most of these were in the late 70's and early 80's and the idea of polyphase implementations wasn't quite as popular at that point. – David Jul 13 '21 at 13:36
  • Do you mean that if polyphase implementation is used, we could obtain the same results but with less computation requirement? Or is it a better way to use polyphase implementation than the method proposed in the paper? – ecook Jul 13 '21 at 14:55
  • A polyphase implementation could be used to achieve the same results. Whether it is more efficient depends on a lot of things: HW arch, filter specifications etc. The polyphase approach could allow pretty much arbitrary decimation factors and arbitrary frequency shifts. The FFT approach may be faster for certain arrangements. – David Jul 13 '21 at 15:32