
My objective is to implement a Gardner loop to remove the timing offset present in my signal. To get the best timing SNR, I downsample the signal to 2 samples per symbol. It is depicted below:

downsampled signal

where the blue and red vertical lines indicate the symbol duration and the sample duration, respectively.
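For reference, the decimation to 2 samples per symbol can be sketched as below. This is only a toy illustration, not my actual signal chain: the original oversampling factor `sps_in`, the QPSK symbol stream, and the rectangular pulse are all assumed placeholders.

```python
import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(0)
sps_in = 8                                   # assumed original samples per symbol
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=256)  # toy QPSK
sig_hi = np.repeat(symbols, sps_in)          # toy rectangular-pulse waveform

# Polyphase decimation from 8 to 2 samples per symbol (includes anti-alias filtering)
sig_2sps = resample_poly(sig_hi, up=1, down=sps_in // 2)
```

After this step `sig_2sps` holds two samples per symbol, which is what the Gardner detector expects.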

Next, I feed my signal into the Gardner loop; here is my code:

import numpy as np
import matplotlib.pyplot as plt

def gardner_loop(time_support, sig):
    # Initialization of the parameters
    sample_rate = 1 / time_support[1]
    sps = int(sample_rate / f_symb)        # samples per symbol (f_symb is a global)
    mid = int(np.ceil(sps / 2))            # offset of the mid-symbol sample x(n-1)
    full = int(sps)                        # offset of the previous symbol sample x(n-2)
    loop_gain = f_symb // 20               # gain/BW of the loop filter
    loop_num = 100
    error_symbol = np.zeros(loop_num)

    error_list = []
    sig_wo_time_err = []
    step = sps

    # Gardner loop
    for i in range(1, loop_num):
        # Index correction derived from the previous error estimate
        shift = int(error_symbol[i - 1] * loop_gain)

        # Gardner TED: mid-symbol sample times the difference of the two
        # surrounding symbol samples, summed over the I and Q branches
        error_symbol[i] = (
            sig.real[mid + step + shift]
            * (sig.real[full + step + shift] - sig.real[step + shift])
            + sig.imag[mid + step + shift]
            * (sig.imag[full + step + shift] - sig.imag[step + shift])
        )

        # Make a step
        step += sps

        # Save the error
        error_list.append(error_symbol[i])

        # Save the signal with reduced error
        sig_wo_time_err.append(sig[full + step + shift])

    # Plotting
    plt.plot(np.arange(1, loop_num + 1), error_symbol)
    plt.xlabel('loop number')
    plt.ylabel('error')
    return np.array(sig_wo_time_err)

The plot of the error is shown in the figure below:

error from Gardner

The constellation looks like this:

constellation

indicating that there is still a timing error.

My first specific question is: can somebody check whether my implementation of the Gardner loop is correct? I am not sure the code does what is needed, and I cannot find any software implementation of the Gardner loop on the internet.

In addition to that, I know that I need to interpolate the signal at specific points, which are indicated by the error from the Gardner loop. My understanding of the process is the following (see the first figure): if the error is positive, the sampling is too late and the interpolation must happen between 0 and the red line (one sample duration). For a negative error, the interpolation happens at the last sample of the symbol (between the red and blue lines). We then use this interpolated point for further processing inside the loop and eventually drive the error to 0.
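The decision logic described above can be sketched with simple linear interpolation between the two neighbouring samples. This is only an illustration of the idea, not the interpolator I will ultimately use; the fractional offset `mu` (derived from the timing error) and the function name are assumptions:

```python
import numpy as np

def interp_linear(sig, base_idx, mu):
    """Linearly interpolate sig at fractional index base_idx + mu, with 0 <= mu < 1."""
    return (1 - mu) * sig[base_idx] + mu * sig[base_idx + 1]

# Toy example: a quarter of the way between samples 0 and 1
sig = np.array([0.0, 1.0, 0.0, -1.0])
val = interp_linear(sig, 0, 0.25)  # 0.25
```

A positive error would place `mu` between the on-time sample and the red line; a negative error would shift it toward the previous sample instead.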

To implement this, inside the for loop of the code above I use the function scipy.interpolate.interp1d to get the value of the signal at the time instant indicated by the error. However, with an interpolation window consisting of only two points (0 and the red line), I get an error and cannot obtain the point in between (if needed, I can post that code as well).

So my last question is: how do I implement the interpolation inside the Gardner loop?
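A likely cause of the interp1d failure: cubic interpolation needs at least four sample points, so a two-point window cannot work. A minimal sketch interpolating over a small window around the target instant (the toy signal, window size, and fractional offset `mu` are assumptions, not my actual values):

```python
import numpy as np
from scipy.interpolate import interp1d

sig = np.cos(2 * np.pi * 0.05 * np.arange(32))   # toy smooth signal
center = 10                                      # index of the nearest sample
mu = 0.4                                         # fractional offset from the TED

# Use a window of ~8 samples around the target instant, not just 2
idx = np.arange(center - 4, center + 4)
f = interp1d(idx, sig[idx], kind='cubic')
val = f(center + mu)                             # signal value at index 10.4
```

With a window this size the cubic fit is accurate for a band-limited signal; using only two samples would at best allow linear interpolation.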

Thank you!

Update:

The eye diagram of the signal before matched filtering and upsampling is shown below:

eye-diagram of signal

The eye diagram after matched filtering (MF):

eye-diag after MF

After interpolation (upsampling at symbol_rate * 5) and without MF, the eye diagram looks like this:

eye upsampled

After upsampling at symbol_rate * 5 and MF:

after MF and upsampling eye diag
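For completeness, the eye diagrams above were produced by overlaying two-symbol-long slices of the waveform. A minimal sketch of that plotting step, with a stand-in sinusoid instead of my actual signal (`sps` and the trace count are assumptions):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')                       # headless backend for the sketch
import matplotlib.pyplot as plt

sps = 10                                    # assumed samples per symbol
t = np.arange(400)
sig = np.sin(2 * np.pi * t / (2 * sps))     # stand-in for the I branch

# Overlay consecutive two-symbol slices, advancing one symbol per trace
span = 2 * sps
for k in range(0, len(sig) - span, sps):
    plt.plot(np.arange(span), sig[k:k + span], 'b', alpha=0.3)
plt.xlabel('sample within two symbol periods')
plt.savefig('eye.png')
```

For a QPSK signal the I-branch eye should show two levels at the sampling instant, as noted in the comments below.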

  • Could you please post the eye diagram of the received signal after the matched filter? Your signal (first plot in the question) does not look correct to me. Also: don't use only two samples to interpolate! You need to use at least 10-20 samples. – MBaz May 19 '21 at 13:50
  • I didn't read all this in detail yet but to pass along quickly the interpolation is typically done with polyphase filters as with that approach you never actually need to increase the sampling rate and it is all done at 2 samples per symbol. I summarize this in more detail at this post if you haven't come across this yet: https://dsp.stackexchange.com/questions/73645/design-a-timing-recovery-algorithm-with-predefined-samples-with-max-amplitude/73647#73647 – Dan Boschen May 19 '21 at 13:51
  • In particular this post which shows how it all fits together including the polyphase resampling for timing adjustment: https://dsp.stackexchange.com/questions/51810/symbol-timing-synchronization-using-a-high-sampling-rate/51812#51812 – Dan Boschen May 19 '21 at 13:55
  • @MBaz thanks for the reply! I have uploaded eye diagrams. What do you think? – Python May 19 '21 at 14:41
  • @DanBoschen thank you for the hint! With polyphase filter interpolation becomes much more intuitive! I will try to incorporate it to the Gardner loop now – Python May 19 '21 at 15:09
  • @Python So, this is a 4-PAM signal right? I'm not sure Gardner works for anything but BPSK and QPSK (I have only implemented it for QPSK myself). – MBaz May 19 '21 at 15:18
  • @MBaz no, this is actually QPSK – Python May 19 '21 at 15:19
  • @Python That's not the eye diagram of a QPSK signal. You're doing something wrong. If you draw the eye diagram of the I portion of the QPSK signal, you should see the signal taking two values when the eye is open. – MBaz May 19 '21 at 15:41
  • @MBaz I am little confused with your first comment, when you say that we have to use 10-20 samples for interpolation. I thought that the whole point of downsampling the signal to 2 samples/symbol was to ensure that there is a sample at the time calculated by Gardner. When we take 20 samples = 20/2 = 10 symbols, then how do we know where to interpolate? – Python May 19 '21 at 15:49
  • @MBaz OK, then I will try to fix my eye-diagram now – Python May 19 '21 at 15:50
  • @DanBoschen I am not sure how to implement the coefficients for FIR filters, but I found the function in python: https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.resample_poly.html that resamples automatically. However, there are no coefficients. Would it still ensure the sampling at correct instances in the loop? – Python May 19 '21 at 15:53
  • I'm confused by your confusion :) so let me rephrase. You have samples $x(nT_s)$ of a signal $x(t)$. You want to interpolate the signal value at an arbitrary time $x(t_0)$. The interpolation process requires, in theory, all samples; in practice, you will need a few samples before $t_0$ and a few samples after. If you take too few samples, the interpolation will be inaccurate. – MBaz May 19 '21 at 15:56
  • @MBaz OK, but as far as I am concerned, out of these few samples before t_0 and after, we take only one that has the smallest timing error. Then we use this sample as a reference, so the error of the following symbol is calculated from that "reference sample". But how do we compute this "reference sample"? – Python May 19 '21 at 16:20
