My objective is to implement a Gardner loop to remove the timing offset present in my signal. To get the best timing SNR, I downsample the signal to 2 samples per symbol. The signal is depicted below:
where the blue and red vertical lines indicate the symbol duration and the sample duration, respectively.
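For completeness, the decimation to 2 samples per symbol is plain downsampling; a minimal sketch of what I mean (assuming it is applied to the matched-filter output, and with sig_mf and sps_in as placeholders for my actual variable names):

import numpy as np

# Placeholders: sig_mf is the matched-filter output, sps_in its samples per symbol
sps_in = 16
sig_mf = np.zeros(100 * sps_in, dtype=complex)

# Keep every (sps_in // 2)-th sample -> 2 samples per symbol
sig = sig_mf[::sps_in // 2]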
Next, I feed the signal into the Gardner loop; here is my code:
import numpy as np
import matplotlib.pyplot as plt


def Gardner_loop(time_support, sig):
    # Initialization of the parameters
    sample_rate = 1/time_support[1]
    sps = int(sample_rate/f_symb)   # samples per symbol (f_symb: symbol rate, defined outside this function)
    sig_n_1 = int(np.ceil(sps/2))   # offset of x(n-1)
    sig_n_2 = int(sps)              # offset of x(n-2)
    loop_filter = f_symb//20        # BW of the loop filter
    loop_num = 100
    error_symbol = np.zeros(loop_num)
    error_list = []
    sig_wo_time_err = []
    step = sps

    # Gardner loop
    for i in range(1, loop_num):
        # Calculate the error
        error_symbol[i] = sig.real[sig_n_1 + step + int(error_symbol[i-1]*loop_filter)] \
            * (sig.real[sig_n_2 + step + int(error_symbol[i-1]*loop_filter)]
               - sig.real[step + int(error_symbol[i-1]*loop_filter)]) \
            + sig.imag[sig_n_1 + step + int(error_symbol[i-1]*loop_filter)] \
            * (sig.imag[sig_n_2 + step + int(error_symbol[i-1]*loop_filter)]
               - sig.imag[step + int(error_symbol[i-1]*loop_filter)])
        # Make a step
        step += sps
        # Save the error
        error_list.append(error_symbol[i])
        # Save the signal with reduced error
        sig_wo_time_err.append(sig.real[sig_n_2 + step + int(error_symbol[i-1]*loop_filter)]
                               + 1j*sig.imag[sig_n_2 + step + int(error_symbol[i-1]*loop_filter)])

    error_to_plot = np.array(error_symbol)
    # Plotting
    plt.plot(np.arange(1, loop_num+1), error_to_plot)
    plt.xlabel('loop number')
    plt.ylabel('error')
    return np.array(sig_wo_time_err)
The plot of the error is shown in the figure below:
and the constellation looks like this:
which indicates that there is still a timing error.
My first specific question: can somebody check whether my implementation of the Gardner loop is correct? I am not sure the code does what is needed, and I cannot find any software implementation of the Gardner loop on the internet.
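For reference, the timing error I am trying to compute per symbol is (as far as I understand the Gardner detector):
$$ e[k] = \Re\{x_{k-1/2}\}\left(\Re\{x_{k}\} - \Re\{x_{k-1}\}\right) + \Im\{x_{k-1/2}\}\left(\Im\{x_{k}\} - \Im\{x_{k-1}\}\right), $$
where $x_{k}$ is the sample at the $k$-th symbol instant and $x_{k-1/2}$ is the sample halfway between symbols $k-1$ and $k$; the offsets sig_n_1 and sig_n_2 in my code are meant to point at these half-symbol and full-symbol samples.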
In addition to that, I know that I need to interpolate the signal at specific points, which are indicated by the error from the Gardner loop. My understanding of the process is the following (see the first figure): if the error is positive, the sampling is too late and the interpolation must happen between 0 and the red line (one sample duration). For a negative error, the interpolation happens near the last sample of the symbol (between the red and blue lines). This interpolated point is then used for further processing inside the loop, which should eventually drive the error to 0.
To implement this, inside the for loop of the code above I use scipy.interpolate.interp1d to get the value of the signal at the time instant indicated by the error, roughly as in the sketch below. However, with an interpolation interval consisting of only two points (0 and the red line), I get an error and cannot obtain the point in between (if needed, I can post that code as well).
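For illustration, here is a simplified sketch of the kind of interpolation I have in mind (not my exact code; the test signal, n and mu are placeholders, with mu standing for the fractional offset derived from the Gardner error):

import numpy as np
from scipy.interpolate import interp1d

# Placeholders: a complex test signal, the index n of the nominal sample,
# and mu, the fractional offset in [0, 1) derived from the Gardner error
sig = np.exp(1j * np.linspace(0, 20, 200))
n, mu = 10, 0.3

# Linear interpolation between the two neighbouring samples, done separately
# for the real and imaginary parts (mirroring how the loop above indexes sig)
interp_re = interp1d([0.0, 1.0], [sig.real[n], sig.real[n + 1]], kind='linear')
interp_im = interp1d([0.0, 1.0], [sig.imag[n], sig.imag[n + 1]], kind='linear')
sig_interp = interp_re(mu) + 1j * interp_im(mu)

With kind='linear' two points are enough; as far as I understand, higher-order kinds such as 'cubic' need more surrounding samples (interp1d requires at least four points for 'cubic'), which may be related to the error I see.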
So my last question is: how do I implement the interpolation inside the Gardner loop?
Thank you!
Update:
The eye diagram of the signal before matched filtering and upsampling is depicted below:
The eye diagram after matched filtering (MF):
After interpolation (upsampling to symbol_rate * 5) and without MF, the eye diagram looks like this:
After upsampling to symbol_rate * 5 and MF: