Goal:
I'm trying to model a waveform in the time-domain for pattern recognition.
My plan:
- Convert signal to frequency domain using FFT
- Zero out the higher-frequency harmonics so the residual content is removed (a low-pass filter)
- Use IFFT to find the deterministic part of the waveform.
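In code form, the plan looks roughly like this (the toy signal and the 10 Hz cutoff are placeholders of mine, not my real data):

```python
import numpy as np
from scipy.fft import fft, ifft, fftfreq

SAMPLE_RATE = 100   # Hz
N = 1400            # 14 s of data
CUTOFF_HZ = 10      # placeholder cutoff

t = np.arange(N) / SAMPLE_RATE
# toy signal: a 1 Hz "deterministic" part plus a 30 Hz component to remove
x = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 30.0 * t)

yf = fft(x)                          # step 1: to the frequency domain
freqs = fftfreq(N, 1 / SAMPLE_RATE)
yf[np.abs(freqs) > CUTOFF_HZ] = 0    # step 2: zero bins above the cutoff
y = ifft(yf).real                    # step 3: back to the time domain
```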
My problem:
Although the filtered waveform matches the shape of the original accurately, its amplitude appears to be 'compressed' (noticeably smaller than the raw signal's).
My question:
What is the reason for this and are there any techniques to fix the amplitude?
Code:
# Perform Fourier transform using scipy
import numpy as np
import matplotlib.pyplot as plt
from scipy.fft import fft, ifft, fftfreq

x = x[:1400]
SAMPLE_RATE = 100  # number of samples obtained in one second - 100 Hz
DURATION = 14
# Number of samples in normalized_tone
N = SAMPLE_RATE * DURATION

yf = fft(x)
xf = fftfreq(N, 1 / SAMPLE_RATE)
plt.plot(xf, np.abs(yf))
plt.show()

# Print the indices of the dominant frequency bins
for index, val in enumerate(yf[:1000], 1):
    if abs(val) > 1000:
        print(index)

ynew = yf.copy()  # copy so the original spectrum is preserved
ynew[1350:] = 0   # zero the tail of the spectrum
print(ynew)

y = ifft(ynew)
plt.plot(y.real)
plt.plot(x)
plt.legend(['filtered signal', 'raw signal'])
plt.show(block=False)
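To rule out something odd in my data, I reproduced the same effect on a synthetic 1 Hz tone (the toy signal below is mine, not the real data):

```python
import numpy as np
from scipy.fft import fft, ifft

SAMPLE_RATE = 100
N = 1400
t = np.arange(N) / SAMPLE_RATE
x = np.sin(2 * np.pi * 1.0 * t)  # pure 1 Hz tone, peak amplitude 1.0

yf = fft(x)
yf[1350:] = 0          # same filtering step as above: zero the last 50 bins
y = np.real(ifft(yf))  # reconstructed waveform

print(x.max())  # peak of the raw tone
print(y.max())  # peak after filtering - noticeably smaller
```

The reconstructed peak comes out at roughly half the original, the same kind of 'compression' I see on the real data.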
Results:
(plots omitted: the filtered waveform follows the raw signal's shape but at a reduced amplitude)