I'm looking into different ways to compute the Power Spectral Density (PSD) of a discrete signal/time series (while toying around with Python), and I'm struggling to understand why the different approaches give (very) different outcomes. In particular, when computing the PSD directly from the FFT of the signal - using the squared absolute values - the resulting scale is several orders of magnitude greater than what I get from alternatives like Welch's method or a periodogram:
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import welch, periodogram
# input
t = np.arange(5999.) / 300
dt = t[1] - t[0]
y = np.sin(np.pi * t) + 2 * np.sin(np.pi * 2 * t)
N = len(y)
fs = 1.0 / dt
# FFT
n = 2 ** (N - 1).bit_length()  # zero-pad to the next power of two
Y = np.fft.fft(y, n)
Ypos = Y[0:int(n / 2 - 1)]
ff = (fs / n) * np.arange(0, n / 2 - 1, 1)
Yre = np.real(Ypos * np.conj(Ypos) / n)
# PSD
wf, wPSD = welch(y, fs, nperseg=N)
pf, pPSD = periodogram(y, fs, scaling='density')
fPSD = (np.abs(Yre) ** 2) / (N / fs)
# plots
figy, axy = plt.subplots()
axy.set_title('y(t)')
axy.set_xlabel('time(s)')
axy.plot(t, y)
figS, axS = plt.subplots(3, 1, constrained_layout=True)
figS.suptitle('$S_{yy}(f)$')
axS[0].set_title('welch')
axS[1].set_title('periodogram')
axS[2].set_title('FFT')
axS[1].sharex(axS[0])
axS[2].sharex(axS[1])
axS[0].plot(wf, wPSD)
axS[1].plot(pf, pPSD)
axS[2].plot(ff, fPSD)
axS[0].set_xlim(0, 3)
axS[1].set_xlim(0, 3)
axS[2].set_xlim(0, 3)
axS[2].set_xlabel('f (Hz)')
plt.show()
which plots the following spectra:
I understand that Welch's method and the Periodogram are simply estimates of the signal's PSD, and some discrepancy is to be expected. But how exactly should I scale the values when computing the PSD directly from the FFT output, in order to achieve somewhat comparable results?
On the other hand, I would expect any power calculation to require some time information (the sampling rate), unlike the raw FFT, which can be rescaled after the fact to fit different sampling rates.
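For reference, this is the scaling I would expect to need based on what I've read: the one-sided PSD from the raw FFT should be |Y[k]|² / (fs · N), with the interior bins doubled to account for the discarded negative frequencies. A minimal sketch of that (assuming no zero-padding so the bins line up with scipy's output, and `detrend=False` so no mean is subtracted before comparing):

```python
import numpy as np
from scipy.signal import periodogram

fs = 300.0
t = np.arange(5999.) / fs
y = np.sin(np.pi * t) + 2 * np.sin(np.pi * 2 * t)
N = len(y)

# one-sided FFT of the raw signal, no zero-padding
Y = np.fft.rfft(y)
f = np.fft.rfftfreq(N, d=1.0 / fs)

# density scaling: |Y|^2 / (fs * N), then double the one-sided bins
psd = (np.abs(Y) ** 2) / (fs * N)
if N % 2 == 0:
    psd[1:-1] *= 2  # even N: the last bin is Nyquist and is not doubled
else:
    psd[1:] *= 2    # odd N (as here): no Nyquist bin, double all but DC

# detrend=False so periodogram does not subtract the mean first
pf, pPSD = periodogram(y, fs, scaling='density', detrend=False)
print(np.allclose(f, pf), np.allclose(psd, pPSD))  # both should print True
```

With that scaling the direct-FFT result matches `periodogram` to within floating-point noise, so my guess is that the discrepancy in my code above comes from the extra squaring and/or the zero-padded normalization - but I'd appreciate confirmation.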
