
Definitions

As far as I know, the power of a signal can be expressed in terms of its discrete-time samples via: $$ P_x=\frac{1}{N}\sum_{n=0}^{N-1}\big|x[n]\big|^2 $$ For details, see this question.

However, it can also be expressed using the frequency-domain representation (Parseval's theorem): $$ P_x=\frac{1}{N^2}\sum_{k=0}^{N-1}\big|X_k\big|^2 $$ where $X_k$ is the $k$-th component of the DFT. The $N^2$ instead of $N$ in the denominator is due to the standard normalization of the DFT, under which the transform isn't unitary.
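For example, a quick numerical check of the two-sided identity (a minimal sketch using numpy with an arbitrary test signal; the variable names are my own):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)               # arbitrary real test signal

P_time = np.mean(np.abs(x)**2)              # (1/N) * sum |x[n]|^2
X = np.fft.fft(x)                           # standard (non-unitary) DFT
P_freq = np.sum(np.abs(X)**2) / len(x)**2   # (1/N^2) * sum |X_k|^2

assert np.isclose(P_time, P_freq)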

My Question

This all verifies nicely in Python for two-sided transforms. However, when I try to apply it to single-sided transforms, it breaks down. What am I doing wrong? How can I apply this to single-sided transforms? By the way, I ran into a similar issue in MATLAB, so I don't think numpy is to blame.

Below is example code in which, for some reason, everything works for WGN but fails for a sampled Gaussian pulse.

import numpy as np
n = 10**7
tolerance = 2/np.sqrt(n)  # rate of convergence according to CLT
# WGN
rng = np.random.default_rng()
var = 4
x = np.sqrt(var)*rng.standard_normal(n)
time_domain_power = sum(x**2)/n
variance = np.var(x)
X = np.fft.rfft(x)/np.sqrt(n)
X_double_sided = np.fft.fft(x)/np.sqrt(n)
frequency_domain_power = sum(abs(X)**2)/len(X)
frequency_domain_power_double_sided = sum(abs(X_double_sided)**2)/len(X_double_sided)
assert abs(time_domain_power - variance) < tolerance
assert abs(frequency_domain_power - variance) < tolerance
assert abs(frequency_domain_power - time_domain_power) < tolerance
assert abs(frequency_domain_power_double_sided - time_domain_power) < tolerance

Gaussian

fs = 10**6
t = np.linspace(-0.5, 0.5, fs+1)
n = len(t)
gaussian_variance = 0.01
x = 1 / (np.sqrt(2 * np.pi * gaussian_variance)) * (np.exp(-t**2 / (2 * gaussian_variance)))
time_domain_power = sum(x**2)/n
X = np.fft.rfft(x)/np.sqrt(n)
X_double_sided = np.fft.fft(x)/np.sqrt(n)
frequency_domain_power = sum(abs(X)**2)/len(X)
frequency_domain_power_double_sided = sum(abs(X_double_sided)**2)/len(X_double_sided)
assert abs(frequency_domain_power_double_sided - time_domain_power) < tolerance
assert abs(frequency_domain_power - time_domain_power) < tolerance  # this one fails

Any suggestions on how to fix my code?

Yair M

1 Answer


For a real input signal $x[n] \in \mathbb{R}$, the output of the FFT has Hermitian symmetry, i.e. $X[k] = X^*[-k] = X^*[N-k]$.
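A quick numerical check of that symmetry (a minimal sketch, assuming numpy; the bin index is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)                  # real-valued input
X = np.fft.fft(x)
N = len(x)
k = 3                                        # any bin other than 0
assert np.isclose(X[k], np.conj(X[N - k]))   # X[k] = X*[N-k]
assert np.isclose(X[k], np.conj(X[-k]))      # equivalently, X[k] = X*[-k]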

Most real FFT implementations just return the first $N/2+1$ elements of the FFT, since the rest is redundant. To get the total power, you need to add the power of the redundant components as well, i.e.

$$ P = |X[0]|^2 + 2\sum_{k=1}^{N/2-1} |X[k]|^2 + |X[N/2]|^2 $$

Note that you don't double the power for the DC and Nyquist bins, since these are unique.
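Applied to the question's Python setup, the same bookkeeping would look roughly like this (a sketch, not the poster's code; it assumes an even-length real signal so that a Nyquist bin exists):

import numpy as np

n = 10**6                                    # even length
x = np.random.default_rng(0).standard_normal(n)

time_domain_power = np.sum(x**2) / n

X = np.fft.rfft(x) / np.sqrt(n)              # one-sided spectrum, n/2 + 1 bins
Pk = np.abs(X)**2
# DC and Nyquist appear once; every other bin stands in for two symmetric bins
one_sided_power = (Pk[0] + 2*np.sum(Pk[1:-1]) + Pk[-1]) / n

assert np.isclose(time_domain_power, one_sided_power)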

In Matlab that looks like this:

%% power of a Gaussian
% create signal
n = 1024;
x0 = gausswin(n,5);
fx0 = 1/sqrt(n)*fft(x0);       % unitary FFT
fxSquared = fx0.*conj(fx0);    % magnitude squared

%% power in time and frequency
pTime = sum(x0.^2);
pFreq = sum(fxSquared);

%% one-sided power
M = 1; % MATLAB indexing offset
% bins k = 1 .. n/2-1 are doubled; DC (k=0) and Nyquist (k=n/2) are counted once
pOne = 2*sum(fxSquared(M+1:M+n/2-1)) + fxSquared(M+0) + fxSquared(M+n/2);

%% print results
fprintf('Double Sided error = %f\n', pFreq - pTime);
fprintf('Single Sided error = %f\n', pFreq - pOne);

Hilmar
  • Thanks for the clarification! However, I'm still puzzled as to how it worked for the Gaussian noise. Also, from the documentation it seems you can't tell from the one-sided transform alone whether the original signal had an even or odd number of samples (and therefore whether there is a Nyquist bin in the spectrum), so I assume this adds some discrepancy between the time- and frequency-domain values. Am I correct? – Yair M Jan 09 '23 at 18:37
  • For odd lengths, you don't add an $|X[N/2]|^2$ term, since $N/2$ is not an integer. – Hilmar Jan 11 '23 at 00:58
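To make the even/odd distinction concrete, here is a small sketch (in Python, matching the question's numpy code; the helper name one_sided_power is mine) that only adds a Nyquist term when one actually exists:

import numpy as np

def one_sided_power(x):
    """Average power of a real signal computed from its one-sided (rfft) spectrum."""
    n = len(x)
    Pk = np.abs(np.fft.rfft(x) / np.sqrt(n))**2
    if n % 2 == 0:
        # even n: the last bin is the Nyquist bin, counted once
        total = Pk[0] + 2*np.sum(Pk[1:-1]) + Pk[-1]
    else:
        # odd n: there is no Nyquist bin; every bin except DC is doubled
        total = Pk[0] + 2*np.sum(Pk[1:])
    return total / n

rng = np.random.default_rng(0)
for n in (1000, 1001):                       # even and odd lengths
    x = rng.standard_normal(n)
    assert np.isclose(one_sided_power(x), np.mean(x**2))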