4

One implementation of Bluestein's algorithm, shown in the link below,

Bluestein's Algorithm

[conj(W), zeros(1, L-2*N+1), conj(W(N:-1:2)) ]

pads the zeros in the middle of the sequence, which eventually gave the correct results.

But padding zeros just at the end does not give the correct results.

Question: When to pad the zeros at the middle of the sequence and when at the end?

jomegaA
  • 659
  • 3
  • 16

4 Answers

4

When working with the DFT and inverse DFT we can zero pad the signal, which serves to interpolate new samples in the other domain. We often see this applied with padding at the end of the sequence, or alternatively in the middle of the sequence as in the application linked by the OP. Below I answer why we would consider padding in the center of the sequence and the implications of doing so. How that is used specifically in an implementation of Bluestein's algorithm I have detailed in another post here.

In general, when working with the Discrete Fourier Transform (and its inverse), if we don't pad in the middle of a sequence, a linear phase will be introduced in the resulting transform. In applications where our only concern is the interpolated magnitude of the result, this is of no consequence.

This answer explains the general considerations and motivations for when we would want to insert zeros in the middle of a sequence, in either the time or frequency domain, rather than zero pad at the end. Padding a sequence with zeros in one domain, in either location (middle or end), interpolates samples in the other domain. This interpolation can be accomplished without introducing additional phase distortion in the other domain by padding in the proper "middle" of the sequence rather than at the end. In many cases this phase distortion is inconsequential since it is a linear phase: a linear phase in the frequency domain is a time shift or delay in the time domain, and a linear phase in the time domain is a frequency shift or translation in the frequency domain. Alternatively we can zero pad at the end of the sequence and then correct for the linear phase error in the result, which may be more convenient than the approaches outlined here.

Proper "Padding in the Center"

Proper symmetry must be maintained when padding in the center, such that we maintain the same number of "positive" and "negative" domain samples.

Padding in the "true center" for an odd-length sequence is done by placing the zeros between the first $\lceil N/2 \rceil$ samples and the remaining $\lfloor N/2 \rfloor$ samples, as in:

$$[x_0, x_1, x_2, x_3, x_4]$$

$$[x_0, x_1, x_2, 0, 0, x_3, x_4]$$

As shown in the link jomega shared in the comments, the location of the zero padding is clear when we consider the alternate and equivalent positive and negative indexing as:

Values: $[x_0, x_1, x_2, x_3, x_4]$

Indexes: $[0, 1, 2, 3, 4]$

Is the same as the following given the periodicity property of the DFT:

Values: $[x_0, x_1, x_2, x_3, x_4]$

Indexes: $[0, 1, 2, -2, -1]$

Thus a zero insert after index 2 can increase the time duration in both the positive and negative direction which serves to not introduce any additional delay:

Values: $[x_0, x_1, x_2, 0, 0, x_3, x_4]$

Indexes: $[0, 1, 2, 3, 4, -4, -3, -2, -1]$
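As a concrete check of the indexing above (my own small illustration, using numpy purely for convenience and placeholder values): inserting the zeros after index 2 is the same as reordering the samples into $[-2,-1,0,1,2]$ order (fftshift), padding symmetrically at both ends, and reordering back (ifftshift).

import numpy as np

x = np.array([1., 2., 3., 4., 5.])                     # placeholder values, N = 5 (odd)

# zero insert after index 2, as in the example above
a = np.concatenate([x[:3], np.zeros(2), x[3:]])

# equivalent: fftshift -> pad one zero at each end -> ifftshift
b = np.fft.ifftshift(np.pad(np.fft.fftshift(x), (1, 1)))

assert np.allclose(a, b)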

For even sequences, the center bin is shared between the "positive" and "negative" domain, and therefore must be split into complex conjugate halves if it is not zero. To pad in the center for even sequences, we split the bin located at $n=N/2$ (for $n=0\ldots N-1$) into complex conjugate halves. For example, with $N=6$, the sample $x_3$ is the shared sample that sits right on the boundary between what would be considered the positive-domain samples and the negative-domain samples, and the zero padding would be done as follows:

$$[x_0, x_1, x_2, x_3, x_4, x_5]$$

$$[x_0, x_1, x_2, x_3/2, 0, 0, (x_3/2)^*, x_4, x_5]$$

Where $(x_3/2)^*$ represents the complex conjugate of $x_3/2$.
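As a hedged numpy sketch of why the conjugate split matters (my own example, not from the linked posts): if the sequence being padded is the DFT of a real signal, splitting the shared bin keeps the interpolated time-domain result real and leaves the original samples untouched.

import numpy as np

x = np.random.randn(8)                       # real signal, even length N = 8
X = np.fft.fft(x)
N, L = 8, 16

Xp = np.zeros(L, dtype=complex)
Xp[:N//2] = X[:N//2]                         # bins 0 .. N/2-1
Xp[N//2] = X[N//2] / 2                       # split the shared bin k = N/2 ...
Xp[L - N//2] = np.conj(X[N//2] / 2)          # ... into complex conjugate halves
Xp[L - N//2 + 1:] = X[N//2 + 1:]             # "negative" bins -N/2+1 .. -1

xi = np.fft.ifft(Xp) * (L / N)               # interpolated (2x upsampled) time signal
assert np.max(np.abs(xi.imag)) < 1e-9        # stays real thanks to the conjugate split
assert np.allclose(xi[::2], x)               # original samples are preserved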

Related questions with additional details on the proper splitting are here and here.

Intuition for Padding in the Time Domain

Due to the reciprocity of the DFT, similar considerations apply in the time domain, with maximum time and maximum frequency swapping roles. The more detailed frequency-domain explanation is given because it may be more intuitive for anyone familiar with sampling. In general, though, for either domain, padding in the center of a sequence will increase the "length" of that domain without distorting the original samples ("length" meaning the sampling frequency when padding the frequency domain, or the time duration when padding the time domain). Below is a simple time domain example, followed by a more detailed frequency domain explanation where we see the same property holds and why.

Consider the time domain sequence given by:

$$x[n] = [1, -1, 1, 1, -1]$$

The DFT of this sequence is:

$$X[k] = [ 1, -1.236, 3.236, 3.236, -1.236]$$

The Discrete Time Fourier Transform (which the DFT samples lie on) is plotted below together with the selection given as $X[k]$ above.

DTFT

Note that the result $X[k]$ for the particular time domain sequence used is real (due to the symmetry I chose in the sequence and that the samples are real). Likewise the DTFT plotted above is completely real.

Only when we pad zeros in the proper center of the sequence, will the result continue to be real and fall exactly on the plot above. Padding zeros to the end or off-center will result in the samples from the DTFT having the same magnitude as above but with introduced phase offsets (so introduces a linear phase distortion; linear as the phase introduced is proportional to the frequency index).
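A small numpy check of the center-padding claim for the example above (my own sketch; the DTFT here is evaluated with the centered indexing $n = [0, 1, 2, -2, -1]$ discussed earlier):

import numpy as np

x = np.array([1., -1., 1., 1., -1.])                      # the sequence above, N = 5
n = np.array([0, 1, 2, -2, -1])                           # centered (circular) indexing

x_center = np.concatenate([x[:3], np.zeros(2), x[3:]])    # pad in the proper center
x_end    = np.concatenate([x, np.zeros(2)])               # pad at the end

Xc = np.fft.fft(x_center)
Xe = np.fft.fft(x_end)

# DTFT of the sequence under the centered indexing (real for this x)
w = 2 * np.pi * np.arange(7) / 7
dtft = np.array([np.sum(x * np.exp(-1j * wk * n)) for wk in w])

assert np.allclose(Xc, dtft)                 # center padding: real, exactly on the DTFT
assert np.max(np.abs(Xe.imag)) > 0.1         # end padding: phase has been introduced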

Intuition for Padding in the Frequency Domain

The N-sample DFT typically has bins extending from bin 0 to bin N-1, corresponding to frequencies from DC up to nearly the sampling rate (one bin short of the sampling rate). Due to periodicity in the DFT, these are equivalent to the DFT bins corresponding to the frequencies in the first Nyquist zone ($|f|< f_s/2$, where $f_s$ is the sampling rate), as mapped with the fftshift function in MATLAB/Octave and in Python (numpy.fft / scipy.fft).

If we wish to increase the sampling rate associated with the DFT samples while maintaining all signals in the first Nyquist zone, we should then pad the frequency domain DFT result in the center.

This may be made clearer by observing the spectrum plots below showing the representation of a continuous-time (CT) sinusoid in the DFT, along with the spectrum of the sampling process and the resulting spectrum of the discrete-time (DT) sinusoid. If these graphics aren't completely clear, I add more detail on their background at the end of this post. We see how the DFT covers the frequency range from DC to the sampling rate, but also the periodicity in the DFT result such that if we moved the shaded region to the left or right, we could still convey all the information contained about the original signal: the signal component at the upper end of the DFT equivalently represents the "negative frequency" components in the signal.

Spectrum for lower sampling rate case

If we wanted to indirectly create a new DFT that would be equivalent to sampling the original signal at a higher sampling frequency, then we should zero pad in the middle of the DFT, as demonstrated in the graphic below.

Spectrum for higher sampling rate case

Side note: in the actual DFT output for a sinusoidal tone we would also see many additional non-zero samples as "spectral leakage", except in the convenient case that the tone completes an integer number of cycles over the DFT duration (its frequency is an integer multiple of $f_s/N$).

If the above plots are confusing, please see this post for more background information. Basically, sampling a signal is the process of multiplying the signal by periodic impulses in time. The Fourier Transform of periodic impulses in time is periodic impulses in frequency (which is what we see in the graphics above). Multiplication in the time domain is convolution in the frequency domain, so the spectrum of the discrete-time (DT) sinusoid is the result of convolving the spectrum of the CT sinusoid with the spectrum of the sampling process.
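As a hedged numpy sketch of that frequency-domain padding (the values for $f_s$, $f_0$, and $N$ are my own choice; the tone is chosen to complete an integer number of cycles so there is no leakage): center-padding the DFT of a tone and inverse transforming reproduces the same tone at twice the sampling rate.

import numpy as np

fs, f0, N = 100.0, 10.0, 50                      # example values (my own choice)
t = np.arange(N) / fs
x = np.cos(2 * np.pi * f0 * t)                   # integer number of cycles -> no leakage

X = np.fft.fft(x)
L = 2 * N                                        # target: double the sampling rate

# center-pad the spectrum (N is even, but the shared bin N/2 is zero here, so no split needed)
Xp = np.concatenate([X[:N//2], np.zeros(L - N), X[N//2:]])
x_up = np.fft.ifft(Xp).real * (L / N)            # same tone, now sampled at 2*fs

t_up = np.arange(L) / (2 * fs)
assert np.allclose(x_up, np.cos(2 * np.pi * f0 * t_up))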

For further examples of this specific to the Bluestein algorithm in this question and the DFT used for efficient circular convolution, and how the zero padding in the middle of a sequence can be done, please see this other post.

Dan Boschen
  • 50,942
  • 2
  • 57
  • 135
  • "increase the sampling rate ... while maintaining all signals in the first Nyquist zone ... pad the frequency domain" but the algorithm pads the time-domain signal. You mention 'reciprocity' but it's unclear how, I guess as you imply, increasing the sampling rate of the spectrum, has any special effect on the resulting convolution. If we shift both the padding and whatever's convolved with by the same amount, the unpadded output is identical, so I'd say the answer's in why we center-pad relative to what we're convolving with (something chirped). – OverLordGoldDragon Mar 30 '22 at 01:03
  • 1
    Padding at the end of a time domain sequence, for example, will introduce a linear phase vs frequency to the prior result in the frequency domain- but padding in the center will leave the original samples unchanged (and interpolate new samples). So this is answering the question stated "Question: When to pad the zeros at the middle of the sequence and when at the end?" in more general terms for all applications not just the one introduced. I'll add a simple example. – Dan Boschen Mar 30 '22 at 01:07
  • I also wonder the reason how we are sure that y(1:N) are the correct sequence instead of any other sequence. – jomegaA Mar 30 '22 at 01:47
  • I meant in the implementation of Bluestein's algorithm... – jomegaA Mar 30 '22 at 01:54
  • @jomegaA I see-- you are asking specifically about the chirpZ algorithm and not in general terms when/where to pad? – Dan Boschen Mar 30 '22 at 02:08
  • I took this as example and try to understand the zero padding at the middle. I have seen many other demonstrations about zero padding in time domain would result in nice looking spectrum but without any change to its spectral content. – jomegaA Mar 30 '22 at 04:22
  • But what confused me are the indices and the difference between padding at the middle and at the end.. – jomegaA Mar 30 '22 at 04:23
  • @jomegaA Read HotPaw's answer as he bottom-lines it nicely, then read my answer again to further see why that is the case. It's all about the phase... – Dan Boschen Mar 30 '22 at 04:42
  • It'd help if OP's title were clearer but the question's body does reference an algorithm where the padding is used exclusively for convolution. I think if this is to be the accepted answer then OP should either unreference the algorithm or the answer should clarify that it doesn't deal with convolution. – OverLordGoldDragon Mar 30 '22 at 16:17
  • The question came after investigating the algorithm. 1) why is zero padding at the center of the sequence is preferred instead of padding at the end. 2) How can they be sure that y(1:N) contains the required signal after the fast convolution. – jomegaA Mar 30 '22 at 16:55
  • 1
    Unsure what you refer to, the entire convolution, not just unpadded portion, is exactly identical, just circularly shifted, unshiftable via np.roll. I'll add an example later – OverLordGoldDragon Mar 30 '22 at 17:29
  • 1
    Why conj(fft(b))? That's not convolution between a and b nor is it what the algorithm does. If you meant fft(conj(b)), then we're just convolving a with c where c=conj(b) and my point stands – OverLordGoldDragon Mar 30 '22 at 21:06
  • 1
    @OverLordGoldDragon oh yes sorry for that distraction- I confused circular correlation w convolution (it is correlation when conjugated)— but regardless my comment holds for that case as well—- for the case of circular correlation using ifft(fft(a) fft(b)) you get very different results if you change where the zero insert is for the reasons I give. – Dan Boschen Mar 30 '22 at 21:13
  • 1
    The example you've shown isn't "padding", it's changing the signal entirely by permuting its parts – OverLordGoldDragon Mar 30 '22 at 21:29
  • 1
    @OverLordGoldDragon what do you call "padding zeros in the middle of the sequence"? – Dan Boschen Mar 30 '22 at 21:33
  • 1
    Anything that changes the effective sequence is no longer "padding", it's an active transformation... I've never seen this called "padding" in context of convolution. If [x0, z, z, x1] is what's meant, it's obviously different from [x0, x1, z, z], but the explanation there is as simple as "the sequence has changed", there's no need to invoke FFT upsampling (and it's not even just upsampling anymore, it's transforming + upsampling). If the algorithm does x0 z z x1 then the real question is why is such a swapping being made. – OverLordGoldDragon Mar 30 '22 at 21:37
  • 1
    The question was how changing padding from the middle to the end could result in a different answer. We do often pad in the middle for reasons I gave- in a way that doesn't change the time frequency relationship of the signal. – Dan Boschen Mar 30 '22 at 21:42
  • 1
    I'm afraid I don't follow. "In applications where are only concerned with the magnitude of the result, this would be of no consequence." but x0 x1 -> x1 x0 completely changes the result of convolution. To get the equivalent in terms of right padding, that's just x1 x0 z z. Else, again, this "padding" is doing more than simply extend the support of convolution. So I also disagree with "padding in the center of a sequence will increase the "length" in either domain without distorting the original samples". The answer's in what this algo's trying to do, and that I have no clue about. – OverLordGoldDragon Mar 30 '22 at 22:04
  • 1
    TLDR the question is "why swap the halves of input" rather than "why center pad". – OverLordGoldDragon Mar 30 '22 at 22:04
  • https://ccrma.stanford.edu/~jos/mdft/Zero_Padding.html

Perhaps another source for the confusion...

    – jomegaA Mar 31 '22 at 09:06
  • 1
    For a single sequence zero-padding in one domain preserves or distorts the phase, depends on zero-padding at the center or at the end. – jomegaA Mar 31 '22 at 11:37
  • For the convolution of two sequences of different lengths, 1) what zero-padding scheme will make sure that lengths of output is equal to lengths of input. y = ifft(fft(x) $\cdot$ fft(h) ) where x and y are input and output respectively – jomegaA Mar 31 '22 at 11:52
  • Sorry, but the more I read this, the more flawed it looks. Padding must occur outside the signal, not between samples, else we're changing the signal itself. Even assuming you meant x2 x3 0 0 x0 x1, I've had a similar question earlier (see revision history), and would have found this answer extremely convoluting and misleading. I don't see why we need to invoke all these additional continuous-time concepts to explain a simple discrete algorithm. -1 – OverLordGoldDragon Mar 31 '22 at 17:14
  • @OverLordGoldDragon it is the implication of the signal and what purpose we are using it—- here the implication is the upper half is identically the negative indexes due to the circular indexing and if our intention is to not change the low index value samples in the related transform (specifically not add the delay in the transform) then we can pad in the center to accomplish that. The rest of the details just add intuition for those familiar with sampling and signal processing both in the analog and digital domains but Hotpaws response is concise and accurate to the point here – Dan Boschen Mar 31 '22 at 17:17
  • This is simply wrong. Yes it's correct about DFT but not about convolution, I show an explicit example in my answer. The "equivalent reindexing" alters the result of convolution. – OverLordGoldDragon Mar 31 '22 at 17:21
  • 1
    I will clarify what my answer is at the start of it, I don’t think that it is incorrect but we might be referring to two completely different things. The question is why we would pad in the center of a sequence and I show the good reasons for doing this and the related considerations; I don’t believe any of that is wrong or flawed. (??) – Dan Boschen Mar 31 '22 at 17:33
2

The phase zero reference of almost all FFT implementations is the first element of the input vector.

If you want the data to remain continuous at and around the phase zero reference, and (circularly) symmetric around the phase zero reference (so that the even portion of the input is represented in the real or cosine result component, and the odd portion is represented in the imaginary or sine result component), then the padding needs to go (circularly) opposite the phase zero reference: around the middle of the input vector, which is (circularly) opposite the first element. Then this padding won't distort the phase (or the even/odd ratio of the input), as it would if the padding were not symmetrically opposite but placed unsymmetrically to one side or the other of the phase zero reference.

All the "circularly" qualifiers are because the basis vectors of the FFT are circular: each starts with a phase of zero at the first element and wraps both ways, forward and backward around the vector, in the canonical FFT and IFFT implementations.

If you are doing a DFT interpolation, you might not want to corrupt the phase. (e.g. if you want a real and even input to produce a strictly real result, etc.)
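A small numpy sketch of that last point (my own example values): a real input that is circularly even about the first element has a strictly real DFT, and only padding opposite the phase zero reference preserves that.

import numpy as np

x = np.array([3., 1., 2., 2., 1.])                  # real and circularly even about index 0
assert np.max(np.abs(np.fft.fft(x).imag)) < 1e-12   # so its DFT is strictly real

# padding (circularly) opposite the first element keeps the symmetry -> still real
x_mid = np.concatenate([x[:3], np.zeros(2), x[3:]])
assert np.max(np.abs(np.fft.fft(x_mid).imag)) < 1e-12

# padding at the end breaks the symmetry -> the DFT picks up phase
x_end = np.concatenate([x, np.zeros(2)])
assert np.max(np.abs(np.fft.fft(x_end).imag)) > 0.1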

hotpaw2
  • 35,346
  • 9
  • 47
  • 90
Another trivial question: if I want to convolve (fast convolution) the input with a finite impulse response of a system and also keep the phase information intact, should I pad zeros at the center for both the input sequence as well as the impulse response? – jomegaA Mar 30 '22 at 07:29
  • -1 for "if you want convolution results to be correct" - padding location does not affect the unpadded result of convolution. – OverLordGoldDragon Mar 30 '22 at 16:06
  • @OverLordGoldDragon Edited and I believe a good answer now as written no longer deserving of a downvote? – Dan Boschen Mar 31 '22 at 10:51
  • @DanBoschen The answer explains too little for how many upvotes it has, so I'm keeping it – OverLordGoldDragon Mar 31 '22 at 17:15
1

I think the other answers unnecessarily obfuscate the matter.

There's nothing special about center padding for sake of convolution. We can pad at any other location, and the unpadded result will be exactly the same - all that changes is the unpadding indices.

Dan answers the more general question of the purpose of center-padding the DFT, which is DFT-upsampling. An alternate perspective is that it's the inverse of subsampling. Regardless, that's immaterial to convolution, which is the question's context.

The algorithm

Taking another look, here's what it does

  1. Generate W

  2. A = [conj(W), 0, flip(conj(W))]

  3. B = x * W

  4. Right-pad both A and B to length L (done by fft(, L)). Note, x is length N, and so is A (or just suppose it is, I can't confirm because the page won't load for some reason)

  5. out = ifft(fft(A_rightpad) * fft(B_rightpad))

  6. out = out[:N]. Unpad is 0 to N because we convolved over x * W which was from 0 to N. The "center padding" simply centers the convolving kernel at 0, such that

    • out[0] corresponds to sum(B * A_centered_at_0)
    • out[1] corresponds to sum(B * A_centered_at_1)
    • ...

That's all. It's not about phase or interpolating or upsampling, especially if the example is x0 x1 x2 x3 -> x0 x1 0 0 x2 x3, which changes the very input. Suppose the convolving kernel is two samples long, [h0 h1], and we look at direct convolution where no padding is needed:

conv_at_1_version_1 = x0 * h0 + x1 * h1
conv_at_1_version_2 = x0 * h0 + x1 * h1

conv_at_2_version_1 = x1 * h0 + x2 * h1
conv_at_2_version_2 = x1 * h0 + 0 * h1

Padding must be done outside the sequence, not between the samples.

This is not padding

x0 x1 0 0 x2 x3

just to be crystal clear.

This is not padding, animated

Convolve $x$ with $h$, where

(animated figure)

x2 x3 0 0 x1 x2
x0 x1 0 0 x2 x3

Minimal step-by-step

N=5, L=9, len(W)=4

# out[0]
x0 x1 x2 x3 x4 0  0  0  0
w2 w3 0  0  0  0  0  w0 w1
# out[1]
x0 x1 x2 x3 x4 0  0  0  0
w1 w2 w3 0  0  0  0  0 w0
...

which is

out[0] = x0*w2 + x1*w3
out[1] = x0*w1 + x1*w2 + x2*w3
...

so what we unpad in the end, 1 to N, will contain the centerings of the convolution kernel (concat conj W etc) over the input.
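Putting the steps above together, here is a minimal numpy rendering under my reading of the algorithm (the function name and the choice of L are mine); the "center padded" kernel is just the conjugate chirp with its negative indices wrapped to the end of the length-L buffer:

import numpy as np

def bluestein_dft(x):
    # DFT via Bluestein's chirp identity: nk = (n^2 + k^2 - (k - n)^2) / 2
    N = len(x)
    n = np.arange(N)
    W = np.exp(-1j * np.pi * n**2 / N)            # chirp
    L = 2**int(np.ceil(np.log2(2*N - 1)))         # enough room for linear convolution

    b = np.zeros(L, dtype=complex)
    b[:N] = x * W                                 # chirp-modulated input, right-padded

    a = np.zeros(L, dtype=complex)                # conjugate-chirp kernel:
    a[:N] = np.conj(W)                            #   indices 0 .. N-1
    a[L-N+1:] = np.conj(W[:0:-1])                 #   indices -(N-1) .. -1 wrapped to the end

    y = np.fft.ifft(np.fft.fft(b) * np.fft.fft(a))
    return W * y[:N]                              # unpad the first N samples, re-chirp

x = np.random.randn(7) + 1j * np.random.randn(7)
assert np.allclose(bluestein_dft(x), np.fft.fft(x))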

Code ("pad location doesn't matter")

import numpy as np
from numpy.fft import fft, ifft

for N_seg in (8, 9, 10):
    x0 = np.random.randn(N_seg) + 1j * np.random.randn(N_seg)
    x1 = np.random.randn(N_seg) + 1j * np.random.randn(N_seg)
    z = np.zeros(N_seg)
    y = np.random.randn(N_seg*4) + 1j * np.random.randn(N_seg*4)

    xp0 = np.hstack([x1, z, z, x0])  # zeros centered
    xp1 = np.hstack([z, x0, x1, z])  # signal centered
    xp2 = np.hstack([x0, x1, z, z])  # zeros right

    out0 = ifft(fft(xp0) * fft(y))
    out1 = ifft(fft(xp1) * fft(y))
    out2 = ifft(fft(xp2) * fft(y))

    assert np.allclose(out0, np.roll(out1, 2*N_seg))
    assert np.allclose(out0, np.roll(out2, 3*N_seg))

OverLordGoldDragon
  • 8,912
  • 5
  • 23
  • 74
  • Attached a small matlab code snippet as my answer and now I am totally confused. – jomegaA Mar 30 '22 at 18:44
  • I added a simple example to the beginning of mine- what am I missing? You only padded one of the two sequences, so makes sense it would only be a rotation in this sense (which is the linear phase in the other domain). I think that rotation is the point of the question/answer-- where we pad rotates the result (delay in time of linear phase in freq). Not sure there is any disagreement here. I think that was hotpaws point as well.. – Dan Boschen Mar 30 '22 at 21:38
  • @DanBoschen (cont'd) Let's make it fully explicit: are you saying $[x[0], x[1], 0, 0, x[2], x[3]]$ is padding? where $x[1]$ is the observation/measurement that follows $x[0]$. – OverLordGoldDragon Apr 01 '22 at 16:23
  • We could possibly call it "stuffing" although I typically associate stuffing with placing 1 or many zeros in between every sample in contrast to padding where it is typically at the end of the sequence. Terminology may need to be improved and best with an associated definition. For this comment let me temporarily (and humorously) call it pad-stuffing with the salient point of the question that in DSP we do occasionally choose to pad-stuff with purpose and it has implications to the circular shift of the signal in the other domain with no other change when working with DFT and inverse DFT. – Dan Boschen Apr 01 '22 at 16:34
  • Where "circular shift" is a delay in the time domain and a frequency translation in the frequency domain. – Dan Boschen Apr 01 '22 at 16:35
  • So I think you showed that here, which is why I believe we aren't in any way disagreeing. I am just not sure this answers the question; which I keyed on as asking "when do we pad in the center vs at the end" fully understanding what the OP meant by "pad" in this context. Maybe I misunderstood the question, I am always open to that! But that is what I answered specifically, and I don't think it is wrong or misleading as you proposed (unless I did something incorrect in the examples; I am totally open to that and would want to fix it if I knew what it was). Pointing that out would be helpful. – Dan Boschen Apr 01 '22 at 16:38
  • @DanBoschen There's only one correct "center padding": $[x[2], x[3], 0, 0, x[0], x[1]]$. Else it's equivalent to inserting silence in an audio file, like between letters of a spoken word, it fundamentally changes the signal. In the "general case" of center padding, we must first fftshift the input and then center-pad - big difference - but what you wrote suggests omitting this step. – OverLordGoldDragon Apr 01 '22 at 16:44
  • No I don't agree that it is necessary- they are rotationally equivalent and depends what we want the first sample to represent as far as its position in time (which is what Hotpaw was saying). When the first sample represent n=0 in time (or k=0 in frequency) and the last sample represents n=-1 or equivalently n = N-1 then we would pad just as I wrote, and we maintain that equivalence. If you want to change the first sample to represent n= -2 in your example, then you would do what you suggested. Neither is "wrong". – Dan Boschen Apr 01 '22 at 16:57
  • @Dan pictures ‎ – OverLordGoldDragon Apr 01 '22 at 17:33
  • Very nice (+1) I always like your animations- the bottom graphic would be what would happen if we didn't interpret the first sample in the sequence to be at n=0 (as typically done in the presentation of the samples of the DFT and IDFT). That is not what I was presenting. I'm about to travel for the weekend but will review to see if I can make that point clearer. – Dan Boschen Apr 01 '22 at 18:37
  • @Dan Glad we agree that the bottom one is undesired. We still appear to disagree on the x0 x1 stuff though (I say first at n=0 yields the bottom one); I'd encourage you try a hands on example, but I could add one later. Safe trip. – OverLordGoldDragon Apr 01 '22 at 18:50
  • 1
    Ok Understood - I am on my phone but will look closely at what you are referring to in my answer. Thanks – Dan Boschen Apr 01 '22 at 19:10
  • @OverLordGoldDragon you wrote in dispute of my explanation that the zero padding that is done is not about interpolation or upsampling (which is the frequency analogy for what we are doing and just that in the time domain). This is exactly what it is about based on the need to implement linear convolution instead of the circular convolution it would be. By doing it as done it centers n=0 (which means add no delay) and interpolates the frequency domain which means stretches the time domain. It is all the same- these are different ways to explain the same thing. – Dan Boschen Apr 01 '22 at 21:42
  • @DanBoschen It's interpolation and upsampling of an underlying function that implicitly swaps two halves of the input, effectively breaking the sequence. It is the second visual. – OverLordGoldDragon Apr 02 '22 at 17:18
  • So no more disagreement then? – Dan Boschen Apr 02 '22 at 17:23
  • What you should also mention to be complete is how padding the sequence when using FFT for fast convolution results in the same result as linear convolution; and this is where the padding in the middle of the sequence is done in the referenced algorithm. – Dan Boschen Apr 02 '22 at 17:40
  • 1
    @DanBoschen Apparently I've misread your negative indexing... yes, we seem to agree. But as written your answer reads as "truth in details" as in "devil in details", there's many statements and notation that suggest the fallacy of center padding a sequence like $[x[0], x[1], 0, 0, x[2], x[3]]$, rather than $x[-1], x[-2]$ for the last two. I guess you didn't object to this example earlier because you thought I referred to the array x rather than a discrete-time formula - but that's part of my problem with your notation, I don't know why not just write $[x_0, x_1, 0, 0, x_{-1}, x_{-2}]$ – OverLordGoldDragon Apr 02 '22 at 17:44
  • @DanBoschen When we talk padding, we talk about something like np.pad(x, pad_width), where x is our actual signal we want to pad, is it not? Inserting zeros in the middle of that sequence is wrong, and is what your answer suggests unless read very carefully. You speak of circshift + pad, not pad, and this should be made very explicit. Here, for one, you literally wrote "in the middle of a sequence". – OverLordGoldDragon Apr 02 '22 at 17:47
  • @DanBoschen Fair suggestion @ linear, figured it more a side point but edited in for clarity. – OverLordGoldDragon Apr 02 '22 at 17:53
  • @DanBoschen Actually I recalled that linear convolution implements actual convolution, as in, impulse response of a system, which means output's longer than input. Two key differences with what goes on here: 1) out len == in len, 2) $h$ is centered at each $x_i$, i.e. we don't have h[0] * x[i] + x[1] * x[i + 1] .... So I wouldn't call it linear? – OverLordGoldDragon Apr 02 '22 at 22:52
  • We zero pad and the result is the output is longer than the non zero padded input. If we don’t zero pad (and do it with the fast convolution using FFT’s) the result is circular convolution. We just need to pad long enough otherwise the tails will still wrap. – Dan Boschen Apr 02 '22 at 23:01
  • @DanBoschen But len(out) == len(in) in this algorithm – OverLordGoldDragon Apr 02 '22 at 23:02
0
close all;
clear all;

x = [2 5 7 9 6];
h = [1 4 7 3 7];

y0 = conv(x, h)                     % reference: linear convolution

N = length(x);
M = length(h);

L = N + M - 1;                      % length needed for linear convolution via the FFT
L = 2^nextpow2(L);

% zero padding at the end of both sequences
xzp = [x, zeros(1,L-N)];
hzp = [h, zeros(1,L-M)];

y1  = ifft( fft(xzp).*fft(hzp) )    % y0 followed by zeros
y2  = conv(xzp, hzp)

% zero padding in the middle of both sequences
xcp = [x(1:ceil(N/2)), zeros(1,L-N), x(ceil(N/2)+1:N)];
hcp = [h(1:ceil(M/2)), zeros(1,L-M), h(ceil(M/2)+1:M)];

y3  = ifft( fft(xcp).*fft(hcp) )    % compare with y0
y4  = conv(xcp, hcp)
jomegaA
  • 659
  • 3
  • 16