So, I'm currently working on a program that computes a spectrogram in real time from a microphone signal. Now I want to add a "scaling" feature that lets me look at part of the spectrum more closely without losing frequency resolution (the example was created using a simple DFT):
This feature is easy to implement with the plain DFT algorithm ($DFT(freq) = \sum_{n = 0}^{N - 1} samples(n) \cdot e^{-i \cdot \frac{2\pi n}{N} \cdot freq \cdot scalingFactor}$), but the DFT takes a lot of time to compute.
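To make the formula above concrete, here is a minimal sketch of the scaled DFT (the function name `scaled_dft` and the use of NumPy are my own choices, not part of any library):

```python
import numpy as np

def scaled_dft(samples, scaling_factor):
    """Direct DFT with a frequency scaling factor: bin k corresponds
    to frequency k * scaling_factor of the ordinary DFT, so the
    spectrum is 'zoomed in' by 1/scaling_factor without changing N."""
    N = len(samples)
    n = np.arange(N)
    freqs = np.arange(N)
    # N x N matrix of twiddle factors e^{-i * 2*pi*n/N * freq * scalingFactor}
    W = np.exp(-1j * 2 * np.pi * np.outer(freqs * scaling_factor, n) / N)
    return W @ samples

# Sanity check: with scaling_factor = 1 this reduces to the ordinary DFT.
x = np.random.default_rng(0).standard_normal(64)
assert np.allclose(scaled_dft(x, 1.0), np.fft.fft(x))
```

The matrix-vector product makes the $O(N^2)$ cost explicit, which is exactly why this direct approach is too slow for real-time use.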
To increase the program's performance I'm using the FFT algorithm instead. So the question is: how can I implement the same feature with the FFT without a severe loss of performance? Increasing the number of samples in the transform ($N$) doesn't help, because it decreases the time-domain resolution. I've also tried applying the trick from the DFT example to the FFT recombination step, $FFT(freq) = fft_{even}(freq) + e^{-i \cdot \frac{2\pi}{N} \cdot freq \cdot scalingFactor} \cdot fft_{odd}(freq)$, which also didn't help.
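For reference, the recombination step I tried looks roughly like this (a sketch of a single radix-2 decimation-in-time stage; the function name `fft_recombine` is mine). With `scaling_factor = 1` it reduces to the standard butterfly; with any other value the inner sub-transforms are still plain FFTs, which is where the approach breaks down:

```python
import numpy as np

def fft_recombine(x, scaling_factor=1.0):
    """One radix-2 decimation-in-time step: combine the FFTs of the
    even- and odd-indexed samples. The twiddle factor is modified by
    scaling_factor as in the question; the sub-transforms themselves
    are unscaled FFTs, so for scaling_factor != 1 the result no
    longer matches the scaled DFT."""
    N = len(x)
    E = np.fft.fft(x[0::2])   # FFT of even-indexed samples
    O = np.fft.fft(x[1::2])   # FFT of odd-indexed samples
    k = np.arange(N // 2)
    tw = np.exp(-1j * 2 * np.pi * k * scaling_factor / N)
    return np.concatenate([E + tw * O, E - tw * O])

# With scaling_factor = 1 this is the ordinary radix-2 combine:
x = np.random.default_rng(1).standard_normal(8)
assert np.allclose(fft_recombine(x), np.fft.fft(x))
```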
Edit: the answers to the question FFT for a specific frequency range do provide methods for changing the frequency range of the FFT output, but (if I'm not mistaken) they require computing the Fourier transform two or more times (an FFT plus an IFFT to apply the filters), which increases the computing time. So, is there a better way to solve this problem?
