I want to detect chords played on a guitar as early as possible, but my current approach of running a filter bank over a sliding window seems to introduce too much lag.
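For concreteness, here is roughly the kind of thing I am doing now; it is only a sketch, and the sample rate, window length, hop size, and frequency set are placeholders rather than my exact values. The point is that the analysis window itself sets a floor on the latency.

```python
import numpy as np

# Sketch of a sliding-window filter bank (illustrative parameters only).
fs = 44100               # sample rate (Hz)
window = 4096            # analysis window length (~93 ms at 44.1 kHz)
hop = 512                # hop between successive analyses

# Example filter-bank centers: open-string fundamentals plus a few frets.
open_strings = np.array([82.41, 110.0, 146.83, 196.0, 246.94, 329.63])
centers = np.concatenate([open_strings * 2 ** (fret / 12) for fret in range(5)])

def filter_bank_energies(frame, fs, centers):
    """Correlate one windowed frame with a complex exponential per center
    frequency (i.e. evaluate the DTFT at those frequencies)."""
    n = np.arange(len(frame))
    win = np.hanning(len(frame))
    return np.array([np.abs(np.dot(frame * win,
                                   np.exp(-2j * np.pi * f * n / fs)))
                     for f in centers])

def analyze(x):
    """Slide the window over the signal; each step looks `window` samples
    into the past, which is where the lag comes from."""
    return np.array([filter_bank_energies(x[s:s + window], fs, centers)
                     for s in range(0, len(x) - window, hop)])
```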
Would the required observation time decrease if I used a model in which only a finite set of tones is possible, and only a finite number of them can sound simultaneously (i.e. the individual strings of the guitar)?
I would suppose that for the system not to be underdetermined, the number of samples would have to be at least the number of guitar strings, and the time window would have to be at least on the same time scale as the period of the shortest tone. Or maybe the number of samples would have to be at least the dimension of the model space (something like ~6 * 20, i.e. strings times frets)? And presumably the amplitude resolution of the microphone, together with the lowest frequency present, would impose a constraint too?
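To make the question concrete, this is roughly the model I am imagining: the short observation window is written as a sum of sinusoids drawn from a fixed dictionary (one candidate frequency per string/fret position), and the amplitudes are solved for by least squares. This sketch does not enforce the "at most 6 simultaneous tones" constraint (that would need a sparse solver), the dictionary and frame length are only illustrative, and over a very short window the columns for nearby frequencies will be nearly collinear.

```python
import numpy as np

# Finite-tone model sketch: fit a fixed dictionary of sinusoids to one frame.
fs = 44100
n_samples = 512          # observation window: ~11.6 ms (illustrative)

open_strings = np.array([82.41, 110.0, 146.83, 196.0, 246.94, 329.63])
candidates = np.concatenate([open_strings * 2 ** (fret / 12)
                             for fret in range(20)])   # 6 strings * 20 frets

t = np.arange(n_samples) / fs
# One cosine and one sine column per candidate frequency, so the unknown
# phase is handled linearly: 2 unknowns per candidate.
A = np.hstack([np.cos(2 * np.pi * candidates[None, :] * t[:, None]),
               np.sin(2 * np.pi * candidates[None, :] * t[:, None])])

def fit_amplitudes(frame):
    """Least-squares fit of the dictionary to one short frame; the amplitude
    of candidate k is the magnitude of its (cos, sin) coefficient pair."""
    coeffs, *_ = np.linalg.lstsq(A, frame, rcond=None)
    k = len(candidates)
    return np.hypot(coeffs[:k], coeffs[k:])

# With 120 candidates there are 240 unknowns, so in this naive formulation at
# least 240 samples (~5.4 ms at 44.1 kHz) are needed just to avoid being
# underdetermined, before noise or conditioning is even considered.
```

Counting unknowns this way is what leads me to the "dimension of the model space" guess above; what I am unsure about is whether the finite-tone assumption (and the at-most-6-strings sparsity) actually lets the window be shorter than what a generic filter bank needs.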