I have been designing a bandpass filter with a passband between $95\textrm{ kHz}$ and $105\textrm{ kHz}$ (using MATLAB's fdatool and scipy.signal.remez in Python), and I have noticed that as I increase the sampling frequency, the number of coefficients needed to implement the filter increases. Why is this?
I presume it is either because the ratio between the passband width and the full frequency range ($0 \rightarrow$ Nyquist) gets smaller, which makes the specification more difficult to achieve?
Or is it because the filter has to attenuate a larger range of frequencies? For example, with a $400\textrm{ kHz}$ clock the coefficients must provide attenuation from $0 \rightarrow 95\textrm{ kHz}$ and from $105\textrm{ kHz} \rightarrow 200\textrm{ kHz}$, but with a $1\textrm{ MHz}$ clock they must provide attenuation from $0 \rightarrow 95\textrm{ kHz}$ and from $105\textrm{ kHz} \rightarrow 500\textrm{ kHz}$, and thus more coefficients are required?
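To illustrate what I'm seeing, here is a minimal sketch using scipy.signal.kaiserord (a Kaiser-window filter-order estimator, not my actual remez design) that shows the estimated tap count growing with the sample rate for a fixed spec in Hz. The $60\textrm{ dB}$ attenuation and $5\textrm{ kHz}$ transition widths are placeholder values I picked for illustration, not my real design targets:

```python
from scipy.signal import kaiserord

# Placeholder design targets (for illustration only):
# 60 dB stopband attenuation, 5 kHz transition band on each side.
ripple_db = 60.0
transition_hz = 5e3

for fs in (400e3, 1e6, 4e6):
    # kaiserord expects the transition width normalized to Nyquist (fs/2),
    # so the same width in Hz becomes a smaller fraction as fs grows.
    numtaps, beta = kaiserord(ripple_db, transition_hz / (fs / 2))
    print(f"fs = {fs/1e3:>6.0f} kHz -> approx. {numtaps} taps")
```

Running this, the estimated number of taps scales roughly in proportion to the sampling frequency, which matches what I observe with fdatool and remez.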