This complements the other answer while revealing its own modulus insights. (It's posted separately because the site said the answer body was too large.)
The denominator; sampling rate vs duration
For convenience, discarding modulus,
$$
K = \frac{1}{\cos(2\pi f/N) - \cos(2\pi k/N)}
$$
For a given $f$, the left cosine is fixed and is within $[-1, 1]$. The right cosine sweeps $[-1, 1]$ once over $[0, N/2]$, and is guaranteed to approach left's value. For $f=3.5$, $N=32$, we have
where the x-axis is fractional, $k/N$. We add $K_{100}=K/100$ (rescaled to fit in same plot), in green:
where the dashed lines intersect $K_{100}$ at the $k$ nearest to $f$, which is where the DFT is actually evaluated.
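A quick numeric sketch of the denominator factor (plotting omitted; $f$, $N$ taken from the text, and the peak location is the point of interest):

```python
import numpy as np

# K = 1 / (cos(2*pi*f/N) - cos(2*pi*k/N)) for f = 3.5, N = 32
f, N = 3.5, 32
k = np.arange(N // 2 + 1)   # one-sided; conjugate symmetry covers the rest
K = 1.0 / (np.cos(2*np.pi*f/N) - np.cos(2*np.pi*k/N))

# |K| peaks at the integer k nearest to f (f = 3.5 sits between 3 and 4)
k_max = int(np.argmax(np.abs(K)))
print(k_max)  # → 3
```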
A helpful requisite for the following is "Addendum: Sampling Rate vs Duration" in the original article. We compare two scenarios, but we change our reference case to $f=3.25$:
Doubled sampling rate: $N \rightarrow 2N$, unchanged $f$ -- halved $f/N$
Absolute $f$ is unchanged. Samples are more tightly packed around any given $f$. For our $f=3.25$, the distances to $k=3$ and $k=4$ are approximately halved, hence the DFT is doubled.
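The "DFT is doubled" claim is easy to sanity-check numerically near the peak (a sketch; $f = 3.25$ as in the text, $\phi = 0$ assumed):

```python
import numpy as np

# Doubled sampling rate: N -> 2N with f fixed; compare |DFT| at bins nearest f
f = 3.25
mags = {}
for N in (32, 64):
    n = np.arange(N)
    mags[N] = np.abs(np.fft.fft(np.cos(2*np.pi*f*n/N)))

ratios = {k: mags[64][k] / mags[32][k] for k in (3, 4)}
print(ratios)  # both close to 2
```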
To see the effect on all other $k$, we compare the $K$. Indeed, the new $K$ is nearly a simple shift and doubling of the previous $K$:
Orange is the previous $K_{100}$, orange dashed shifts it by $f/N_\text{new} = f/64$ and doubles it. Green is the current $K_{100}$. Over the $k$ closest to $f$, the realized values are very nearly the same (i.e. doubled relative to previous). Over $k$ further out, they're also very close, except near DC. Note, near DC, the numerator also behaves differently, so this picture's incomplete.
Doubled duration: $N \rightarrow 2N$, $f \rightarrow 2f$ -- unchanged $f/N$
The denominator curve is completely untouched, as expected: its only dependence on $f, N$ is via $f/N$. But the denominator realization certainly changed: the tighter-packed $k$ samples it closer to its asymptote.
Why did we have to double the curve before? Before, we moved in fractional space, hence also along the sweep of the numerator, which changes most rapidly near DC/Nyquist. We're double-dipping by sampling closer to the asymptote and doubling all values, but that's to cancel the numerator being lower there (which, without peeking, could also have been higher, but we know the end result).
Of course, the DFT certainly changes, and in this case the change is explained entirely by the numerator, whose curve has $f/N$- and $f$-dependency, the latter being much more dominant.
$\Uparrow$ Duration $\rightarrow$ $\Uparrow$ Resolution [DR]
The numerator is too weak to influence resolution: the objective is to separate $f_0, f_1$, which by problem definition are apart by less than $[k, k + 1]$, over which the numerator is nearly constant.
From the previous plot, the case for duration is obvious: for another red line packed between blues in $N=32$, going to $N = 64$ inserts black dots between existing black dots:
Yet, with sampling rate, only $N$ increases, and the red lines grow closer in ratios to compensate for tighter black dots:
Mathematically: $|f_1/(2N) - f_0/(2N)| < |f_1/N - f_0/N|$, with factoring that's $1/2 < 1$, so true for all $f_0, f_1, N$. Of course what calls the shots is $\cos(2\pi f/N)$, but that's monotonic over $f/N \in [0, 1/2]$, hence the inequality holds for all $f_0, f_1, N$ (applying conjugate symmetry for intervals other than $[0, 1/2]$).
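The inequality chain can be checked directly (the particular $f_0, f_1, N$ here are arbitrary picks, a sketch):

```python
import numpy as np

f0, f1, N = 3.2, 3.6, 32

# fractional separation halves when only sampling rate doubles...
sep_frac    = abs(f1/N - f0/N)
sep_frac_2N = abs(f1/(2*N) - f0/(2*N))
assert sep_frac_2N < sep_frac

# ...and since cos(2*pi*f/N) is monotonic over f/N in [0, 1/2],
# the separation that "calls the shots" shrinks too
sep_cos    = abs(np.cos(2*np.pi*f1/N)     - np.cos(2*np.pi*f0/N))
sep_cos_2N = abs(np.cos(2*np.pi*f1/(2*N)) - np.cos(2*np.pi*f0/(2*N)))
assert sep_cos_2N < sep_cos

# doubling duration instead (f -> 2f, N -> 2N) leaves the fractional
# separation unchanged
assert np.isclose(abs(2*f1/(2*N) - 2*f0/(2*N)), sep_frac)
print("ok")
```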
Proof: $N/4$-symmetry, even $N$
A symbolically-incomplete proof follows; code validates fully. The math is for modulus alone, but for non-modulus it's yet easier and follows immediately from the even/odd status of $U, V, k$.
First, write out the exact sign- and sine-status of $U$ and $V$, respectively, where $a, b, c$ are shorthands (the pattern repeats in $N$ with period $4$):
$$
\begin{alignat*}{1}
N=0&:\ &\cos&(a \Delta f) &- 1,
\ &\sin&(b \Delta f) &- \sin(c \Delta f)\\
N=1&:\ -&\sin&(a \Delta f) &- 1,
\ &\cos&(b \Delta f) &- \sin(c \Delta f) \\
N=2&:\ -&\cos&(a \Delta f) &- 1,
\ -&\sin&(b \Delta f) &- \sin(c \Delta f) \\
N=3&:\ &\sin&(a \Delta f) &- 1,
\ -&\cos&(b \Delta f) &- \sin(c \Delta f) \\
N=4&:\ &\cos&(a \Delta f) &- 1,
\ &\sin&(b \Delta f) &- \sin(c \Delta f)\quad (N = 0) \\
&...
\end{alignat*}
$$
From this, we write the symmetry statuses:
$$
\begin{alignat*}{1}
N &&& U & V
& UV &\ U^2 + V^2 &\ U^2 + V^2 - 2UV\cos\left(\frac{2\pi k}{N}\right)
\\
0:&\ & \text{even} &+ \text{even},\ \text{odd}\ &+\ \text{odd},\ \
& \text{odd}, \ \ \ & \text{even}, \ \quad & (\Delta k, \Delta f)\text{-even}
\\
1:&\ & \text{odd} &+ \text{even},\ \text{even}\ &+\ \text{odd},\ \
& \text{neither},\ \ \ & \text{neither},\ \quad & \text{neither}
\\
2:&\ & \text{even} &+ \text{even},\ \text{odd}\ &+\ \text{odd},\ \
& \text{odd}, \ \ \ & \text{even}, \ \quad & (\Delta k, \Delta f)\text{-even}
\\
3:&\ & \text{odd} &+ \text{even},\ \text{even}\ &+\ \text{odd},\ \
& \text{neither},\ \ \ & \text{neither},\ \quad & \text{neither}
\\
4:&\ & \text{even} &+ \text{even},\ \text{odd}\ &+\ \text{odd},\ \
& \text{odd}, \ \ \ & \text{even}, \ \quad & (\Delta k, \Delta f)\text{-even}
\\
&...
\end{alignat*}
$$
Lots of text, one by one:
- "$\text{even}$" and "$\text{odd}$" are with respect to $\Delta f$ - so $\text{even}$ indicates $g(-\Delta f) = g(\Delta f)$, and $\text{odd}$ indicates $-g(-\Delta f) = g(\Delta f)$, for some $g$ (cosines in $U, V$)
- $V$ for $N=4$ is $\text{odd} + \text{odd}$ ($=\text{odd}$) because, $V = \cos(4\cdot\pi/2 + 2\pi\Delta f - \pi/2 - 2\pi(\Delta f/4)) - \cos(-\pi/2 -2\pi(\Delta f/2))$, which is $V = \sin((3/2)\pi\Delta f) - \sin(-\pi\Delta f)$. For other $N$, the second cosine stays $\text{odd}$ since it's independent of $N$, but the first becomes $\text{even}$ for odd $N$ per the $N\pi /2$ term.
- Identical analysis for $U$'s cosine, and the $-1$ is constant in $\Delta f$ and independent of $N$, and a constant is $\text{even}$.
- $(\text{even} + \text{even}) = \text{even}$, $(\text{even} + \text{odd}) = \text{neither}$, $(\text{odd} + \text{odd}) = \text{odd}$
- $(\text{even})\cdot(\text{even}) = \text{even}$, $(\text{even})\cdot(\text{odd})=\text{odd}$, $(\text{odd})\cdot(\text{odd})=\text{even}$
- Squaring is the same as multiplying, hence a $V$ that's $\text{odd}$ becomes $\text{even}$, but $\text{neither}$ stays $\text{neither}$
- The $-$ and $2$ have no effect (they're equivalently $\text{even}$, by which multiplying retains status)
$(\Delta k, \Delta f)$-$\text{even}$ is the goal; it takes a bit more work to show. Begin with $\Delta k$, then join with $(\Delta k, \Delta f)$:
- Imagine sweeping $k$ from $0$ to $N/2$, except instead we increment by $\Delta k$ to left or right of $N/4$, so $k = N/4 + \Delta k$. Recall, $U, V$ are $k$-independent. For any given $U, V$, we take our offset, $U^2 + V^2$, and from it subtract $2UV\cos(2\pi k/N)$. The cosine term is odd in $\Delta k$: for $\Delta k = 0$, the cosine is zero, for $\Delta k > 0$, it grows in negatives, for $\Delta k < 0$, it grows mirrored in positives. It can be shown explicitly by plugging in $k = N/4 + \Delta k$, which gives $-\sin((2\pi/N)\Delta k)$. Hence, the full term is odd in $\Delta k$.
- Since the offset is $\text{even}$ in $\Delta f$, that part's taken care of. To show that $-2UV\cos(2\pi k/N)$, or rather $UV\cos(2\pi k/N)$, is even in $(\Delta k, \Delta f)$, is to show - letting $U = u(\Delta f)$ and $V = v(\Delta f)$ - that $u(\Delta f)v(\Delta f)\sin((2\pi/N)\Delta k) = u(-\Delta f)v(-\Delta f)\sin(-(2\pi/N)\Delta k)$. This holds because each factor's equality is true up to a minus sign: $u(\Delta f)v(\Delta f)$ is odd in $\Delta f$, and $\sin((2\pi/N)\Delta k)$ is odd in $\Delta k$. With two factors each contributing one negative, the negatives cancel, yielding equality under joint reversal - i.e. even-ness.
All together:
$$
\boxed{
U^2 + V^2 - 2UV\cos(2\pi k/N) \\
= \\
u^2(\Delta f) + v^2(\Delta f) + 2u(\Delta f)v(\Delta f)\sin((2\pi/N)\Delta k) \\
= \\
u^2(-\Delta f) + v^2(-\Delta f) + 2u(-\Delta f)v(-\Delta f)\sin(-(2\pi/N)\Delta k)
}
$$
Same can be shown replacing $(\Delta f)$ with $(\Delta f, \phi)$ everywhere above; it's a change of variables (but not a direct substitution).
Lastly, we've neglected two parts of $|X[k]|$. Its denominator: realizing it's entirely controlled by how $f$ and $k$ differ, and inspecting sweeps to left and right of $N/4$, its symmetries easily follow; it also equals $-2\sin\left(\frac{\pi}{N}(\Delta f - \Delta k)\right)\cos\left(\frac{\pi}{N}(\Delta f + \Delta k)\right)$. And, the square-root: one-to-one pointwise operators don't change symmetry status.
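The product form of the denominator is easy to verify numerically (arbitrary $\Delta f$, $\Delta k$ about $N/4$; a sketch):

```python
import numpy as np

# check: cos(2*pi*f/N) - cos(2*pi*k/N)
#        == -2*sin(pi/N*(df - dk))*cos(pi/N*(df + dk)),
# with f = N/4 + df, k = N/4 + dk
N = 32
df, dk = 1.3, -2.2                  # arbitrary offsets about N/4
f, k = N/4 + df, N/4 + dk

lhs = np.cos(2*np.pi*f/N) - np.cos(2*np.pi*k/N)
rhs = -2*np.sin(np.pi/N*(df - dk))*np.cos(np.pi/N*(df + dk))
print(np.isclose(lhs, rhs))  # True
```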
From Hermitian symmetry, identical status about $-N/4$, i.e. $N - N/4$, immediately follows.
What of (complex) $X$? It's conjugate even-symmetric, validated in code. It's likely even easier to prove - one need only consider $U, V, e^{j...}$ and the denominator - but I realized this later.
Why even $N$? Odd $N$'s cosine is only symmetric (of any kind) about $N/2$, hence none of the dependencies - $k, f, \phi$ - have symmetry.
Odd $N$, approximate $\lfloor N/2 \rfloor/2$-symmetry
The larger the $N$ the better, eventually becoming float-equal, and the convergence on the cosine is rapid: $(N, \texttt{rel_l2}) = $ $(101, p10^{-2}), (1001, p10^{-3}),$ $(10001, p10^{-4})$, where $p=3.1$ (coincidental) and $\texttt{rel_l2} = \|x_0 - x_1\| / \|x_0\|$. But, the symmetry is over $\lfloor N/2 \rfloor/2$, and the distinction from $N/4$ is significant. For $N = 10001$ and $(\Delta f, \Delta \phi) = (50.3, 0.145)$, it's $1.2 \times 10^{-6}$ for $|X|$ and $2.3 \times 10^{-6}$ for $X$, so much better than the formula's cosine by itself.
Why? The continuous cosine is symmetric just fine, sampling makes the mess.
Proof: shifting sinusoidally modulates $X$ with frequency $f$
To see exactly what's happening, we find $X_\tau[k]$ and express it in a form as close as possible to $X[k]$, where $X[k] = \texttt{DFT}\{\cos(2\pi f t + \phi)\}$ and $X_\tau[k] = \texttt{DFT}\{\cos(2\pi f (t + \tau) + \phi)\}$.
First, simplify $X_\tau$ by realizing it's simply $\texttt{DFT}\{\cos(2\pi f t + \phi_\tau)\}$, where $\phi_\tau = \phi + 2\pi f \tau$. Hence, $U, V$ are:
$$
\begin{align}
& U = \cos(2\pi f + \phi + 2\pi f\tau) - \cos(\phi + 2\pi f \tau) \\
& V = \cos(2\pi f + \phi + 2\pi f\tau - 2\pi f/N) - \cos(\phi + 2\pi f\tau - 2\pi f/N)
\end{align}
$$
$\tau$ is the only variable, so all else becomes a phase shift, and we have
$$
\begin{align}
& U = \cos(2\pi f \tau + l_0) - \cos(2\pi f \tau + l_1)\\
& V = \cos(2\pi f \tau + l_2) - \cos(2\pi f \tau + l_3)
\end{align}
$$
Sum of sines of same frequency is another sine of same frequency, hence
$$
\begin{align}
& U = a\cos(2\pi f\tau + p)\\
& V = b\cos(2\pi f \tau + q)
\end{align}
$$
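This "sum of sines of same frequency" step is just phasor addition: $\cos(\theta + l_0) - \cos(\theta + l_1) = \Re e\{(e^{jl_0} - e^{jl_1})e^{j\theta}\}$, so $a = |e^{jl_0} - e^{jl_1}|$ and $p = \arg(e^{jl_0} - e^{jl_1})$. A sketch (arbitrary $l_0, l_1$):

```python
import numpy as np

l0, l1 = 0.7, 2.1                  # arbitrary phase offsets
z = np.exp(1j*l0) - np.exp(1j*l1)  # phasor of the difference
a, p = np.abs(z), np.angle(z)

theta = np.linspace(0, 2*np.pi, 101)
lhs = np.cos(theta + l0) - np.cos(theta + l1)
rhs = a*np.cos(theta + p)
print(np.allclose(lhs, rhs))  # True
```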
We seek to simplify $Ue^{j2\pi k/N} - V$. That's
$$
\begin{align}
a\cos(2\pi f\tau + p)s[k] - b\cos(2\pi f\tau + q)
\end{align}
$$
where $s[k] = s_\text{re}[k] + j s_\text{im}[k]$ is shorthand for the cisoid $e^{j2\pi k/N}$. Realizing $V$ is real-valued, we thus have
$$
\begin{align}
\Re e\{X_\tau[k]\} &= \frac{1}{2}K[k]
\bigg(a\cos(2\pi f \tau + p) s_\text{re}[k] -
b\cos(2\pi f\tau + q) \bigg) \\
\Im m\{X_\tau[k]\} &= \frac{1}{2}K[k]
a\cos(2\pi f \tau + p) s_\text{im}[k]
\end{align}
$$
"Modulation" is multiplication, so we seek ratios: take a shift $\tau$ and divide by the $\tau=0$ case:
$$
\begin{align}
\frac{\Re e\{X_\tau[k]\}}{\Re e\{X_0[k]\}}
&= F[k] \bigg(a\cos(2\pi f \tau + p)s_\text{re}[k]
- b\cos(2\pi f\tau + q)\bigg) \\
\frac{\Im m\{X_\tau[k]\}}{\Im m\{X_0[k]\}} &=
G\cos(2\pi f\tau + p) \\
\end{align}
$$
which works wonders for the imaginary part, but not real.
A simplification for the real part does arise if we're tracking just one $k = k_0$: this combines the sines and produces a sine whose amplitude and phase are $k_0$-dependent. We simplify notation by replacing $X_0$ with $X$, since $X$ already refers to the general $x(t) = \cos(2\pi f t + \phi)$, and avoid referring to any previous placeholders, tracking only whether they're $k_0$-dependent. Lastly, to express the result in samples (recall how $t$ is defined), we use $\tau/N$ instead of $\tau$ in all equations above, which remains a simple substitution in the end result. All together:
$$
\boxed{
\begin{align}
\Re e\{X_\tau[k_0]\} &=
\Re e\{X[k_0]\} \cdot
L_0 \cos(2\pi f \tau/N + p_0), \\
\Im m\{X_\tau[k]\} &=
\Im m\{X[k]\} \cdot L_1 \cos(2\pi f\tau/N + p_1), \\
(L_0, &\ p_0) = \texttt{f}\{f, N, \phi, k_0\},\
(L_1, p_1) = \texttt{g}\{f, N, \phi\} \\
\end{align}
}
$$
where $\texttt{f}, \texttt{g}$ are symbolic shorthands for functions that return constants, with constants' respective dependencies passed as arguments. If desired, closed form expressions for $L_0, L_1, p_0, p_1$ aren't too hard to find, but here our goal was just showing the effect of shifting by $\tau$ on the spectrum, which we've accomplished.
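A numeric sketch of the boxed result for the imaginary part (arbitrary non-nice parameters; $k = 0$ and $k = N/2$ are excluded since $\Im m\{X[k]\} = 0$ there):

```python
import numpy as np

N, f, phi = 32, 3.7, 0.6
n = np.arange(N)

def X_tau(tau):
    # DFT of the tau-shifted sampled sinusoid (tau in samples)
    return np.fft.fft(np.cos(2*np.pi*f*(n + tau)/N + phi))

tau = 2.3
k = np.arange(1, N//2)           # skip k = 0 and k = N/2
r = X_tau(tau).imag[k] / X_tau(0).imag[k]

# the ratio is the same for every k: the imaginary-part modulation is
# bin-independent, L1*cos(2*pi*f*tau/N + p1) over L1*cos(p1)
print(np.allclose(r, r[0]))      # True

# and shifting by a half period of f flips the sign of every bin
half = N/(2*f)
print(np.allclose(X_tau(tau + half), -X_tau(tau)))  # True
```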
Interpretation summary
As a sine is being shifted in time by $\tau$,
- All imaginary bins are modulated by a sine of frequency $f$ as a function of $\tau$, with modulation amplitude and offset that depend on the sine.
- A given real bin is modulated by a sine of frequency $f$ as a function of $\tau$, with modulation amplitude and offset that depend on the sine and the bin.
where "offset" = fixed phase, and "the sine" = original, unshifted $x(t)$.
Example
I took non-nice $f, N, \phi, \tau$ and produced the following:
Red = positive, blue = negative, white = zero, and each plot is color-normed to its own maximum, with |max red| = |max blue|. The $x(t - \tau)[n]$ is to emphasize that it's a sampling of a shift, not a shift of sampling. Confirming the modulation's frequency matches input's:
where the overlap in the right plot is imperfect because it's a circular shift of an arbitrarily long segment. The same is confirmed for $\Re e$.
Without writing a whole bunch of stuff, let's just say that the odds of these plots being a happy coincidence are zero. The real part may look like it has tilted white lines, hence not pure sine along $\tau$, but it's an optical illusion that's revealed by plotting the rows/columns individually in 1D. I don't validate further elsewhere, but code to reproduce the plots is included in "Code validation".
Proof: shifting sinusoidally modulates $|X|^2$ with frequency $2f$
Pursuing $|X_\tau|/|X|$ here would be a mistake (explained later); instead, we go straight for $|X_\tau|$. Following the previous proof, we have
$$
\begin{align}
\Re e\{X_\tau[k]\}^2 &=
\Re e\{X[k]\}^2 \cdot
L_0^2 \cos^2(2\pi f \tau/N + p_0), \\
\Im m\{X_\tau[k]\}^2 &=
\Im m\{X[k]\}^2 \cdot L_1^2 \cos^2(2\pi f\tau/N + p_1)
\end{align}
$$
and of course
$$
|X_\tau[k]|^2 = \Re e\{X_\tau[k]\}^2 + \Im m\{X_\tau[k]\}^2
$$
Since we give up on $k$-independence (explained later), we get rid of $\Re e\{X[k]\}^2$ by merging it with $L_0^2$, and likewise $\Im m\{X[k]\}^2$ with $L_1^2$, yielding
$$
D_0^2\cos^2(2\pi f\tau/N + p_0) + D_1^2\cos^2(2\pi f\tau/N + p_1),\ \tag{x} \\
D_0 = \Re e\{X[k_0]\}L_0,\ D_1 = \Im m\{X[k_0]\}L_1
$$
which expands to
$$
\frac{1}{2}(D_0^2\cos(4\pi f\tau/N + 2p_0) + D_1^2\cos(4\pi f\tau/N + 2p_1) + D_0^2 + D_1^2) \tag{y}
$$
which, the two cosines sharing a frequency, combines into
$$
D\cos(4\pi f\tau/N + p') + E
$$
which simplifies to, in finalized form,
$$
\boxed{
|X_\tau[k_0]|^2 = A\cos(2\pi (2f) \tau/N + p) + B,\\
A, B, p = \texttt{f}\{f, N, \phi, k_0\},\ |A| \leq B
}
$$
Again, the utility is in tracking individual bins - we don't care what $A, B$ are, as long as they're not $\tau$-dependent. (Note, $\texttt{f}$ here differs from earlier.)
The $|A| \leq B$ is forced from $(\text{x})$, and experimentally both $|A| \ll B$ and $|A| \approx B$ are possible. It's forced since $(\text{x}) \rightarrow (\text{y})$ is conditioned upon it: $(\text{x})$ is only a sum of squared sinusoids, which is another squared sinusoid with offset - not a sine modulus (no "wrapping").
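A numeric sketch of the $2f$ modulation: shifting $\tau$ by a half period of the $2f$ sinusoid, $\tau \rightarrow \tau + N/(4f)$, negates the cosine term, so half-period pairs of $|X_\tau[k_0]|^2$ sum to the constant $2B$ (arbitrary parameters):

```python
import numpy as np

N, f, phi, k0 = 32, 3.7, 0.6, 4
n = np.arange(N)

def mag2(tau):
    # |X_tau[k0]|^2 for the tau-shifted sampled sinusoid
    X = np.fft.fft(np.cos(2*np.pi*f*(n + tau)/N + phi))
    return abs(X[k0])**2

# |X_tau[k0]|^2 = A*cos(2*pi*(2*f)*tau/N + p) + B, so a half-period shift
# of the 2f sinusoid negates the cosine: pairs sum to 2B regardless of tau
half = N/(4*f)
sums = [mag2(t) + mag2(t + half) for t in (0.0, 0.7, 1.9, 3.3)]
print(np.allclose(sums, sums[0]))  # True
```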
Why not derive for $|X_\tau| / |X|$?
I've tried, and the result doesn't show sinusoidal dependence on $\tau$ (though it still has period $2\pi/(2f)$), while experimentally it is sinusoidal. The expression also includes the reference's bins, $|X[k]|$. Combined, these suggest we've compressed too much into $L_0, L_1, p_0, p_1$ that would otherwise allow cancellation and obtaining sinusoidal behavior. We could've also originally taken the path of deriving just for $X_\tau$, but we'd be unable to show any $k$-independent behavior (which we have for the imaginary part). Since we've already established the real part isn't $k$-independent, it of course follows for modulus.
Proof: shifting modulates energy ratios with period $2\pi / (2f)$ (sometimes near-sinusoidally)
Leakage can be measured this way. Define "energy ratio" as
$$
\texttt{ER}_I\{X_\tau\} =
\frac{ \sum_{k\in I}|X_\tau[k]|^2 }
{ \sum_{k\notin I}|X_\tau[k]|^2 }
$$
where $I$ is an interval or set of indexes, e.g. $[0, 1, 2, 5, 8]$. So, we're dividing energy of $X_\tau$ over some indices, by energy of $X_\tau$ over all other indices (non-overlapping).
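A minimal implementation of this definition (a sketch; `I` as an index list, peak-covering bins picked by hand):

```python
import numpy as np

def energy_ratio(X, I):
    """ER_I{X}: energy over bins in I divided by energy over all other bins."""
    mask = np.zeros(len(X), dtype=bool)
    mask[list(I)] = True
    return np.sum(np.abs(X[mask])**2) / np.sum(np.abs(X[~mask])**2)

# a sine's energy concentrates near its (conjugate pair of) peaks
N, f = 64, 7.3
X = np.fft.fft(np.cos(2*np.pi*f*np.arange(N)/N + 0.4))
I = list(range(5, 11)) + list(range(N - 10, N - 4))  # bins near both peaks
er = energy_ratio(X, I)
print(er > 1)                                        # True

# complementary index sets give reciprocal ratios
Ic = [k for k in range(N) if k not in I]
print(np.isclose(er * energy_ratio(X, Ic), 1))       # True
```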
Following the previous proof, we write out the sums for a dummy case, $N=4$ with $I = \{0, 3\}$:
$$
\texttt{ER}_I\{X_\tau\} =
\frac{ A[0]\cos(4\pi f\tau/N + p[0]) + A[3]\cos(4\pi f\tau/N + p[3]) + B[0] + B[3]}
{ A[1]\cos(4\pi f\tau/N + p[1]) + A[2]\cos(4\pi f\tau/N + p[2]) + B[1] + B[2]}
$$
which, by the same logic as always, and for any $N, I$, reduces to
$$
\boxed{
\begin{align}
& \texttt{ER}_I\{X_\tau\} =
\frac{C_0\cos(2\pi(2f)\tau/N + q_0) + D_0}
{C_1\cos(2\pi(2f)\tau/N + q_1) + D_1} \\
& C_0, C_1, D_0, D_1, q_0, q_1 = \texttt{f}\{f, N, \phi, I\} \\
& |C_0| \leq D_0,\ |C_1| \leq D_1
\end{align}
}
$$
where the $C, D$ constraints follow by necessity ($\texttt{ER}$, numerator, and denominator are all $\geq 0$; also see previous proof). $\texttt{ER}$ is the most sinusoidal if $|C_1| \ll D_1$, and due to the constraints, it is the only criterion: sinusoid-ness is measured by $|D_1 / C_1|$ (explained below).
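That the numerator (and likewise the denominator) is a raised sinusoid in $\tau$ of frequency $2f$ can be sketched numerically via half-period pair sums, as before:

```python
import numpy as np

N, f, phi = 64, 7.3, 0.4
n = np.arange(N)
I = np.array(list(range(5, 11)) + list(range(N - 10, N - 4)))

def num_energy(tau):
    # ER numerator: energy over bins in I, for the tau-shifted sine
    X = np.fft.fft(np.cos(2*np.pi*f*(n + tau)/N + phi))
    return np.sum(np.abs(X[I])**2)

# C0*cos(2*pi*(2*f)*tau/N + q0) + D0: pairs a half period apart sum to 2*D0
half = N/(4*f)
sums = [num_energy(t) + num_energy(t + half) for t in (0.0, 0.5, 1.3, 2.8)]
print(np.allclose(sums, sums[0]))  # True
```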
Sinusoidness criteria: Suppose $|C_1| \ll |D_1|$, and use $|D|$ for reading ease; then:
- $|C_0| \approx |D_0|$: numerator is very sinusoidal, denominator is very constant: ratio is very sinusoidal
- $|C_0| = |D_0| / 2$: denominator unaffected (still very constant)
- $|C_0| \ll |D_0|$: numerator and denominator are very constant. Breaking up the fraction into its two numerator terms, we see the left (the one with $C_0$) vanishes, and the right is a big constant divided by a sine with a big DC offset. The result is very sinusoidal - and I'm unsure why.

If $|C_1| \approx |D_1|$, we end up with a $U, V$ situation: the denominator dominates, and no matter what happens with $|C_0|, |D_0|$, the ratio's never sinusoidal. The ratio tends to $\sec(x)$.
Example
Left vs right: same $f, N$, different $I$:
- Top row: $\texttt{ER}$ (blue) -- mean of sine (orange) -- sine, trimmed (green) -- each is zoomed ideally onto itself, so not drawn to scale relative to each other
- Middle row: $\texttt{ER}$ numerator (blue) -- $\texttt{ER}$ denominator (orange) -- y-limits are zero to max, but different for left and right
- Bottom row: $\texttt{ER}$ denominator -- y-limits are same for left and right
We see that an individual bin modulates at the same frequency as the sine, and that $\texttt{ER}$ always has double the frequency but is sinusoidal only if the denominator sine's minimum is close to zero. Note, these aren't formula-generated; they're results from an actual sine and its fft:
In general, there need not be much "meaning" attached, but in this case there is: left's interval is a superset of right's (fully includes it), is x10 larger, and is centered around the DFT's peak. This is measuring leakage. Yet, from what we know so far, we could not have predicted this result - we know left requires $|C_1| \approx |D_1|$, i.e. $|X|$ is strongly modulated away from its peak as a function of $\tau$. This would occur if the real and imaginary bins are modulated about equally strongly - yet the real bin's modulation is $k$-dependent, which we've not explored.
Though we've not explored it, we do have the earlier heatmaps: there, each heatmap's color maxima are calibrated with respect to their own inputs - reproduced here for convenience (left shows $\Re e\{X_\tau\} / \Re e\{X\}$, right for imaginary part, y-axis is $k$, x-axis is $\tau$):
However, if we set color min/maxima to be same...
away from peak, they indeed become ~same. Conversely, if we confine $I$ to be close to peak, the $\texttt{ER}$ becomes very sinusoidal, since now the denominator isn't strongly sinusoidally modulated per differing $\Re e$ vs $\Im m$ modulations.
$U, V$ sweeps
- Red = positive, blue = negative, white = zero
- $f$ is never integer (in linearly sweeping, if it were integer, I did `f *= 1.0001`)
- All sweeps are linear; I tried log, I didn't find something interesting
- Plots don't interpolate and the first section is heavily aliased (but not second), but I found that interpolation interpolates incorrectly. The second section is more pixelated because the sampling grid is smaller to avoid aliasing (this time it's bad aliasing)
Fixed $N$, varying $f, \phi$:
Fixed $\phi$, varying $f, N$:
Heavy $\phi$-dependence near DC/Nyquist; time-domain perspective
A low-frequency sine, over a finite segment, if shifted, changes significantly. Imagine 1/8 of period of a sine - depending where exactly we're looking, it could be all near $0$, all near $1$, or $-1$. On the other extreme, imagine Nyquist - $[1, -1, 1, -1, ...]$ - shifting does nothing. For general high frequencies, shifting likewise doesn't much affect "how much sine" there is (energy).
How many periods (or cycles) "we're looking at", if the sine is plotted, is exactly what determines DFT's $f$, and not $f/N$. Low $f$'s effect is entirely due to duration deficit - high $f$'s effect is entirely due to sampling rate deficit.
High frequencies are affected, if near Nyquist. Nyquist and DC themselves are excluded (easily confirmed for DC). Plotting said frequencies reveals why - below. As to why the plot is this way - with real consequences for local energy - it's covered in detail here. Off-topic: this can validly be interpreted as "amplitude modulation"; in short, if assuming the bandlimited case and sticking with Fourier definitions of "bandlimited" and "frequency", then not - and "not" is generally best.
Addendum: Original version vs Modified
$$
\boxed{
\texttt{DFT}\{\cos(2\pi f t + \phi)\}_{f \notin \mathbb{Z}}[k] =
\frac{1}{2}\frac{1}{{\cos(2\pi f/N) - \cos(2\pi k/N)}}
\left[ \\
\ \ (\cos(2\pi f + \phi) - \cos(\phi)) e^{j 2\pi k/N} -
\left(
\cos(2\pi f + \phi - 2\pi f/N) - \cos(\phi - 2\pi f/N)
\right)
\right]
\\
}
$$
is what $U, V$ expand to. I showed the other version on top since this one looks a lot more complicated.
Though longer, this form is superior for analysis: each of $U, V$ can be understood nicely, and once so, it's analytically irreducible: $Ue^{j2\pi k/N} - V$.
The previous paragraph is what the reader should believe, temporarily. In reality, I found the other version only after I already finished all the work in this post and ones I reference, and I lack time to amend. I did, however, compare the two forms: in short, for analyzing spectrum behavior, the modified variant is mostly superior - for parameter recovery and other purposes, I can't speak there, but Cedron has used OV.
Let "OV" = original version, "MV" = modified version:
- Despite the $UV$ formulation being much longer when expanded, there's only one numerator. The product with $\sin(\pi f)$ is the "second numerator"; once something's proven/understood for the numerator in $UV$ form, it's proven/understood period - not so with MV.
- MV has its own version of $UV$ we can write: $\sin(\pi f)[U'e^{j2\pi k/N} - V']$ - and the $U'$ and $V'$ are simpler.
- The $U'V'$ formulation is superior for $\phi$ analysis: $UV$ has four sines with $\phi$, $U'V'$ two.
- $UV$ or not, the sheer smaller number of things to track is bound to win sometimes (in MV's favor).
Revisiting all my work throughout the different posts, the MV would've made life easier in many places. Without saying much further, I don't rule it fully in favor of MV either.
Note, the second solution also has a different denominator, but I kept OV's denominator in MV: here I'm confident OV's is superior - it's a single $k$-dependent term.
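For completeness, the boxed OV expansion can be validated directly against an FFT (a sketch; non-integer $f$, all $k$ at once):

```python
import numpy as np

N, f, phi = 16, 3.7, 1.1      # f non-integer, else the denominator hits zero
n, k = np.arange(N), np.arange(N)

U = np.cos(2*np.pi*f + phi) - np.cos(phi)
V = np.cos(2*np.pi*f + phi - 2*np.pi*f/N) - np.cos(phi - 2*np.pi*f/N)
K = 1 / (np.cos(2*np.pi*f/N) - np.cos(2*np.pi*k/N))
X_formula = 0.5 * K * (U*np.exp(2j*np.pi*k/N) - V)

X_fft = np.fft.fft(np.cos(2*np.pi*f*n/N + phi))
print(np.allclose(X_formula, X_fft))  # True
```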
Simulations used
Messy, not meant for others, but if anyone's interested:
Citation
Same as in the other answer, except the URL; if the choice is arbitrary (check your instructions), I prefer the other answer linked, even if only this one contains what's cited.
Code validation
Linked in the other answer.