The maximum of $x[n]$, where $X[k] = \texttt{DFT}\{x\}$, is
$$
\texttt{max}\{x\} =
\frac{1}{N}\underset{s\in[0,\, N-1]}{\texttt{max}}\left\{
\sum_{k=0}^{N-1} X[k] \cdot e^{-j2\pi s k/N}
\right\} \tag{1}
$$
or in code-like syntax,
$$
\texttt{max}\{x\} = \frac{1}{N}\texttt{max}\big\{
\big[\texttt{sum}\{\texttt{FFT}\{x\} \cdot e^{-j2\pi s [0:N-1] / N}\}
\ \text{for}\ s=[0:N-1]\big]\big\} \tag{2}
$$
The DC bin of the DFT is $X[0] = \texttt{sum}\{x\}$. By duality, $x[0]$ is the DC bin of the DFT's inverse: $x[0] = \texttt{sum}\{X\} / N$, for any $x$, where the factor $1/N$ is the only difference between the two domains. Hence, if we circularly shift $\texttt{max}\{x\}$ to index 0, it's retrievable by summing $X$. We don't know where the max is ahead of time, so we try all $N$ possible shifts $s$, and by definition the max of $x$ is the max over all the resulting sums (the sum at shift $-s$, divided by $N$, equals $x[s]$).
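As a quick numeric check of the duality (a minimal sketch using NumPy's `fft`):

```python
import numpy as np

x = np.array([3., 1., 4., 1.])
X = np.fft.fft(x)

# DC bin of the DFT is the sum of the signal
assert np.isclose(X[0], x.sum())
# dual statement: x[0] is the (1/N-scaled) sum of the spectrum
assert np.isclose(x[0], X.sum() / len(x))
```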
$\texttt{min}\{x\}$ is supported as well, as are real- and complex-valued inputs. In fact, any $\texttt{func}\{x\}$ is supported, since we're quite literally retrieving $x$.
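That retrieval can be seen directly: the sum at shift $s$, divided by $N$, equals $x[(-s) \bmod N]$, so the vector of shift-sums is just a permutation of $x$. A sketch, assuming NumPy:

```python
import numpy as np

N = 8
x = np.random.randn(N) + 1j*np.random.randn(N)
X = np.fft.fft(x)

n = np.arange(N)
# sum of the spectrum shifted by s, for every s in [0, N-1]
shifts = np.array([(X * np.exp(-2j*np.pi*s*n/N)).sum() for s in range(N)])

# the shift-sum at s, over N, recovers x at index (-s) mod N
assert np.allclose(shifts / N, x[(-n) % N])
```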
This is a very slow algorithm, and I don't know whether it can be reduced to a satisfactory speed. Still, if the goal is to fetch the maximum while working purely in the frequency domain, or to understand the time-frequency relationship, this is one way to do it. I found it useful for proving that "time aliasing" cannot cause peaks in convolution.
Code validation:

```python
import numpy as np
from numpy.fft import fft

for func in (max, min, lambda x: max(abs(x))):
    # N == 1 also works, but requires extra special-casing
    for N in range(2, 129):
        x = np.random.randn(N) + 1j*np.random.randn(N)
        xf = fft(x)
        exparg = -1j*2*np.pi*np.arange(N)/N
        # sum of the shifted spectrum, for every shift s
        shifts = np.array([sum(xf * np.exp(exparg * s)) for s in range(N)])
        out = func(shifts) / N
        assert np.allclose(out, func(x))
```