$\newcommand{\ak}[1]{\langle #1 \rangle}$I've noticed that for linear operators $T$ and an inner product $\ak{\bullet, \bullet}$, the expression $\ak{Tv,v}$ tends to show up a lot. For instance, it shows up in the min-max theorem, where it describes all the eigenvalues of a sufficiently nice operator. Similarly, the expression $\ak{Tx,y}$ (and its dual) appears in the very definition of self-adjointness (which arises by combining the Banach space definition of the dual operator with the Riesz representation theorem for Hilbert spaces).
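For concreteness, in the Hermitian matrix case (which I believe extends to compact self-adjoint operators with suitable modifications), the Courant-Fischer min-max theorem says that the eigenvalues $\lambda_1 \ge \dots \ge \lambda_n$ of $A$ satisfy
$$\lambda_k = \max_{\dim V = k}\; \min_{\substack{v\in V \\ \|v\|=1}} \ak{Av,v},$$
so every eigenvalue is an extremal value of $v\mapsto \ak{Av,v}$ over unit spheres of subspaces.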
I know that in the finite-dimensional case they correspond to quadratic forms $x^\top A x$, which appear in an absurd number of scenarios (see Why Study Quadratic Forms?): in particular optimization (as the second-order term of the Taylor expansion), all the nice properties of definite matrices, and number theory (starting from the very beginning of the subject, essentially motivating the entire field of algebraic number theory, and continuing to be very well regarded in the 21st century and beyond).
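To spell out the optimization example (a standard fact, stated here for a $C^2$ function $f:\Bbb R^n\to\Bbb R$): the Taylor expansion
$$f(x+h) = f(x) + \nabla f(x)^\top h + \tfrac12\, h^\top \nabla^2 f(x)\, h + o(\|h\|^2)$$
shows that at a critical point the local behavior of $f$ is governed entirely by the quadratic form $h \mapsto h^\top \nabla^2 f(x)\, h = \ak{\nabla^2 f(x)\, h,\, h}$, whence the second-derivative test.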
Even in "abstract harmonic analysis", we have such expressions popping up: for instance the fact that the "matrix coefficients" $\phi(x):= \ak{\pi(x)u,u}$ for some unitary representation of a locally compact Hausdorff group $G$ are EXACTLY the functions of positive type on $G$ (Prop. 3.15 in Folland's Abstract Harmonic Analysis) plays a crucial role in (Folland's proof of) the Gelfand-Raikov theorem.
In the abstract harmonic analysis case, one can (slightly) motivate this by saying that $\pi(x)$ is a complicated object, namely a unitary transformation of a Hilbert space; evaluating at some $u\in \mathcal H$ produces a vector $\pi(x)u$, which is less complicated, but still not as simple as a complex number $\ak{\pi(x)u,v}$. Moreover, "comparing" $\pi(x)u$ against all vectors doesn't lose any information: if one can understand $\ak{\pi(x)u,v}$ for all $u,v\in \mathcal H$, one can understand $\pi(x)$ (this is essentially the philosophy of the weak integral, which Folland uses in $\S3.2$ to extend $\pi$ to a map $L^1(G) \to \mathcal B(\mathcal H)$, $f\mapsto \pi(f)$). Finally, by some sort of polarization identity, one can recover $\ak{\pi(x)u,v}$ from the diagonal values $\ak{\pi(x)w,w}$, so it suffices to understand those.
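Explicitly (in the complex case, with the convention that the inner product is linear in the first slot), the polarization identity for a bounded operator $T$ reads
$$\ak{Tu,v} = \frac{1}{4} \sum_{k=0}^{3} i^k \,\ak{T(u + i^k v),\; u + i^k v},$$
so the diagonal values $\ak{Tw,w}$ determine all of $\ak{Tu,v}$, and hence $T$ itself.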
I can accept this motivation, but I can't accept the miraculous fact that such inner products $\ak{Tv,v}$ behave so nicely. E.g. in the above paragraph they produce functions of positive type, which are closely related (Prop. 3.35 in Folland) to positive definite matrices (which are themselves defined via such inner products: $A$ is positive definite iff $\ak{Av,v}>0$ for all $v\neq 0$); and as mentioned above they lead to amazing formulas for eigenvalues, nice theorems in optimization, and deep connections to algebraic number theory. Why should this vague notion of "comparing a vector to its transformed self" result in such a profound variety of interesting mathematics?
A "perfect answer" could lie along the lines of someone telling a somewhat cohesive and comprehensive general story about why and when these inner products $\ak{Tv,v}$ appear, to the extent that the special case of the consideration functions $\phi_u(x):=\ak{\pi(x)u,u}$ and their (1-1) connection to positive type functions/"positive definite functions" is no longer a miraculous leap of thought, but is instead met with "of course that's what one'd do!".
EDIT 4/30/23: these expressions also have a physical interpretation in quantum mechanics (see https://physics.stackexchange.com/questions/146005/why-is-the-measured-value-of-some-observable-a-always-an-eigenvalue-of-the-co): $\ak{A\psi,\psi}$ is the "expectation of an observable $A$ under the state $\psi$".
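To unpack that (in the simplest setting, where the observable $A$ is self-adjoint with an orthonormal eigenbasis $(e_n)$, $Ae_n = \lambda_n e_n$, and $\|\psi\| = 1$): expanding $\psi = \sum_n \ak{\psi, e_n}\, e_n$ gives
$$\ak{A\psi, \psi} = \sum_n \lambda_n\, |\ak{\psi, e_n}|^2,$$
which is exactly the expected value of the measured eigenvalue, since $|\ak{\psi,e_n}|^2$ is the probability of observing $\lambda_n$ (the Born rule) and these probabilities sum to $1$.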