Please verify my understanding that pitch detection precision is dynamic across a pitch range, in the sense that relatively higher pitches can be measured to increasingly higher precision, and vice versa. Based on this, I would like to conclude that describing pitch detection precision as ± x cents is a technical simplification.
Given the logarithmic nature of pitch, the linear distance (in Hz) between two adjacent semitones increases as you go up in pitch and decreases as you go down. In other words, the linear resolution between notes is dynamic.
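To make that concrete, here is a quick sketch (assuming 12-tone equal temperament with A4 = 440 Hz; nothing hardware-specific) of the Hz gap to the next semitone at a few octaves:

```python
# Hz gap between adjacent semitones in 12-TET, assuming A4 = 440 Hz.
A4 = 440.0

def semitone_gap_hz(freq_hz):
    """Linear distance in Hz from a note to the note one semitone above it."""
    return freq_hz * (2 ** (1 / 12) - 1)

for name, freq in [("A2", A4 / 4), ("A4", A4), ("A6", A4 * 4)]:
    print(f"{name} ({freq:.2f} Hz): next semitone is {semitone_gap_hz(freq):.2f} Hz away")

# A2 (110.00 Hz): next semitone is 6.54 Hz away
# A4 (440.00 Hz): next semitone is 26.16 Hz away
# A6 (1760.00 Hz): next semitone is 104.66 Hz away
```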
The logarithmic unit of the cent divides the semitone into one hundred smaller pieces that are logarithmically equally spaced, but again each cent spans a different number of Hz depending on the actual pitch.
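Writing out what I mean (this is just the standard cent definition, no particular hardware assumed), the distance in cents between two frequencies, and the Hz width of a single cent at frequency f, are:

$$
c = 1200 \log_2\!\left(\frac{f_2}{f_1}\right), \qquad
\Delta f_{1\,\text{cent}}(f) = f\left(2^{1/1200} - 1\right) \approx 0.000578\, f
$$

So one cent is only about 0.064 Hz wide at A2 (110 Hz), but about 1.02 Hz wide at A6 (1760 Hz).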
The linear span of one cent at lower pitches becomes smaller and is therefore, for a given frequency resolution in Hz, relatively harder to measure; likewise, one cent at higher pitches spans more Hz and is easier to resolve. As a rough visualization, it's like watching your lowest octave at 144p video quality and seeing it progress up the octaves to 4K.
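To put a number on that "dynamic precision" idea, here is a sketch of how a fixed frequency error would map to cents at different pitches. The ±0.5 Hz figure is a made-up frequency resolution for illustration, not a real device spec:

```python
import math

# Sketch: how a fixed frequency error maps to cents at different pitches.
# The 0.5 Hz value is an assumed, hypothetical resolution, not a real spec.
FREQ_ERROR_HZ = 0.5

def hz_error_to_cents(freq_hz, err_hz):
    """Cent offset corresponding to an error of err_hz at a pitch of freq_hz."""
    return 1200 * math.log2((freq_hz + err_hz) / freq_hz)

for name, freq in [("A1", 55.0), ("A3", 220.0), ("A5", 880.0)]:
    cents = hz_error_to_cents(freq, FREQ_ERROR_HZ)
    print(f"{name} ({freq:.0f} Hz): +/-{FREQ_ERROR_HZ} Hz is roughly +/-{cents:.1f} cents")

# A1 (55 Hz): +/-0.5 Hz is roughly +/-15.7 cents
# A3 (220 Hz): +/-0.5 Hz is roughly +/-3.9 cents
# A5 (880 Hz): +/-0.5 Hz is roughly +/-1.0 cents
```

The same absolute Hz error is worth an order of magnitude more cents at the bottom of the range than at the top, which is what I mean by the precision being dynamic.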
As the range of pitches the hardware has to measure widens, so does this difference in precision across the range.
Is my understanding correct that hardware pitch detection precision is dynamic according to the pitch being detected, and that stating precision as ± x cents is a simplification?