As an amateur astronomer for 60 years, always as interested in the science as in the observing, I thought I had a pretty good understanding of the basics. But I'm distance-auditing a third-year course in the fundamentals of applied astronomy, and I found that I had a misconception about the color index.
I had assumed the CI was derived by measuring the star's own magnitude in each of, say, the B and V passbands, then subtracting (i.e., taking the ratio of the flux intensities and expressing it as a magnitude).
Not so! It's derived by measuring the star's magnitude in each band relative to a standard zero point (the magnitude of Vega at that wavelength, defined to be 0), then subtracting to get the CI.
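If I've got the definitions right, the two recipes differ only by a constant. Writing it out with my own notation here ($f_B$, $f_V$ for the star's fluxes through the two filters, and $f_{B,\mathrm{Vega}}$, $f_{V,\mathrm{Vega}}$ for Vega's):

$$
B - V = -2.5\log_{10}\frac{f_B}{f_{B,\mathrm{Vega}}} + 2.5\log_{10}\frac{f_V}{f_{V,\mathrm{Vega}}}
      = -2.5\log_{10}\frac{f_B}{f_V} + 2.5\log_{10}\frac{f_{B,\mathrm{Vega}}}{f_{V,\mathrm{Vega}}},
$$

where the last term is just a constant offset fixed by the Vega zero points.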
My question is: Why measure two fluxes relative to a standard and then compare those two values, rather than simply measuring them relative to each other? Why not just measure the flux at each wavelength, take the ratio, express this as a magnitude, and use that as the CI? The relationships are the same either way, just with a different zero point (think 0 K and −273 °C). It seems like finding how far apart Cleveland and Chicago are by measuring the distance of each from NYC and subtracting, rather than just measuring the distance between them. I don't see that the convention accomplishes anything other than assigning a CI of 0 to "neutral" white stars. Or am I missing something?