
What is the relationship between light intensity (I) and the 8-bit value of my black-and-white render output?

The background is that I want to simulate a sensor that sends a signal every time the intensity at a point (pixel) changes by a certain amount. The sensor should be invariant to lighting conditions, so the real sensor measures photocurrent on a log-linear scale.

After reading this and this, I think the sRGB scale is corrected in a similar way.

But is the (8-bit) grayscale value (for example, rendered as a PNG file) then corrected in the same way? So if I want to check for log-linear changes in intensity, can I just compare the integer values?

I think yes, but I would really appreciate a second opinion.

user3688217

1 Answer


Light is remarkably predictable and, as with most energies, strictly a linear phenomenon.

If we have a pixel of emissive energy, and we use photographic terms, our ability to perform math is ridiculously simple. To increase the amount of light by double, or in photographic terms a stop, we simply double the value. Want to go down a stop of intensity or halve the quantity? Easy, halve the value.

Our visual system, on the other hand, is strictly nonlinear. We sense the visual stimuli and uniquely bend the values to meet the needs of our perceptual system. In particular, we bend a very specific range such that we can detect gradations acutely, while sacrificing the darker and lighter regions our iris has dialed into view.

Given that our devices have significantly lower dynamic range than any physical scene, we cannot simply pass radiometrically linear ratios of light to our display / output referred contexts. If we did so, the values would appear vastly too dark, even under darker viewing conditions.

To compensate for this complex phenomenon, the values need to be bent away from the radiometrically linear ratios so that the relative emission is roughly what our eyes would expect to perceive.

Most imaging formats bake this bent version, also known as a transfer curve or tone response curve, into the data itself, and the curve varies by color space. Others, with unique design constraints, may or may not, depending on file tags and metadata. EXR is one such example that specifically mandates a linear encoding for the data within.
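As a sketch of what this "baked-in" curve looks like, here is the two-part sRGB transfer function (per IEC 61966-2-1) that maps a display-linear value to the nonlinear value stored in the file. The function name is illustrative:

```python
def srgb_encode(linear):
    """Encode a display-linear value (0..1) with the piecewise
    sRGB transfer function (IEC 61966-2-1)."""
    if linear <= 0.0031308:
        # linear segment near black
        return 12.92 * linear
    # power segment for the rest of the range
    return 1.055 * linear ** (1.0 / 2.4) - 0.055
```

Note that a display-linear 0.5 encodes to roughly 0.735, which is why naively reading stored values as intensity ratios goes wrong.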

Finally, the issue of black and white, or grayscale, adds another layer of complexity. Such a representation is typically a weighted luminance obtained from the RGB triplet. Every color space within an RGB color encoding scheme has uniquely colored primary lights. These colors can be mapped to an absolute color model known as XYZ. The Y axis in this model is relative luminance. To obtain a grayscale representation of any RGB color space, one weights each channel's value by the Y component of its primary and sums the results.
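As a minimal sketch, assuming BT.709 / sRGB primaries, the Y weights of the three primaries are 0.2126, 0.7152, and 0.0722, and the weighted sum over *linear* channel values gives relative luminance:

```python
def luminance(r, g, b):
    """Relative luminance Y from linear RGB, assuming BT.709 / sRGB
    primaries (the Y row of the RGB-to-XYZ matrix)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```

The weights sum to 1.0, so a neutral gray maps to itself; other RGB color spaces have different primaries and therefore different weights.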

TL;DR No, you cannot rely on the data values in any given format to be representations of radiometric ratios. It varies by format and, within a format, by color space. Further, even if you invert the two-part formula for something like an sRGB JPEG, you will only arrive at a rough display-linear value set that terminates at display-referred 1.0, having compressed and discarded much of the dynamic range from something such as a camera. EXRs, on the other hand, will often offer scene-linear representations that require no transfer curve inversions. With focus on a sensor-like response: a sensor captures largely linearized values, with nonlinear responses near the edges of the sensitivity range.
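For reference, inverting the two-part sRGB formula for an 8-bit stored value looks like this (a sketch; it only recovers the rough display-linear value mentioned above, not scene radiometry):

```python
def srgb_decode_8bit(v):
    """Decode an 8-bit sRGB code value (0..255) to a rough
    display-linear value (0..1) by inverting the two-part curve."""
    c = v / 255.0
    if c <= 0.04045:
        # invert the linear segment near black
        return c / 12.92
    # invert the power segment
    return ((c + 0.055) / 1.055) ** 2.4
```

Note the midpoint: code value 128 decodes to roughly 0.216 display-linear, not 0.5, which is exactly why comparing the raw integer values does not compare intensity ratios.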

troy_s
  • So I should use OpenEXR, convert it to a grayscale image (how? the RGB formulas shouldn't work) and then use the log of it? – user3688217 Jun 08 '15 at 18:04
  • It depends on the source. If the source is a display referred image (0..1) then you can glean no more information than display linear. If the image is scene referred (0..infinity) then you have a much wider latitude of information. For sRGB curved images, invert the two part transfer curve via OCIO or such. I would need more information to provide better instructions. – troy_s Jun 08 '15 at 23:26
  • I want to simulate a special kind of image sensor that is sensitive to changes in intensity. It is a CMOS chip that measures photocurrent on a logarithmic scale and compares that against a positive and a negative threshold to signal "on" and "off" events. My idea is to render a scene in Blender and then compare the grayscale values image by image and pixel by pixel. This works pretty well, but now I want to match the logarithmic scale of the CMOS chip as closely as possible, so I'm looking for the most exact relationship between one of the output formats of Blender and "real" light intensity – user3688217 Jun 09 '15 at 12:52
  • Sensors are not log, but mostly linear, with the aforementioned nonlinear toe and head. To convert to greyscale, a simple 3x3 matrix in OCIO will do it. http://blenderartists.org/forum/showthread.php?357273-How-to-Set-Cycles-Viewport-Render-to-Grayscale – troy_s Jun 09 '15 at 14:54
  • This one is log, so it is capable of dealing with different lighting conditions, but now I know what to do: use EXR, convert to grayscale with OCIO, and then apply the same log transformation that is used in the sensor. Thank you very much! – user3688217 Jun 10 '15 at 08:24
  • Whatever log transform the sensor is applying via hardware voltage will generally be well documented. Make sure to use the correct log transform that matches the footage. – troy_s Jun 10 '15 at 21:45
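The per-pixel comparison described in the comments can be sketched as follows. This assumes scene-linear input (e.g. from an EXR, already reduced to luminance) and a generic natural-log response with symmetric thresholds; the actual sensor's documented log transform and threshold values should be substituted:

```python
import math

def log_events(prev_linear, curr_linear, threshold=0.2, eps=1e-6):
    """Compare the log-intensity change of one pixel between two frames
    against symmetric thresholds, in the style of an event sensor.
    Returns +1 for an "on" event, -1 for an "off" event, 0 otherwise.
    The threshold and epsilon values here are illustrative."""
    # epsilon guards against log(0) for black pixels
    delta = math.log(curr_linear + eps) - math.log(prev_linear + eps)
    if delta >= threshold:
        return 1
    if delta <= -threshold:
        return -1
    return 0
```

Because the comparison happens in log space, doubling the intensity produces the same delta regardless of the absolute level, which is the lighting invariance the question asks about.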