
I'm testing code from this question:

```python

import bpy

width = 800
height = 400

image = bpy.data.images.new("testimagepacked", width=width, height=height, float_buffer=True)

pixels = [None] * width * height

for x in range(width):
    for y in range(height):
        # Blue channel intentionally runs far outside [0, 1].
        pixels[(y * width) + x] = [float(x) / width, float(y) / height, -y + x, 1.0]

# Flatten to a single list of channel values; image.pixels expects a flat sequence.
pixels = [chan for px in pixels for chan in px]
image.pixels = pixels
image.update()

```

I noticed the code above generates different colors depending on `float_buffer` being True or False:

(screenshots: the image generated with float_buffer=True vs. float_buffer=False)

From what I understand, pixel (RGBA) values for 8-bit images are in the range [0..1], so:

1) How is 32bpc different in that regard?

2) Is there a way to create 8bpc and 32bpc images that show visually identical colors from the same code (with different RGB values, maybe)?

3) Also, if I had an 8-bit image, how could I transfer its pixel values to a 32-bit image and keep the result visually consistent?

EDIT: Investigating the issue; references:

- How Blender loads images
- Python API reference
- linear to sRGB formula

```python

import bpy

width = 100
height = 50

image = bpy.data.images.new("float buffer true", width=width, height=height, float_buffer=True)
pixels = [None] * width * height

for x in range(width):
    for y in range(height):
        pixels[(y * width) + x] = [float(x) / width, float(y) / height, -y + x, 1.0]

image.pixels = [chan for px in pixels for chan in px]
image.update()


image = bpy.data.images.new("float buffer false", width=width, height=height, float_buffer=False)
pixels = [None] * width * height

for x in range(width):
    for y in range(height):
        pixels[(y * width) + x] = [float(x) / width, float(y) / height, -y + x, 1.0]

image.pixels = [chan for px in pixels for chan in px]
image.update()


def convert_to_srgb(val):
    # Standard linear -> sRGB transfer function.
    if val <= 0.0031308:
        return val * 12.92
    else:
        return 1.055 * (val ** (1.0 / 2.4)) - 0.055


image = bpy.data.images.new("float buffer false converted", width=width, height=height, float_buffer=False)
pixels = [None] * width * height

for x in range(width):
    for y in range(height):
        pixels[(y * width) + x] = [convert_to_srgb(float(x) / width),
                                   convert_to_srgb(float(y) / height),
                                   convert_to_srgb(-y + x),
                                   1.0]

image.pixels = [chan for px in pixels for chan in px]
image.update()

```
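For question 3 above (moving pixel values from a byte-buffer image into a float-buffer image while keeping them visually consistent), the transform has to run the other way. A minimal sketch of the inverse, using the standard sRGB decode formula (plain Python, no Blender API; `srgb_to_linear` is my own helper name):

```python
# Inverse of convert_to_srgb: decode an sRGB-encoded value back to linear.
# This is the standard sRGB piecewise transfer function, not a Blender call.
def srgb_to_linear(val):
    if val <= 0.04045:
        return val / 12.92
    else:
        return ((val + 0.055) / 1.055) ** 2.4

def convert_to_srgb(val):
    # Same forward transform as in the test code above.
    if val <= 0.0031308:
        return val * 12.92
    else:
        return 1.055 * (val ** (1.0 / 2.4)) - 0.055

# The round trip should be numerically lossless:
assert abs(srgb_to_linear(convert_to_srgb(0.5)) - 0.5) < 1e-9
```

Reading `image.pixels` from a byte-buffer image, mapping each RGB channel through `srgb_to_linear`, and writing the result into a float-buffer image should then display the same.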

This code's result:

(screenshot: all three generated images)

And color comparison at one point:

(screenshots: sampled color values for img1, img2, img3)

So my question now (since the original question found its answer): is this behavior expected? Am I missing a point here? I mean: if I write a given color value directly to a pixel, shouldn't the result be the same regardless of whether the image has float_buffer True or False? Especially since Blender states it does all color data manipulation in linear values (that may still be true; 8-bit image values just seem to be converted on the fly somewhere that isn't exposed to the Python API).

Why is this confusing for me? Because I'd understand why 8bpc might store raw pixel data differently from EXRs, like troy_s mentioned (though I'd assume Blender should store all image data in a standardized way). But loading, for example, a 16bpc PNG converts it to a float buffer and stores its data like a 32bpc (EXR) image. That seems very odd and inconsistent to me. I would expect all images loaded into Blender to fit the float32 buffer, converting (sRGB -> linear) all 8-bit images to this format as well. Instead we have two different data formats: one (for 8-bit) holds raw sRGB values, the other (for anything above) holds linear values. Confusing.

EDIT:

Digging through the Blender source code: there are two formats for holding image buffer values ('byte' when referring to 8bpc and 'float' for 32bpc). It would be great if someone who actually knows the design could explain how that works.

Here, PNGs are loaded into a different image buffer type depending on the PNG's bpc.

OK, my assumptions were correct; from IMB_imbuf_types.h:

```c

/* pixels */

/** Image pixel buffer (8bit representation):
 * - color space defaults to `sRGB`.
 * - alpha defaults to 'straight'.
 */
unsigned int *rect;
/** Image pixel buffer (float representation):
 * - color space defaults to 'linear' (`rec709`).
 * - alpha defaults to 'premul'.
 * \note May need gamma correction to `sRGB` when generating 8bit representations.
 * \note Formats that support more than 8 bits per channel load as floats.
 */
float *rect_float;

```
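The first `\note` explains the earlier results: to show (or save) a float-buffer pixel in an 8-bit representation, Blender has to sRGB-encode it and quantize it to 256 codes. A rough sketch of that step in plain Python (`float_to_byte` is a hypothetical helper for illustration, not a Blender function):

```python
# Sketch of the "gamma correction to sRGB when generating 8bit representations"
# step: encode the linear value with the sRGB transfer curve, then quantize.
# float_to_byte is a hypothetical helper, not part of the Blender API.
def float_to_byte(linear):
    if linear <= 0.0031308:
        s = linear * 12.92
    else:
        s = 1.055 * (linear ** (1.0 / 2.4)) - 0.055
    # Quantize to one of 256 codes, clamping out-of-range values.
    return max(0, min(255, round(s * 255)))

# Scene-referred values above 1.0 (like the blue channel earlier) clamp to 255.
```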

The question still remains, though: why? Wouldn't it be better to convert all images to linear values and a single buffer type?

susu
kilbee
  • If I understand your question, it amounts to mixing apples and oranges. Float 32 bit is scene referred in the context of Blender and many other compositing applications, while 8 bit is display referred. Entirely different fish. – troy_s Mar 09 '17 at 11:03
  • Well, actually, it seems it's not that simple: https://docs.blender.org/manual/en/dev/data_system/files/media/image_formats.html#channel-depth

    Internally Blender’s image system supports either:

8 bit per channel (4 × 8 bits), or 32 bit float per channel (4 × 32 bits), using 4× as much memory.

    – kilbee Mar 09 '17 at 11:29
  • This is exactly what the float_buffer parameter does to an image. And from what I discovered, it seems that an 8bpc image holds pixel information as sRGB values, while a 32bpc image holds pixel values as linear values. I know it seems weird (it is, at least for me), but it looks like this is the case: I can convert pixel values from float_buffer=True to float_buffer=False with this formula: http://excamera.com/sphinx/article-srgb.html and then the output images look very similar visually. – kilbee Mar 09 '17 at 11:33
  • Google Scene Referred versus display referred. Two totally, utterly, entirely different types of models. – troy_s Mar 09 '17 at 11:35
  • I understand the difference between display and actual color spaces. The issue here is that Blender can load images in two ways: https://docs.blender.org/api/current/bpy.types.Image.html?highlight=use_generated_float#bpy.types.Image.use_generated_float which seem to store pixel color information in two ways. I understand it shouldn't have any connection to the sRGB-to-linear conversion - hence my confusion. – kilbee Mar 09 '17 at 11:41
  • Display referred imagery is encoded nonlinearly, subject to a custom nonlinear transform. EXRs are encoded scene referred linearly by convention, and typically require no such nonlinear decoding. – troy_s Mar 09 '17 at 11:48
  • So... this would mean (following the first link to the Blender docs) that any 8bpc image loaded into Blender stores its raw pixel information as sRGB values, and anything above 8bpc (including, for example, 16bpc PNGs) is converted to linear 32bpc raw pixel data... (If that doesn't seem to sound right - it actually is true - I tested it with the sRGB-linear conversion above.) Assuming this is true - would you say this is the proper way of handling 8bpc image information? – kilbee Mar 09 '17 at 11:56
  • Edited the question with my tests. I wonder if this is the correct way of handling things, or maybe it's some sort of remnant of an old design. – kilbee Mar 09 '17 at 12:39
  • @kilbee I've noticed that when creating an image Blender is choosing a different Input Color Space at creation depending on whether it's set to 32-bit float or not. This can be seen in the Outliner set to Datablocks and opening up the image and Color Space Settings. When creating a non-32-bit-float image it's using sRGB and for 32-bit-float it's Linear. Possibly this is causing the difference. Also, bear in mind that the Blue channel in the examples is varying well outside the normal range (ie, in the hundreds - written that way to prove it was actually packing as float) – Rich Sedman Mar 09 '17 at 15:43
  • Yep, that's why blue is clipped to 1 in all tests, but that's just this example, so it doesn't really affect the issue. So any idea why Blender handles 8bpc images like this? Old design compatibility issue? Or some other matter I'm not aware of? – kilbee Mar 09 '17 at 16:39
  • Egads. There is no clipping. It is all based on the view control, which by default is the sRGB EOTF. If the reference were different primaries, we would need to convert those as well. Again, Google scene referred versus display referred and the reasons and background on what is happening should become clear. The data values in float are scene referred, zero to infinity (32 max float currently of course) while the values in a display referred output have a transfer curve of some sort. Default is the sRGB EOTF, but it can be other things (see Filmic.) – troy_s Mar 09 '17 at 19:43
  • I would guess (it is just a guess) that it's a matter of making a reasonable assumption from the data source. If it's 8-bit from a PNG it's very likely to represent color data and so assuming sRGB would be a reasonable choice. If it's 32-bit float then it's much more likely to be data or non-encoded intensities so linear would be the best choice. Enabling 'float buffer' implies that you'll be doing manipulation on it or that it's more than just color data so it's a fair assumption to assume linear. You can use the Image File Color Space of the Image Texture node to indicate appropriate content – Rich Sedman Mar 09 '17 at 20:11
  • troy_s, I understand the basics of color spaces and curve transformations - how do you think I even thought about testing the different results I had with the same pixel data on different image buffers with a linear->sRGB transformation? Could you read this line (from the link I provided above) and explain it to me: "unsigned int *rect; /* pixel values stored here */"? – kilbee Mar 09 '17 at 20:54
  • Rich, as i wrote in main post - i would understand this reasoning (though, not fan of it), but how would you categorize 16b PNG then? – kilbee Mar 09 '17 at 20:56
  • Also, run this code http://pastebin.com/MDfZJW4M and right-click on both images and read the values. How to explain this, if not that the 8-bit int image buffer is literally what Blender uses for the float_buffer=False image buffer? – kilbee Mar 09 '17 at 21:33
  • Well, this is what I found about 16b PNGs https://developer.blender.org/rBSd7f55cff20ff31e80d95c7837e9b45205dea273f and also here is Campbell stating the byte buffer (8bpc) is always non-linear https://developer.blender.org/T27997#123404 This is the part I don't understand: why is it better to store it non-linear? – kilbee Mar 09 '17 at 22:14
  • @kilbee Because you cannot quantise linear correctly at 8 BPC. I believe both 16 bit TIFF and the abortion of a format of 16 bit PNG are both display referred as well, at higher depth. – troy_s Mar 11 '17 at 22:40
  • This exactly is source of my confusion - I'd expect that, but it actually seems that anything above 8bpc is loaded into float buffer (since 2012, see my previous comment and here: https://wiki.blender.org/index.php/User:Nazg-gul/Foundation/2012#Week_54:10th-_16th_September ), which is always linear. Now that I know all that I can work with it and get predictable results, but it is still unclear to me how those image buffers are used when byte (8bpc) and float (32bpc) image buffers are used together when rendering etc. It is however out of scope of my current work, so this is pure curiosity. – kilbee Mar 12 '17 at 12:27
  • @kilbee Depends on context. In the compositor, all values are promoted to 32 bit. In the VSE, 8 bit on operations. It is a bit of a mess. Be careful though, RGB is relative and hard coding transfer functions etc. is the dead wrong way to accomplish things. – troy_s Mar 13 '17 at 01:03
  • Yeah, I've been thinking about that and decided to go along with current limitations, hoping for a change in future maybe. See - if that's true about compositor related conversion - this is exactly what I meant by "on the fly conversion" and that's why I find 8bpc buffers unexpected (point about memory savings is no more valid in this situation and only adds an overhead with conversion). I would expect this to change in future, unless of course I'm missing a point here (though I've been asking around and haven't got any reasonable points). – kilbee Mar 13 '17 at 10:45

1 Answer

8-bit textures are not precise enough to store linear colors. If you try to store a linear color space in an 8-bit texture you get banding artifacts. 8-bit textures are good for storing sRGB images, as sRGB is a perceptually linear color space.

A perceptually linear color space matches how your eyes/mind perceive colors, and therefore needs less precision. Linear color spaces are normally used in render engines, because that is how nature/physics works with colors and light.
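The banding argument can be made concrete with a quick count (plain Python, assuming the standard sRGB decode formula): of the 256 codes an 8-bit channel offers, compare how many cover the dark linear range [0, 0.01] under each storage scheme.

```python
def srgb_to_linear(v):
    # Standard sRGB decode: display-encoded value -> linear intensity.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# Codes landing in linear [0, 0.01] if the byte stores linear values directly:
linear_codes = sum(1 for i in range(256) if i / 255.0 <= 0.01)
# Codes landing in the same range if the byte stores sRGB-encoded values:
srgb_codes = sum(1 for i in range(256) if srgb_to_linear(i / 255.0) <= 0.01)

print(linear_codes, srgb_codes)  # 3 vs 26: sRGB spends far more codes on darks
```

With only 3 linear codes for that whole dark range, a gradient visibly bands, which is exactly why the byte buffer keeps sRGB-encoded values.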

J. Bakker