
I'm rendering a cube with a single light source using the standard white/grey diffuse shader, which gives me a values-only picture. When I then multiply a color onto that image, the color on the different surfaces only changes in brightness.

But when I give the diffuse material the color I multiplied onto the first render, in the new image the color doesn't just change its brightness, but also its saturation.

Why? Is there some simple difference I didn't recognize? 1st picture: values multiplied with the color. 2nd picture: the color used as the material for the diffuse shader.

Value-only image, multiplied with the color.

Cycles render, using the color from above as the material color.

EDIT: I found out that monochromatic colors don't change their saturation.

Xernist

4 Answers


The answer to this issue is "albedo", a very important concept to keep in mind when creating shaders.

Albedo is the reflectance of a material, which in layman's terms means how much of the received light is bounced back (it's a percentage), and therefore how much is absorbed.

Real-world materials neither reflect nor absorb all of the light they receive (materials you'd call "black" in the real world always reflect a little light, while the brightest material you'd call "white" bounces back less than 90% of the light it receives).

That's the main reason why the default cube is 0.8 RGB gray and not 1.0: albedo is effectively encoded in the RGB values you feed to shaders as base colour, so if you gave your material 1,1,1 as its base colour, it would bounce back 100% of the light received, which is physically incorrect.

So when you render your materials, part of the light is absorbed, and the resulting pixel is the light hitting the surface scaled by that albedo (present in the base colour).

I suspect you're trying to reproduce what passes do (multiplying shading by base colour, as we discussed in your other question), but it can't work that way, because you're multiplying the base colour onto the already-rendered surface. Your rendered gray cube has an albedo of 0.8, so it reflected 80% of the light it received. When you render the brown cube, the colour affects the albedo too, so the bounced light is no longer 80%, and that's why you get a difference when you multiply.

In other words, your rendered gray cube isn't exactly the same as a pure shading pass, so multiplying a colour onto it won't produce the same result as rendering a coloured cube.
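
To make the mismatch concrete, here is a minimal Python sketch with made-up numbers (the light intensity and the brown colour are hypothetical, chosen only for illustration):

    # Multiplying a colour onto the rendered gray cube double-counts
    # the gray material's 0.8 albedo.
    light = 1.0                     # hypothetical incoming light intensity
    gray_albedo = 0.8               # default cube's base colour (0.8, 0.8, 0.8)
    colour = (0.55, 0.17, 0.06)     # a hypothetical linear brown base colour

    # 1) Render the gray cube, then multiply the colour onto the image:
    gray_pixel = light * gray_albedo                    # 0.8
    multiplied = tuple(gray_pixel * c for c in colour)  # (0.44, 0.136, 0.048)

    # 2) Render directly with the brown base colour (the colour IS the albedo):
    rendered = tuple(light * c for c in colour)         # (0.55, 0.17, 0.06)

    print(multiplied)  # darker by the 0.8 already baked into the gray render
    print(rendered)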

Gez
  • BTW, please post the hex code for your base orange/brown colour. It's easy to tell its albedo by looking at the maximum RGB value. I suspect that the red channel is a bit above 0.8, right? – Gez Dec 01 '16 at 15:36
  • Color: c3673e; the red channel is actually at around 0.76. – Xernist Dec 01 '16 at 16:33
  • Where are you taking that 0.76 reading? – Gez Dec 01 '16 at 17:08
  • I used the color as a texture from a plain colored canvas in Photoshop, where it says R 195/255 → 195/255 ≈ 0.764. – Xernist Dec 01 '16 at 17:56
  • Keep in mind that in Photoshop you're working non-linearly (most likely sRGB). Those are gamma-corrected values. The albedo is a linearized value (Blender linearizes sRGB colours entered in the colour selector). – Gez Dec 03 '16 at 03:49
  • What do you mean by "linearizes sRGB colors" or non-linear in this case? – Xernist Dec 03 '16 at 10:01
  • You're going deep into the rabbit hole :-) The scene you render has linear light ratios, but human perception is not linear, so images have to go through a transform that takes them from linear to a more perceptually uniform tone distribution. Does "gamma correction" ring a bell? So, when you save an sRGB image from your Blender renders, that image is "gamma corrected" (gamma being a non-linear curve applied to your image to make it look right on your display). – Gez Dec 04 '16 at 23:58
  • In order to use it in Blender, or in any program that tries to mimic how light works in the physical world, you need to revert that correction and make it linear again. That's what Blender does when you import any sRGB image to use as a texture in a shader. So, when you read the RGB value of the same pixel in Photoshop (sRGB, non-linear) and in Blender (linear), you get different values. The numbers that matter to us in terms of shaders and albedo are the linearized ones: for non-emissive shaders, values from 0 to 1, translating to 0-100% albedo. – Gez Dec 05 '16 at 00:01
  • So in other words, a linear value gradient would not be seen as a consistent increase in intensity, and therefore the higher values get raised to a power so that it feels to us like there's a "normal" gradient? And does this (also) explain why I get a different image in PS than in Blender? (Like the change in saturation mentioned before? And yeah, it's driving me a bit crazy. :P) – Xernist Dec 07 '16 at 17:44
  • It's not exactly that, but you're getting closer. The differences you get in Photoshop while compositing are mostly because in PS you're compositing layers that are already gamma-corrected, while in Blender you're compositing linear images that get gamma-corrected at the tail end, when everything is already composited. Linear compositing works closer to how things work in reality; compositing non-linear images is legacy and should be avoided as much as possible. It doesn't help that PS, the de-facto standard bitmap manipulation program, works that way. – Gez Dec 07 '16 at 19:47
  • It's yet another layer of understanding colour. You might find this chat room interesting if you want to go deeper: https://chat.stackexchange.com/rooms/34814/the-rabbit-hole – Gez Dec 07 '16 at 19:49
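
To illustrate the linearization discussed in the comments above, here is a minimal Python sketch using the standard sRGB decoding formula and the c3673e colour from the thread:

    def srgb_to_linear(c):
        """Decode one sRGB channel (0..1) to linear light."""
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    hex_colour = "c3673e"
    srgb = [int(hex_colour[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [srgb_to_linear(c) for c in srgb]

    print(srgb)    # ~[0.765, 0.404, 0.243]  <- what Photoshop shows
    print(linear)  # ~[0.546, 0.136, 0.048]  <- what the shader actually uses

So the ~0.76 read in Photoshop is a gamma-corrected value; the linear albedo of the red channel is closer to 0.55.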

To summarize things: it's all about the transformation of the value information from the rendered scene to the final sRGB display view.

My central question was: why does the color of surfaces desaturate in areas of higher light intensity?

And here is the thing: when transforming the scene data to sRGB image data, Blender ignores the fact that natural media (like the film in analog cameras or the photoreceptors in our eyes) are not sensitive to just a narrow band of red/green/blue, but respond (to some degree) to a whole range of wavelengths. That means that if, for example, a "red-channel" sensor/layer is stimulated strongly enough, the otherwise weak stimulation of blue/green becomes so strong that the combined information creates an impression of white. In Blender, by contrast, values for red only influence the red channel, etc., so red surfaces are displayed red even as the light intensity rises towards infinity.
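
As a toy illustration of that difference (this is not Blender's actual transform; the crosstalk model is a made-up stand-in for film/retina behaviour):

    # A pure per-channel mapping clips a bright red to (1, 0, 0) forever,
    # while a response with crosstalk between channels drifts towards white.
    def per_channel_clip(rgb):
        return tuple(min(c, 1.0) for c in rgb)

    def with_crosstalk(rgb, bleed=0.05):
        # a strongly stimulated channel leaks a fraction of its
        # energy into the other channels (crude, hypothetical model)
        total = sum(rgb)
        return tuple(min(c + bleed * (total - c), 1.0) for c in rgb)

    for intensity in (1.0, 4.0, 16.0, 64.0):
        red = (intensity, 0.0, 0.0)
        print(intensity, per_channel_clip(red), with_crosstalk(red))
    # per-channel stays (1, 0, 0); crosstalk desaturates towards (1, 1, 1)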

For further information please visit this very helpful thread: Render with a wider dynamic range in cycles to produce photorealistic looking images

Xernist
  • Link only answers are discouraged as if the link goes down then so does the answer (yes, this includes Stack Exchange links). Please include at least some of the steps that solve your question in the answer body itself. – Ray Mairlot Jan 30 '17 at 09:25

How are you multiplying the color with the grayscale image? Odds are that you're not multiplying in linear color space but in sRGB space. Cycles does all of its calculations in linear space, and only after a pixel is rendered is it converted to sRGB for display.
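
Here's a minimal Python sketch of that difference (the pixel values are hypothetical; the conversion functions are the standard sRGB formulas). Note how the channel ratios, i.e. the saturation, diverge most in darker areas:

    def to_linear(c):   # standard sRGB decoding
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def to_srgb(c):     # standard sRGB encoding
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    tint_srgb = (0.76, 0.40, 0.24)   # roughly the c3673e color from above
    gray_srgb = 0.2                  # a dark pixel of the grayscale render

    # Multiply on the saved, gamma-encoded image (what an image editor does):
    in_srgb = tuple(gray_srgb * c for c in tint_srgb)

    # Decode to linear, multiply, re-encode (what the renderer effectively does):
    g = to_linear(gray_srgb)
    in_linear = tuple(to_srgb(g * to_linear(c)) for c in tint_srgb)

    print(in_srgb)    # ~(0.152, 0.080, 0.048)
    print(in_linear)  # ~(0.142, 0.055, 0.016)  <- different channel ratios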

See also http://filmicgames.com/archives/299

  • Yeah, that's another thing I didn't know at the beginning: in fact, I tried to multiply a color onto an already gamma-corrected image. – Xernist Jan 30 '17 at 13:04

Sometimes when you get unexpected color variations it can be due to color management. It might be worth looking at the color management settings in the Scene tab.

Also, how are you "multiplying by a color" and how are you looking at the resulting image? For example, if you're saving an image from Blender and looking at it using an image viewer, you might get different visual colors than when you look at the image inside Blender, as there may be different color management settings between Blender and the image viewer. This is especially likely to produce unexpected colors if you have a monitor that's not accurately compliant to a standard colorspace such as sRGB, because if Blender or your image viewer aren't compensating for the monitor (by using a profile file), then the colors will be inaccurate.

And if you're taking the grayscale image from Blender and multiplying by a hue in, say, the GIMP, it's quite likely that the math will be different to Cycles. :) After all, it would be pixel math, rather than lighting-driven.

Dan Bennett
  • It's not that the colors look different in other viewers; it's more the question of why the saturation also seems to change in darker areas. I mean, in the end it's mostly a light-intensity/brightness change, but the R,G,B value ratios / the saturation should stay the same. (Given that one uses a white light source and a black scene to avoid global illumination, etc., like in my example.) Sorry for maybe confusing somebody. – Xernist Dec 01 '16 at 07:43
  • Colour management has nothing to do with this. The problem here is that the OP is trying to reproduce a shading*colour result (like when you multiply the colour pass by the shading passes), using as a source an image that is not a pure shading pass but a rendered composite. See my answer for more details.