7

Let's say I have a 3D scene with some objects in it. When I render the scene to an image, I would like to get an array with one entry per pixel of the image, each entry holding the X, Y, Z position of the object surface that is visible at that pixel in the 2D rendered image.

What would be the best way to do it?

  • You could use a Python script to set all objects' diffuse_color property to their positions (X being red, Y green, Z blue), then render using the viewport shading so that you don't have any reflections or shadows interfering with the color (see the sketch after these comments). – Markus von Broady Feb 17 '21 at 13:29
  • Maybe use the depth buffer (Z buffer) and correlate it with the RGB values of your "beauty" pass (the final rendered image). Since the depth buffer space is normalized you will have to re-calibrate it manually, but that's the simplest and fastest solution, I guess. – cnisidis Feb 17 '21 at 13:41
  • I would say that this is a matter of "mapping" the values properly. Another solution would be to use the screen space and correlate it with the world space, but again you would hit the same wall. So another idea would be to use known distances from your scene in order to map the values properly afterwards. – cnisidis Feb 17 '21 at 13:44
  • Can you add the [array] tag, please? When I search under the [array] tag in the future, I might return to this thread. – Rita Geraghty Feb 17 '21 at 13:59
  • @RitaGeraghtystandsbyMonica [Array] tag redirects to [Modifiers], whereas, to my understanding, OP meant an array as a datatype in programming. – Markus von Broady Feb 17 '21 at 20:31
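For reference, a minimal sketch of the approach Markus von Broady's first comment describes might look like the following. The ±20 scene bound, the material name and the use of the material's viewport display color (diffuse_color) are assumptions; note this colors each object by its object location, not by the position of every surface point.

import bpy

BOUND = 20  # assumed half-size of the region containing the objects

for obj in bpy.data.objects:
    if obj.type != 'MESH':
        continue
    # Remap the object's location from [-BOUND, BOUND] to [0, 1] per axis
    r, g, b = [(c + BOUND) / (2 * BOUND) for c in obj.location]
    mat = bpy.data.materials.new(name="position_color")  # hypothetical name
    mat.diffuse_color = (r, g, b, 1.0)  # viewport display color
    if obj.data.materials:
        obj.data.materials[0] = mat
    else:
        obj.data.materials.append(mat)

With the viewport set to Solid shading, its color option set to Material and its lighting set to Flat, a viewport render then shows these colors without shading.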

2 Answers

4

You can use the "Position" output from the "Geometry" node to get the per-pixel world position, but the resulting color is only usable for values between 0 and 1 (units).

My solution would be to scale the values. For instance, if you know for sure your objects are contained in a cube going from -20 to +20 on each axis, you can use this setup to remap the values to the 0 to 1 range.

Each channel of the RGB value of each pixel will then represent its real X, Y and Z position. You just have to invert the remap: XYZ(real) = (RGB - 0.5) * 40. For example, a red channel value of 0.75 corresponds to X = (0.75 - 0.5) * 40 = 10.

(screenshot: node setup remapping the Position output to the 0-1 range)
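A node setup along these lines can also be built with a small script instead of manually; the following is only a sketch, assuming a ±20 bound and an Emission shader so the output color is unaffected by lighting (the material name is arbitrary):

import bpy

BOUND = 20  # half-size of the cube known to contain the objects (assumption)

mat = bpy.data.materials.new(name="world_position")  # hypothetical material name
mat.use_nodes = True
nt = mat.node_tree
nt.nodes.remove(nt.nodes['Principled BSDF'])

geometry = nt.nodes.new('ShaderNodeNewGeometry')  # the "Geometry" node
add = nt.nodes.new('ShaderNodeVectorMath')        # Position + BOUND
add.operation = 'ADD'
add.inputs[1].default_value = (BOUND, BOUND, BOUND)
div = nt.nodes.new('ShaderNodeVectorMath')        # ... / (2 * BOUND)  ->  0..1
div.operation = 'DIVIDE'
div.inputs[1].default_value = (2 * BOUND, 2 * BOUND, 2 * BOUND)
emission = nt.nodes.new('ShaderNodeEmission')     # flat, lighting-independent color

nt.links.new(add.inputs[0], geometry.outputs['Position'])
nt.links.new(div.inputs[0], add.outputs['Vector'])
nt.links.new(emission.inputs['Color'], div.outputs['Vector'])
nt.links.new(nt.nodes['Material Output'].inputs['Surface'], emission.outputs['Emission'])

Assigning this material to every object (or setting it as the View Layer's Material Override in Cycles) then gives a render where every pixel encodes the remapped position.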

(screenshot: result with a double array of cubes)

You can CTRL + Right Click to get the RGB values once rendered. Notice how only 2 values change when scrubbing along a face of a cube.

(screenshot: sampling the RGB values of the rendered image)
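To get the full per-pixel array rather than sampling individual pixels, the render can be read back with a script. This is only a sketch, assuming the render was saved as a 32-bit OpenEXR (so the 0-1 values are neither quantized nor gamma-encoded) at a hypothetical path, and that BOUND matches the bound used in the node setup:

import bpy
import numpy as np

BOUND = 20  # must match the range used when remapping in the material
img = bpy.data.images.load("/tmp/positions.exr")  # hypothetical file path

w, h = img.size
# Image.pixels is a flat RGBA float array, rows stored bottom-to-top
rgba = np.array(img.pixels[:], dtype=np.float32).reshape(h, w, 4)

# Invert the remap: 0..1 -> -BOUND..+BOUND, giving an (h, w, 3) array of XYZ per pixel
positions = (rgba[:, :, :3] - 0.5) * (2 * BOUND)
print(positions[h // 2, w // 2])  # world position visible at the image center

Pixels that show the world background don't correspond to any surface, so their values are meaningless.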

Gorgious
  • True, that looks better than my assumption; the Z buffer alone wouldn't be sufficient, you'd also need either the screen space plus the Z buffer, or the world space itself. – cnisidis Feb 17 '21 at 13:46
1

I would approach it by generating an image like so:

import bpy

temp_name = "temp.material.{}"
step = 10
scene_name = "Scene"

def convert(c):
    # sRGB (0-255) to linear, see https://blender.stackexchange.com/a/158902/60486
    c /= 255
    if c < 0.04045:
        return c / 12.92
    else:
        return ((c + 0.055) / 1.055) ** 2.4

# Generator yielding distinct RGBA colors, `step` apart per channel (0-255 range)
color_generator = (
    (r, g, b, 1)
    for r in range(0, 256, step)
    for g in range(0, 256, step)
    for b in range(0, 256, step)
)

def new_material(num, color):
    # Create a temporary material that outputs a flat, unshaded color
    mat = bpy.data.materials.new(name=temp_name.format(num))
    mat.use_nodes = True
    nt = mat.node_tree
    rgb_node = nt.nodes.new('ShaderNodeRGB')
    nt.nodes.remove(nt.nodes['Principled BSDF'])
    nt.links.new(nt.nodes['Material Output'].inputs['Surface'], rgb_node.outputs['Color'])
    rgb_node.outputs['Color'].default_value = tuple(map(convert, color))
    return mat

prev_mats_dic = {}

# Assign a unique temporary material to every mesh object and remember the originals
i = 0
for o in bpy.data.objects:
    if o.type == 'MESH':
        prev_mats = []
        color = next(color_generator)
        print(o.name, color)
        new_mat = new_material(i, color)
        for mat_slot in o.material_slots:
            prev_mats.append(mat_slot.material)
            mat_slot.material = new_mat
        prev_mats_dic[o] = prev_mats
        i += 1

# Render with the 'Standard' view transform so Filmic doesn't alter the colors
view_settings = bpy.data.scenes[scene_name].view_settings
old_view_transform = view_settings.view_transform
view_settings.view_transform = 'Standard'
bpy.ops.render.render(write_still=True)
view_settings.view_transform = old_view_transform

# Restore the original materials, then delete each temporary material once
for o, mats in prev_mats_dic.items():
    temp = None
    for i, mat_slot in enumerate(o.material_slots):
        temp = mat_slot.material
        mat_slot.material = mats[i]
    if temp is not None:
        bpy.data.materials.remove(temp)

This script assigns temporary flat colors to all mesh objects and prints each object's name together with its color to the system console; it then renders an image where the objects show those colors without shading. You can now read that image back, divide each component of the RGB by the step (here 10), and round to the nearest integer, since the rendered image has a slight ±2 variance per channel. Multiplying those components by 10 again gives you the flattened colors if you want to save the image, or you can simply divide the colors printed to the console by 10 so the two match. Either way you end up, for each pixel, with the information of which object is displayed there, though some pixels will fall exactly on the edge between two objects - it's probably best to render at a high resolution with 1 sample (so there is no anti-aliasing).
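A sketch of that decoding step, run outside Blender with Pillow and NumPy (both assumed available); the file path and the color_to_object mapping are placeholders to be filled from the console output (drop the trailing alpha value of 1):

import numpy as np
from PIL import Image  # Pillow, assumed available

step = 10
# Placeholder: fill this from the "name, color" lines printed to the console
color_to_object = {
    (0, 0, 0): "Cube",
    (0, 0, 10): "Cube.001",
}

# Hypothetical path of the rendered image
pixels = np.asarray(Image.open("/tmp/render.png").convert("RGB"), dtype=np.int32)

# Snap each channel to the nearest multiple of `step` to undo the +/-2 variance
snapped = (np.round(pixels / step) * step).astype(int)

h, w, _ = snapped.shape
# For every pixel, look up which object's color it matches (None near object edges)
name_at_pixel = [[color_to_object.get(tuple(snapped[y, x])) for x in range(w)]
                 for y in range(h)]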

Markus von Broady