I want to scatter multiple objects inside the view frustum of the active camera. In theory it is easy:
- Generate a random 3D Vector with values in [0..1]
- Read the projection and camera matrices of the camera
- Invert them and multiply them with the vector from step 1
- Set the object's position to the result
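The steps above can be sketched in plain NumPy, without bpy, using a hypothetical stand-in perspective matrix rather than Blender's actual one. Two details the sketch assumes: an OpenGL-style projection whose normalized coordinates run in [-1, 1] (so the random values are drawn from that range rather than [0, 1]), and a perspective divide by the w component after the inverse projection:

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """Hypothetical OpenGL-style perspective matrix (column-vector convention)."""
    f = 1.0 / np.tan(fov_y / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def scatter_in_frustum(camera_to_world, projection, n, rng):
    """Unproject n random normalized points into world space."""
    ndc = rng.uniform(-1.0, 1.0, size=(n, 3))  # step 1, adjusted to [-1, 1]
    inv_proj = np.linalg.inv(projection)       # steps 2 and 3
    points = []
    for p in ndc:
        clip = inv_proj @ np.append(p, 1.0)    # column-vector order: M @ v
        cam = clip / clip[3]                   # perspective divide
        world = camera_to_world @ cam          # camera space -> world space
        points.append(world[:3])
    return np.array(points)

# Usage with a camera at the origin looking down -Z (identity camera_to_world):
rng = np.random.default_rng(0)
proj = perspective(np.radians(50), 16 / 9, 0.1, 100.0)
points = scatter_in_frustum(np.eye(4), proj, 10, rng)
```

Re-projecting the resulting points with the same matrix should land every one of them back inside the [-1, 1] cube, which is a cheap sanity check for any variant of this pipeline.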
But in practice this seems impossible. Reading the transform matrices is a challenge in itself, but with some adjustments, an answer from an older question about the projection matrix seems to work:
import bpy

camera = bpy.context.scene.camera  # the active camera object
camera_to_world = camera.matrix_world
# calc_matrix_camera is defined on the camera data, not the object
view_to_camera = camera.data.calc_matrix_camera(
    bpy.context.evaluated_depsgraph_get(),
    x=bpy.context.scene.render.resolution_x,
    y=bpy.context.scene.render.resolution_y,
    scale_x=bpy.context.scene.render.pixel_aspect_x,
    scale_y=bpy.context.scene.render.pixel_aspect_y,
)
Transforming the point should be straightforward, assuming the multiplication order is correct. The camera_to_world matrix we read already points in the right direction, saving us one inversion.
import numpy as np

point_camera = np.append(point_view, 1) @ view_to_camera.inverted()
point_world = point_camera @ camera_to_world
obj.location = point_world[:3]  # location takes only x, y, z
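One thing worth double-checking in a setup like this: with a row vector on the left, v @ M is the same as M.T @ v, so pre- and post-multiplying only agree if the matrix is also transposed; mathutils (since Blender 2.8) expects the column-vector order, matrix @ vector. A minimal NumPy demonstration with generic stand-in values:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))  # stand-in for a transform matrix
v = rng.standard_normal(4)       # stand-in for a homogeneous point

row_result = v @ M               # row-vector convention
col_result = M @ v               # column-vector convention

# The row-vector form equals the column-vector form of the transpose ...
assert np.allclose(row_result, M.T @ v)
# ... and generally differs from M @ v for a non-symmetric matrix.
assert not np.allclose(row_result, col_result)
```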
However, the end result does not look pretty:
What am I missing here? There are 31 other ways to combine the matrices, vectors and inverses, but I don't think the issue lies there.