I wrote a simple Python script to render a scene from different points of view. For each rendered image I also need to save the projection matrix of the camera that took the picture.
I try to get the projection matrix with calc_matrix_camera(), but it seems that even if I change the location/orientation of the camera, the resulting matrix is always the same. Is the matrix returned by calc_matrix_camera() an intrinsic matrix? Is it premultiplied with the extrinsic matrix? Can anyone point me in the right direction?
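To clarify the terminology: by "extrinsic" I mean the world-to-camera transform that changes whenever the camera moves, and by "intrinsic" the pose-independent part. My (unverified) assumption of how this maps onto the Blender API is roughly:

import bpy

camera = bpy.context.scene.camera
# Extrinsic part: world space -> camera space; this changes whenever the camera moves.
extrinsic = camera.matrix_world.inverted()
# My question is whether calc_matrix_camera() returns only the remaining intrinsic
# part (camera space -> clip space) or a full projection already premultiplied by this.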
I am running the script from the command line.
Here's the code I use to get the projection matrix:
import bpy
import numpy as np

# steps, angle, subject and look_at() are defined earlier in the script
for step in range(0, steps):
    radians = np.radians(step * (angle / steps))
    subject.location = (250 * np.cos(radians), 250 * np.sin(radians), 150)
    look_at(subject, (0, 0, 100))

    # Make sure the dependency graph reflects the new transform before reading the matrix
    dg = bpy.context.evaluated_depsgraph_get()
    dg.update()

    projection_mat = bpy.context.scene.camera.calc_matrix_camera(
        dg,  # also tried: bpy.context.view_layer.depsgraph
        x=bpy.context.scene.render.resolution_x,
        y=bpy.context.scene.render.resolution_y,
        scale_x=bpy.context.scene.render.pixel_aspect_x,
        scale_y=bpy.context.scene.render.pixel_aspect_y,
    )
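Right after that call, still inside the loop, I intend to save one matrix per rendered image roughly like this; the multiplication with the inverted world matrix is only my assumption and may well be the part I am getting wrong:

    # Assumed combination: calc_matrix_camera() result (intrinsic?) times the
    # inverse world matrix (extrinsic); this is exactly what I am unsure about.
    extrinsic_mat = bpy.context.scene.camera.matrix_world.inverted()
    full_projection = projection_mat @ extrinsic_mat
    # One matrix per rendered image; the file name pattern is just an example.
    np.save(f"projection_{step:04d}.npy", np.array(full_projection))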
bpy.data.scenes["Scene"].view_layers["View Layer"].depsgraphdoes not solve the problem for me. – 0n430w7 Feb 08 '23 at 11:16