
I want to set an object at a certain distance (or depth) from the camera, and at a particular pixel.

Example:

I set up a camera at a random position in Blender and want a 1280x720 image. Suppose a line comes out of the camera at pixel (400, 400), perpendicular to the image plane, and travels 1300 millimeters; it ends up at some position (x, y, z) in the Blender world. That is the point where I want the bottom centre of my object to be. How do I calculate this location (x, y, z)?



Some more context: I have an image of a scene with a table, and the corresponding depth image, which stores for each pixel the distance in millimeters from the scene to the camera. I also have the segmentation mask of the table, so I know the distance to the camera for every point on the table. I want to place an object on this table by rendering the scene as the background and rendering the object on top of it at the correct location relative to the camera.

Math_Max

  • I guess you are actually searching for a technique for matching the perspective of the photo with the rendered image from the camera, which is done with the fSpy addon as of 2.8+. See e.g. https://gumroad.com/l/fSpyTute for an explanation. – Mr Zak Jul 19 '20 at 14:55
  • @MrZak Thanks, but this only aligns the camera position/rotation with that of my picture. (I am not sure this is better than a random camera position with the image as background; in any case, I also have my camera parameters already.) Then I still need to figure out how to set an object a certain depth from that camera. – Math_Max Jul 19 '20 at 16:23
  • You need a camera matrix (https://en.wikipedia.org/wiki/Camera_matrix). For Blender, see https://blender.stackexchange.com/questions/15102/what-is-blenders-camera-projection-matrix-model, https://blender.stackexchange.com/questions/38009/3x4-camera-matrix-from-blender-camera, and https://blender.stackexchange.com/questions/108938/how-to-interpret-the-camera-world-matrix; a minimal sketch of retrieving both matrices follows. – susu Jul 20 '20 at 15:34
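
For reference, a minimal sketch of pulling both matrices from a Blender 2.8+ scene, along the lines of the linked answers. The camera object name "Camera" is an assumption:

import bpy

scene = bpy.context.scene
cam = bpy.data.objects["Camera"]  # assumed camera object name
depsgraph = bpy.context.evaluated_depsgraph_get()

# Extrinsics: the camera pose (rotation + translation) in world space.
world_matrix = cam.matrix_world

# Projection matrix for the current render resolution (e.g. 1280x720).
projection = cam.calc_matrix_camera(
    depsgraph,
    x=scene.render.resolution_x,
    y=scene.render.resolution_y,
    scale_x=scene.render.pixel_aspect_x,
    scale_y=scene.render.pixel_aspect_y,
)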

2 Answers


I think the following should work:

Set empty objects at every corner of the image: pinLT, pinRT, pinLB, pinRB. Then compute the rays from the camera origin to those pins:

# camera, pinLT, pinRT, pinLB are objects already placed in the scene
eye = camera.matrix_world.translation         # camera origin in world space
rayLT = pinLT.matrix_world.translation - eye  # ray to the left-top pin
rayRT = pinRT.matrix_world.translation - eye  # ray to the right-top pin
rayLB = pinLB.matrix_world.translation - eye  # ray to the left-bottom pin
# rayRB is not needed: three corners span the image plane

Then interpolate to find the ray corresponding to the pixel:

# px, py: pixel coordinates; width, height: render resolution in pixels
h = (rayRT - rayLT) * px / width
v = (rayLB - rayLT) * py / height

ray = rayLT + h + v

Calculate the result in world coordinates:

resultOnImage = eye + ray
resultInWorld = resultOnImage + ray.normalized() * desired_distance
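
The same idea works without manually placed empties: `Camera.view_frame()` returns the four frame corners directly. A minimal end-to-end sketch of that variant, assuming a camera object named "Camera", a 1280x720 render, a scene in meters (so 1300 mm = 1.3 units), and distance measured from the camera along the ray:

import bpy
from mathutils import Vector

scene = bpy.context.scene
cam = bpy.data.objects["Camera"]  # assumed camera object name

# Frame corners in camera-local space; derive left/right/top/bottom from
# the coordinates instead of relying on the order they are returned in.
local = cam.data.view_frame(scene=scene)
xs = [c.x for c in local]
ys = [c.y for c in local]
z = local[0].z
lt = cam.matrix_world @ Vector((min(xs), max(ys), z))  # left-top, world space
rt = cam.matrix_world @ Vector((max(xs), max(ys), z))  # right-top
lb = cam.matrix_world @ Vector((min(xs), min(ys), z))  # left-bottom

def pixel_ray(px, py, width=1280, height=720):
    """World-space origin and direction of the ray through pixel (px, py),
    with (0, 0) taken as the top-left pixel."""
    eye = cam.matrix_world.translation
    p = lt + (rt - lt) * (px / width) + (lb - lt) * (py / height)
    return eye, (p - eye).normalized()

eye, direction = pixel_ray(400, 400)
target = eye + direction * 1.3  # 1300 mm from the camera

`target` is then the world-space point where the bottom centre of the object can be placed.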

Sla.Va

This is called the Inverse Projection Transformation or back-projection: 2D pixel -> 3D world coordinate. I only needed the 3D world coordinate relative to the camera. This worked for me:

  1. Set up the camera at location (0, 0, 0) and rotation (90, 0, 0) (degrees) with the correct focal length and sensor size.

  2. Get the camera intrinsics fx, fy (the focal lengths for x and y, in pixels) and cx, cy (the optical centre).

  3. Do the back-projection. Here (px, py) is your pixel coordinate and z is the depth at that pixel (taken from the depth map):

x = (px - cx) * z / fx

y = (py - cy) * z / fy

Your coordinate relative to the camera is then (x, y, z). So with my camera setup, I can set the object at location (x, z, -y), divided by 1000 because the depth is measured in millimeters.
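
For completeness, a minimal Blender Python sketch of steps 2 and 3; the object names, horizontal sensor fit, square pixels, and zero lens shift are assumptions:

import bpy

scene = bpy.context.scene
cam_obj = bpy.data.objects["Camera"]  # assumed name; at (0, 0, 0), rotated (90, 0, 0) degrees
cam = cam_obj.data

width = scene.render.resolution_x   # e.g. 1280
height = scene.render.resolution_y  # e.g. 720

# Step 2: intrinsics from the camera settings
# (assumes horizontal sensor fit and square pixels).
fx = width * cam.lens / cam.sensor_width  # focal length in pixels
fy = fx                                   # square pixels
cx = width / 2.0                          # optical centre: no lens shift assumed
cy = height / 2.0

def back_project(px, py, z_mm):
    """Step 3: pixel (px, py) plus depth z_mm -> point relative to the camera, in meters."""
    z = z_mm / 1000.0  # depth map is in millimeters, scene in meters
    x = (px - cx) * z / fx
    y = (py - cy) * z / fy
    return x, y, z

x, y, z = back_project(400, 400, 1300)
obj = bpy.data.objects["MyObject"]  # hypothetical object to place
obj.location = (x, z, -y)           # remap per the camera setup above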

Math_Max