
Aligning two point clouds together

Similar to this question, I'm projecting two views (RGBD) back to a 3D point cloud, but I observe severe misalignment.

The two views are generated using the same Blender script. The Blender file is here.

What I did was use the camera intrinsics and extrinsics obtained from rendering to project the two views back to 3D points (in world coordinates).

I converted the exported .exr depth to .npy, and I can confirm that it's identical to loading it directly using cv2.
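For reference, this is roughly how the .exr can be read directly with cv2 (the .exr file name is my assumption, mirroring the .npy path used below; newer OpenCV builds may additionally require the OPENCV_IO_ENABLE_OPENEXR environment variable):

import cv2
import numpy as np

# Read the EXR with float precision and all channels preserved.
depth = cv2.imread("./blender/r_4_400.exr",
                   cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)
np.save("./blender/r_4_400.npy", depth)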

Here's the script for projection:


import json
import os

import numpy as np
import torch
import matplotlib.pyplot as plt
from PIL import Image

def convert(c2w):
    # Convert a camera-to-world matrix in Blender's convention to a
    # world-to-camera (extrinsics) matrix in OpenCV's convention.
    # Blender cameras look down -Z with +Y up; OpenCV looks down +Z with -Y up.
    R, T = c2w[:3, :3], c2w[:3, 3:]
    ww = np.array([[1, 0, 0],
                   [0, -1, 0],
                   [0, 0, -1]])
    R_ = ww @ R.T
    T_ = ww @ (-R.T @ T)
    w2c = np.concatenate((R_, T_), axis=1)
    w2c = np.concatenate((w2c, np.array([[0, 0, 0, 1]])), axis=0)
    return w2c
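A quick sanity check for the conversion (pose values below are made up): the camera centre taken from the c2w matrix must land on the origin of the camera frame, regardless of the axis flip:

# hypothetical c2w: identity rotation, camera centre at (2, 3, 4)
c2w = np.array([[1., 0., 0., 2.],
                [0., 1., 0., 3.],
                [0., 0., 1., 4.],
                [0., 0., 0., 1.]])
w2c = convert(c2w)
print(w2c @ c2w[:, 3])  # expect [0, 0, 0, 1]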

with open("./results_400/transforms.json", 'r') as f: meta2 = json.load(f) ref_frame = meta2["frames"][4]

ref_frame = meta2["frames"][59]

ref_c2w = np.array(ref_frame['transform_matrix']) ref_c2w = convert(ref_c2w) ref_c2w = torch.FloatTensor(ref_c2w)

ref_c2w is now the extrinsics (world-to-camera) matrix.

image_path2 = os.path.join(root_dir, f"{ref_frame['file_path']}.png")
print(image_path2)
ref_img = Image.open(image_path2).resize([400, 400], Image.LANCZOS).convert('RGB')

plt.imshow(ref_img)   # sanity-check the reference view
plt.imshow(src_img)   # src_img: the second (source) view, loaded the same way

focal = 0.5 * 400 / np.tan(0.5 * meta2['camera_angle_x'])
K = np.array([[focal,     0, (400 - 1) / 2],
              [    0, focal, (400 - 1) / 2],
              [    0,     0,             1]])
K = torch.from_numpy(K).float()
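As a numeric cross-check (assuming the usual NeRF-synthetic FOV; the camera_angle_x value below is that assumption, not taken from this scene), the focal length for a 400-px-wide render comes out around 555.6 px:

camera_angle_x = 0.6911112070083618  # assumed, from the NeRF synthetic set
focal = 0.5 * 400 / np.tan(0.5 * camera_angle_x)
print(focal)  # ~555.56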

new_depth = torch.from_numpy(np.load('./blender/r_4_400.npy')).float()
new_depth[new_depth > 1000] = 0   # zero out far-plane / background values
new_depth = new_depth[:, :, 0]    # keep a single depth channel

y_ref, x_ref = torch.meshgrid([torch.arange(0, 400), torch.arange(0, 400)])
y_ref = y_ref.reshape(400 * 400).float()
x_ref = x_ref.reshape(400 * 400).float()

# back-project pixels to camera coordinates: X_cam = depth * K^-1 @ [u, v, 1]^T
xyz_ref = torch.matmul(
    torch.inverse(K),
    torch.stack((x_ref, y_ref, torch.ones_like(x_ref))).unsqueeze(0)
    * new_depth.view(1, -1).unsqueeze(1))

# lift to world coordinates: X_world = w2c^-1 @ [X_cam; 1]
# (ref_c2w holds the w2c matrix after convert(), so its inverse maps camera -> world)
xyz_h = torch.cat((xyz_ref, torch.ones(1, 1, 400 * 400)), dim=1)
points = torch.matmul(torch.inverse(ref_c2w), xyz_h)[:, :3, :]

save_ply(points.reshape(3, -1).permute(1, 0).numpy(),
         torch.from_numpy(np.array(ref_img)).float().permute(2, 0, 1).view(3, -1).permute(1, 0),
         'points.ply')
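For readability, here is the same back-projection collected into one helper; the function name backproject is mine, and it assumes an (H, W) float depth map, the 3x3 K, and the 4x4 world-to-camera matrix produced by convert above:

def backproject(depth, K, w2c):
    # pixel grid in homogeneous coordinates, shape (3, H*W)
    H, W = depth.shape
    y, x = torch.meshgrid([torch.arange(H), torch.arange(W)])
    pix = torch.stack((x.reshape(-1).float(),
                       y.reshape(-1).float(),
                       torch.ones(H * W)))
    # camera coordinates: X_cam = depth * K^-1 @ [u, v, 1]^T
    cam = torch.inverse(K) @ (pix * depth.reshape(1, -1))
    # world coordinates: X_world = w2c^-1 @ [X_cam; 1]
    cam_h = torch.cat((cam, torch.ones(1, H * W)), dim=0)
    world = torch.inverse(w2c) @ cam_h
    return world[:3].T  # (H*W, 3) world-space points

pts_world = backproject(new_depth, K, ref_c2w)  # (160000, 3)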

SCaffrey

1 Answer


Proof it works: https://i.stack.imgur.com/dbYak.jpg

As said in the previous comment, just follow my answer to this question: Depth Pass to Point-cloud (with AN).

I forgot to mention in that answer that you have to add an empty at (0, 0, 0) and then use the draw method of the PointCloudVisualizer. That is because we calculated the Blender world coordinates from the depth image plus the camera coordinates.

Here are the .ply files so you can verify it: files
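If you want to eyeball the alignment, one way is to tint and overlay the two clouds in open3d (the file names below are placeholders for the linked .ply files):

import open3d as o3d

pcd_a = o3d.io.read_point_cloud("view_a.ply")  # placeholder names
pcd_b = o3d.io.read_point_cloud("view_b.ply")
pcd_a.paint_uniform_color([1, 0, 0])  # red
pcd_b.paint_uniform_color([0, 0, 1])  # blue
o3d.visualization.draw_geometries([pcd_a, pcd_b])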

I used the following packages for reference (of course there are 50+ other packages that pip will throw in there as well):

  • blender-2.83.20-candidate+daily.92d3a152391a
  • opencv-contrib-python==4.5.1.48
  • numpy==1.19.2
  • open3d==0.15.1
WhatAMesh
  • Thank you for your suggestion; honestly, I think our code is doing the same thing. Let me give it a try. – SCaffrey Feb 26 '22 at 00:00
  • https://imgtu.com/i/berI4U I ran your script and found that the meshes don't align. I'm not sure if your cam.matrix_world is correct. It should be a 4x4 matrix, but the points are 3D, so cam_mat @ Vector(p) errors. – SCaffrey Feb 26 '22 at 22:12
  • @SCaffrey I edited it into an answer. I hope that it will now also work for you. – WhatAMesh Mar 08 '22 at 01:16