I am working on a computer vision algorithm, and I have a very simple scene of a sphere rotating about the z axis. I would like to know 3D point-by-point correspondences for every frame. I have tried a couple of ways to get this, but neither is entirely satisfactory. One is to use the speed pass to obtain 2D flow vectors, then use multiple cameras to obtain a 3D flow field, which would give me 3D correspondences, but I think it's not accurate enough. Is there a more exact way built into Blender?
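Concretely, for a rigid rotation about the z axis the correspondences I am after are available in closed form; a minimal sketch in plain Python with NumPy (the 5° per-frame rotation and the sample points are just assumed values, standing in for data pulled from the scene):

```python
import numpy as np

def rotate_z(points, angle_rad):
    """Rotate an (N, 3) array of points about the z axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T

# A few sample points on a unit sphere (placeholder values).
pts_frame1 = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])

omega = np.deg2rad(5.0)          # assumed rotation per frame
pts_frame2 = rotate_z(pts_frame1, omega)
flow = pts_frame2 - pts_frame1   # exact 3D flow / correspondences
```

Each row of `pts_frame2` corresponds exactly to the same row of `pts_frame1`, which is the ground truth I would like to extract from Blender rather than reconstruct from rendered flow.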
- What do you want each pixel in the image to encode? – Robin Betts Oct 01 '20 at 08:11
- I am not sure I want it in image format, since if it's just an image I think the flow-field representation would already cover it. The issue is I would need to unproject the flow field using depth, which causes ambiguities. I was thinking more along the lines of sampling a set of points on the mesh, where for every point there's a function that tells me where it ends up between two consecutive frames. Then by subtracting the points I would get the correspondences? – Zaw Lin Oct 01 '20 at 08:17
- Re. achieving the frame-delay without a second monkey (which could just be an Empty, of course, all we need are axes) .. https://blender.stackexchange.com/a/40779/35559 is a good answer for getting previous-frame data into a shader. You would need rotations, too.. – Robin Betts Oct 01 '20 at 14:58
1 Answer
This answer may have missed the point, but maybe it will help clarify the question:
If you subtract the Object space coordinate of the shading point in the space of object A from the Object space coordinate of the shading point in the space of object B, then the colors on the surface-points of B will represent the World space 3D translation required to take them to the corresponding points on A.
This is OK for rigid objects.. if they're deforming, you could bake the space into an image texture, and look up in UV space instead.
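For the rigid case, the translation this construction encodes can be sanity-checked numerically. A sketch with NumPy, where `world_matrix_rot_z` and the assumed 5°-per-frame delay are made-up stand-ins for the two objects' world matrices: for a point with local coordinates `b` on B, the matching world point on A is `M_A @ b`, and their difference is the world-space translation.

```python
import numpy as np

def world_matrix_rot_z(angle_rad):
    """4x4 world matrix for a rotation about z (no translation or scale)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    return M

# B is the visible object; A is its copy, delayed by one frame
# (an assumed 5 degrees of rotation per frame).
M_B = world_matrix_rot_z(np.deg2rad(10.0))
M_A = world_matrix_rot_z(np.deg2rad(5.0))

b = np.array([1.0, 0.0, 0.0, 1.0])    # a surface point in B's object space
p_on_B = M_B @ b                      # its world position on B
p_on_A = M_A @ b                      # the corresponding world point on A

displacement = (p_on_A - p_on_B)[:3]  # world-space translation B -> A
```

The same per-point subtraction, evaluated in the shader at every shading point, is what ends up in the rendered colors.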
Robin Betts
- Thanks, I will try it out! Instead of having two different objects, how can I do the same for a single object that is animated, between frame 1 and frame 2? – Zaw Lin Oct 01 '20 at 14:25
- The only way I can think of atm is to use 2 objects, the animation of one delayed by a frame. It can be hidden in the viewport and/or render without affecting the shader. – Robin Betts Oct 01 '20 at 14:36
- I was playing around with a Python script as well: `v = obj.data.vertices[0]; co_final = obj.matrix_world * v.co`. This gives me the correct vertex location after I set the frame number using the API, so I can loop through all vertices and obtain 3D points and their differences frame by frame. But the problem is I want the data to be denser than the vertices. Do you know a way to get sample points on the surface of the object? – Zaw Lin Oct 01 '20 at 14:42
- I like your method as well, as that means I can just operate in image pixels. If I have 3D points, I would have to project them back to the camera to get correspondences to pixels, not to mention having to handle occlusions. – Zaw Lin Oct 01 '20 at 14:45
- @Zaw :D That's why I'm inclined to let the GPU do it with a shader.. It's what the renderer does. However, I believe you can get at the UV parametric interpolation of the mesh through bpy. I'd have to look it up, and there are experts around who would know without having to.. a good question to ask in its own right. – Robin Betts Oct 01 '20 at 14:48
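One way to get samples denser than the vertices (a sketch in plain NumPy, not using any Blender API): draw barycentric samples on each triangle, weighted by triangle area. Reusing the same (triangle, weights) pairs on the vertex positions of each frame then gives dense point-by-point correspondences, as long as the mesh topology doesn't change.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_on_triangles(verts, tris, n_samples):
    """Uniformly sample points on a triangle mesh.

    verts: (V, 3) vertex positions; tris: (T, 3) vertex indices.
    Returns sampled points plus the (triangle, barycentric) pairs, which
    can be re-evaluated on the vertex positions of a later frame.
    """
    v0, v1, v2 = (verts[tris[:, i]] for i in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    tri_idx = rng.choice(len(tris), size=n_samples, p=areas / areas.sum())
    # Uniform barycentric coordinates via the square-root trick.
    r1, r2 = rng.random(n_samples), rng.random(n_samples)
    u = 1.0 - np.sqrt(r1)
    v = np.sqrt(r1) * (1.0 - r2)
    w = 1.0 - u - v
    bary = np.stack([u, v, w], axis=1)
    pts = (bary[:, :1] * verts[tris[tri_idx, 0]]
           + bary[:, 1:2] * verts[tris[tri_idx, 1]]
           + bary[:, 2:3] * verts[tris[tri_idx, 2]])
    return pts, tri_idx, bary

# Toy mesh: a single triangle (a placeholder for data pulled via bpy).
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2]])
pts, tri_idx, bary = sample_on_triangles(verts, tris, 100)
```

Evaluating the returned `(tri_idx, bary)` pairs against the frame-2 vertex positions (obtained via the `matrix_world` loop above) yields the dense per-point differences directly.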
