9

I'm new to Blender and python and here is what I need:

I have a 3D model in Blender and I've set camera parameters to view a specified region of the model. The following is the rendered image:

[Rendered image: the specified region of the model, with some of the vertices whose 3D coordinates are needed circled in red.]

I need to know the 3D coordinates of the vertices that are visible in the rendered image. Some of the points whose 3D coordinates I need are circled in RED in the figure.

I've seen some sources that mention using the "bpy_extras.view3d_utils" functions to extract 3D coordinates from a rendered image, but I'm not able to code it in Python.

Even just getting the 3D coordinates of all the pixels in the rendered image would be a great help.

Dimali
Debaditya
  • I would be surprised (but interested) to know what sources mention view3d_utils for this. To me, this image could just as well be a projection of a flat image... so there is no way to interpret it as 3D vertices, unless you make some assumptions about the presumed geometry (for instance: what is aligned to what). – lemon Apr 12 '17 at 17:11
  • Do you have the 3D data or just the rendered image? If you don't have the geometry, then giving coordinates is impossible. And even if you do, you could just find these spots by hand in the viewport and look up their coordinates. – Dimali Apr 12 '17 at 17:37
  • @Dimali - Yes, I have the 3D data as well. Finding the spots by hand is not an option, as that has to be done for approximately 2000 such frames. Any other suggestions please? – Debaditya Apr 13 '17 at 03:30
  • @lemon - http://blender.stackexchange.com/questions/14770/how-do-i-get-a-python-reference-to-the-viewport-camera/14773

    There are a few more sources referring to the same function; if I find them I'll post them as well.

    – Debaditya Apr 13 '17 at 03:36
  • @Debaditya, ok, I misunderstood the subject before. Could you describe/provide all the input data you have (the 3d model, camera settings, rendered images samples, location of the points marked in red)? – lemon Apr 13 '17 at 05:53
  • @Lemon - I have the 3D model, the camera intrinsic parameters, and a rendered image sample as well. I need to know the coordinates of the vertices of the objects that are visible in the rendered image. If not the vertices, even getting the world (XYZ) coordinates of all the rendered pixels would be a great help. – Debaditya Apr 14 '17 at 10:50

3 Answers

12

A partial solution, due to some approximation in the ray cast (or some bug in my script?).

This solution uses 'world_to_camera_view' from the bpy_extras.object_utils module. 'world_to_camera_view' returns the projection of a vertex into the camera's coordinates, which means that the vertex is inside the camera view if the projected x and y coordinates are between 0 and 1.

From that, the script tests whether the corresponding vertex is actually visible from the camera (and not hidden by another part of the mesh). To do that, it casts a ray from the camera location towards the vertex.

Unfortunately, some of these ray casts fail... but this script is close to what you need, and that's why I provide it as an answer.

Hope that can help though.

import bpy
from mathutils import Vector
from mathutils.bvhtree import BVHTree
from bpy_extras.object_utils import world_to_camera_view

# Create a BVH tree and return bvh and vertices in world coordinates 
def BVHTreeAndVerticesInWorldFromObj( obj ):
    mWorld = obj.matrix_world
    vertsInWorld = [mWorld * v.co for v in obj.data.vertices]

    bvh = BVHTree.FromPolygons( vertsInWorld, [p.vertices for p in obj.data.polygons] )

    return bvh, vertsInWorld

# Deselect mesh polygons and vertices
def DeselectEdgesAndPolygons( obj ):
    for p in obj.data.polygons:
        p.select = False
    for e in obj.data.edges:
        e.select = False

# Get context elements: scene, camera and mesh
scene = bpy.context.scene
cam = bpy.data.objects['Camera']
obj = bpy.data.objects['Cube']

# Threshold to test if ray cast corresponds to the original vertex
limit = 0.0001

# Deselect mesh elements
DeselectEdgesAndPolygons( obj )

# In world coordinates, get a bvh tree and vertices
bvh, vertices = BVHTreeAndVerticesInWorldFromObj( obj )

print( '-------------------' )

for i, v in enumerate( vertices ):
    # Get the 2D projection of the vertex
    co2D = world_to_camera_view( scene, cam, v )

    # By default, deselect it
    obj.data.vertices[i].select = False

    # If inside the camera view
    if 0.0 <= co2D.x <= 1.0 and 0.0 <= co2D.y <= 1.0: 
        # Try a ray cast, in order to test the vertex visibility from the camera
        location, normal, index, distance = bvh.ray_cast( cam.location, (v - cam.location).normalized() )
        # If the ray hits something and if this hit is close to the vertex, we assume this is the vertex
        if location and (v - location).length < limit:
            obj.data.vertices[i].select = True

del bvh
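
As a small follow-up sketch (not part of the original answer), the world coordinates the question asks for can then be collected from the selection, assuming the script above has just been run:

# Collect the world coordinates of the vertices flagged as visible by the script above
visibleCoords = [vertices[i] for i, vert in enumerate(obj.data.vertices) if vert.select]
for co in visibleCoords:
    print(co.x, co.y, co.z)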

[Screenshot: the mesh in the viewport, with the vertices visible from the camera selected.]

lemon
  • Thank you so much @Lemon for finding the time to script this and give it some thought. I really appreciate it!

    On another note, I had to merge all the pieces of the code together to run it; is that the correct way to execute it?

    – Debaditya Apr 14 '17 at 15:30
  • @Debaditya, I don't know your current code context. As you can see, the code I provided uses 2 hard-coded objects, 'Cube' and 'Camera'. Surely these should be parameters in your code, but you can easily define a function for that. Btw, I hope I was clear enough about the ray cast problem: consider it to be optional depending on what you need. – lemon Apr 14 '17 at 15:49
  • Yes, the code works and the results are as expected. However, a few vertices are not selected, as you can see on the top-left vertex of your example. For now this is a good start for me; thanks again for the help. – Debaditya Apr 14 '17 at 15:52
  • Is there a way to get the indices of the edges that connect the selected vertices? – Debaditya Apr 20 '17 at 06:33
  • @Debaditya, yes, you can either create a lookup table (dictionary) for fast access, or use the bmesh module (https://docs.blender.org/api/2.78b/bmesh.types.html#bmesh.types.BMVert.link_edges), but I think that in both cases you'll have to calculate a "path" in order to find the connecting edges – lemon Apr 20 '17 at 06:47
  • Yes, I found a simple solution for getting the edge indices. I extracted all the vertices that are visible with your method and all the edges in the mesh, then checked whether both vertices of each edge are present in the list of visible vertices (a small sketch of this check is shown after these comments). – Debaditya May 14 '17 at 17:48
  • I was just curious: is there a specific reason you used a BVH tree for the ray_cast? Is there a difference from the normal ray_cast? – Debaditya Aug 08 '17 at 15:56
  • @Debaditya, I think using a BVH tree allows calculating an optimized structure once for all the ray casts, and I guess (but I did not check the code) that object.ray_cast involves additional computations for each cast. – lemon Aug 09 '17 at 07:09
  • Does anyone know how I can modify this code to return the set of 3D points for the whole mesh and not just the vertices? I don't understand Blender scripts much.

    Check this: https://blender.stackexchange.com/questions/190540/how-to-get-3d-points-of-a-mesh-in-world-space-coordinates

    – Radwa Khattab Aug 10 '20 at 13:10
  • I'm getting an error at the multiplication vertsInWorld = [mWorld * v.co for v in obj.data.vertices]: it can't multiply a matrix by a vector, and there is a type mismatch. – Radwa Khattab Aug 10 '20 at 20:24
  • @Rou, in 2.8 the Python version has changed, and the matrix/vector operator '*' (star) changed to '@'. You'll need to port that to make it work on 2.8+. – lemon Aug 11 '20 at 17:57
  • @lemon, okay, thanks. That worked. But I want to ask something: in which space are the vertices after that equation? And what would they be if I changed it to vertsInWorld = [mWorld @ v.co - cam.location for v in obj.data.vertices]? – Radwa Khattab Aug 13 '20 at 12:14
  • And what is the difference between using bm = bmesh.from_edit_mesh(obj.data) and then iterating over bm.vertices, versus iterating over obj.data.vertices directly? – Radwa Khattab Aug 13 '20 at 12:37
  • @Rou, bmesh is an alternate way to access mesh data, essentially useful for operating on it without bpy.ops (so faster). Here, as far as I remember, it should not make a difference. – lemon Aug 14 '20 at 14:19
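
A minimal sketch of the edge check described in the comments above, assuming the selection script from this answer has already been run on the same obj:

# Indices of the vertices currently flagged as visible (selected)
visibleVerts = {v.index for v in obj.data.vertices if v.select}

# An edge is kept when both of its vertices are visible
visibleEdges = [e.index for e in obj.data.edges
                if e.vertices[0] in visibleVerts and e.vertices[1] in visibleVerts]
print(visibleEdges)
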
2

@lemon - it turns out a few more users are suffering from the ray_cast error near contours and edges. This is resolved by adding a small cube at each vertex so the cast ray finds a proper hit, and by increasing the threshold distance that was used previously. I've used the scene.ray_cast function.

Below is a screenshot from the detection. A related post guided me, and I thank IPv6 for the helpful suggestion. Thanks again and cheers!

[Screenshot: the detection result, with the visible vertices selected.]

import bpy
from mathutils import Vector
from bpy_extras.object_utils import world_to_camera_view

# Deselect mesh polygons and vertices
def DeselectEdgesAndPolygons( obj ):
    for p in obj.data.polygons:
        p.select = False
    for e in obj.data.edges:
        e.select = False

# Get context elements: scene, camera and mesh
scene = bpy.context.scene
cam = bpy.data.objects['Camera']
obj = bpy.data.objects['Cube']

# Threshold to test if ray cast corresponds to the original vertex
limit = 0.1

# Deselect mesh elements
DeselectEdgesAndPolygons( obj )

# Get the vertices in world coordinates
mWorld = obj.matrix_world
vertices = [mWorld * v.co for v in obj.data.vertices]

print( '-------------------' )

for i, v in enumerate( vertices ):
    # Get the 2D projection of the vertex
    co2D = world_to_camera_view( scene, cam, v )

    # Add a small cube at the vertex so the cast ray has a proper surface to hit
    bpy.ops.mesh.primitive_cube_add(location=v)
    bpy.ops.transform.resize(value=(0.01, 0.01, 0.01))

    # By default, deselect it
    obj.data.vertices[i].select = False

    # If inside the camera view and in front of the camera
    if 0.0 <= co2D.x <= 1.0 and 0.0 <= co2D.y <= 1.0 and co2D.z > 0:
        # Try a ray cast, in order to test the vertex visibility from the camera
        hit = scene.ray_cast( cam.location, (v - cam.location).normalized() )
        # If the ray hits something and if this hit is close to the vertex, we assume this is the vertex
        if hit[0] and (v - hit[1]).length < limit:
            obj.data.vertices[i].select = True

UPDATE: The script modified for the new version of Blender is here: https://blender.stackexchange.com/a/87774/113612
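
The linked script is not reproduced here; as a rough, untested sketch, a Blender 2.8+ port of the BVH-based approach from the first answer could look like the following (the hard-coded 'Camera' and 'Cube' names are assumptions, and matrix multiplication uses '@' as noted in the comments above):

import bpy
from mathutils.bvhtree import BVHTree
from bpy_extras.object_utils import world_to_camera_view

scene = bpy.context.scene
cam = bpy.data.objects['Camera']
obj = bpy.data.objects['Cube']
limit = 0.1

# World-space vertices: '@' replaces '*' for matrix/vector multiplication in 2.8+
verts = [obj.matrix_world @ v.co for v in obj.data.vertices]
bvh = BVHTree.FromPolygons( verts, [p.vertices for p in obj.data.polygons] )

for i, v in enumerate( verts ):
    co2D = world_to_camera_view( scene, cam, v )
    obj.data.vertices[i].select = False
    # Inside the camera frame and in front of the camera
    if 0.0 <= co2D.x <= 1.0 and 0.0 <= co2D.y <= 1.0 and co2D.z > 0:
        # Ray cast from the camera towards the vertex
        location, normal, index, distance = bvh.ray_cast( cam.location, (v - cam.location).normalized() )
        if location and (v - location).length < limit:
            obj.data.vertices[i].select = True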

Debaditya
  • I had the same problem with the raycast test, and my solution was to cast backwards, from the vertex in question back toward the camera. The raycast test seems more reliable that way, as there isn't a nearby object or surface to confuse the test. – zippy Apr 07 '19 at 05:12
  • @zippy Thank you for the suggestion. I'd highly appreciate it if you could share the code for doing that as a separate answer! Thanks in advance. – Debaditya Apr 13 '19 at 02:37
  • 1
    @zippy I just realised you shared the code for that here: https://blender.stackexchange.com/questions/87754/ray-cast-function-not-able-to-select-all-the-vertices-in-camera-view – Debaditya Apr 13 '19 at 02:41
  • yes, similar concept over there. Sorry I'm not a python guy so I can't provide an actual code sample. – zippy Apr 13 '19 at 23:26
0

Select the mesh and activate edit mode with tab. Then make sure you are in vertex editing mode (instead of edge or face mode) and select one of the vertices.

If you do not have the 3D view's right panel active press the n key to make it visible. The coordinates of the selected vertex (or median of multiple) should be visible at the top of the n-panel. There is even a radio button beneath it to toggle between the global coordinate system and the object's local coordinate system (which can vary if there is any translation, rotation, or scaling on the object).
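
If the same lookup needs to be scripted rather than read from the N-panel, a minimal sketch could look like this (written for the 2.7x API used elsewhere in this thread; in 2.8+ the '*' becomes '@'):

import bpy

obj = bpy.context.active_object
mWorld = obj.matrix_world

# Selection flags are refreshed when leaving Edit Mode
for v in obj.data.vertices:
    if v.select:
        # v.co is in local coordinates; the world matrix converts it to global coordinates
        print(mWorld * v.co)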

Mutant Bob
  • Thank you for your reply. Yes, we can get the coordinates of a vertex the way you mentioned. But I need the coordinates of all the vertices of the edges automatically, as I need to do the same for more than 2000 images. – Debaditya Apr 13 '17 at 04:57
  • I can guarantee you that this will be terribly complicated. First of all, there is no algorithm to find these spots in the image so easily, as there is no clear pattern to your marked spots. There are many corners you did not mark, and finding only the right ones (by some rules you need to define), automated, would take at least a few hundred lines of code and lots of math. Second, recalculating the 3D coordinates is not as easy, as you will need either the Z-depth and camera matrix, or to find the closest intersection of the pixel ray by hand. And third, in what form do you need the coordinates? – Dimali Apr 13 '17 at 12:42
  • @Dimali - I need the world coordinates (XYZ) of all the vertices of the objects that are visible in the rendered image. Yes, I've missed marking all the vertices that are visible in the scene. – Debaditya Apr 14 '17 at 10:52