
I'm trying to get a list of the objects that are occluded from the camera's view. 'bpy.context.scene.ray_cast()' on its own does not help, since it also returns occluded verts/objects. I have visited some other answers, like here, here, and here, but did not find a solution yet.

I have also tried looping over all vertices and checking them one by one, but occluded verts still count as visible ones.

Attached is an example file (occluded_objects.blend).

I'm expecting only the Cylinder to be reported as occluded, but I'm getting wrong results back:

```
-----------------start---------------------
At obj:  1   Sphere
all_verts:  482
selected_verts:  311
At obj:  2   Cylinder
all_verts:  64
selected_verts:  46
At obj:  3   Cube2
all_verts:  8
selected_verts:  7
At obj:  4   Cube1
all_verts:  8
selected_verts:  7
Done visible objs are:  ['Sphere', 'Cylinder', 'Cube2', 'Cube1']
-----------------end---------------------
```

Any solution would be appreciated!


1 Answer


I will start with the issue in your code and then present my solution. In the future, I'd recommend pasting your code directly into the question, as that makes it more approachable without requiring others to download your blend file.

For others, here's the code you had issues with:

```
import bpy
import bmesh

print("-----------------start---------------------")
cam = bpy.data.objects['camera']
cam_pos = cam.location
scene = bpy.context.scene
visible_objs = []

# loop all mesh objects
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT')
obj_cout = 0
for obj in scene.objects:
    if obj.type == 'MESH' and obj.name not in ['ground']:
        obj_cout += 1
        print("At obj: ", obj_cout, " ", obj.name)

        # reset
        bpy.ops.object.mode_set(mode='OBJECT')
        bpy.ops.object.select_all(action='DESELECT')
        bpy.context.view_layer.objects.active = None

        # set active
        bpy.context.view_layer.objects.active = obj

        # go to edit mode
        bpy.ops.object.mode_set(mode='EDIT')
        obj = bpy.data.objects[obj.name]
        mesh = obj.data
        bm = bmesh.from_edit_mesh(mesh)

        selected_verts = 0
        all_verts = 0
        for v in bm.verts:
            all_verts += 1
            dir = cam_pos - v.co
            dir.normalize()
            hit, loc, normal, index, ob, mat = bpy.context.scene.ray_cast(
                bpy.context.view_layer.depsgraph, v.co + cam_pos * 0.00000001, dir)
            if not hit:
                v.select_set(True)
                selected_verts += 1
            else:
                v.select_set(False)

        if selected_verts > 0:
            visible_objs.append(obj.name)

        print("all_verts: ", all_verts)
        print("selected_verts: ", selected_verts)
        bpy.ops.object.mode_set(mode='OBJECT')

print("Done visible objs are: ", visible_objs)
print("-----------------end---------------------")
```

The main issue keeping your code from working correctly is that you are using the vertex coordinates in local space, but scene.ray_cast works with world-space coordinates. v.co is the vertex's position in local space, not world space. If you scale, rotate, or move (grab) the object, that transformation is stored separately as a world-space transformation. You need to convert the vertex's local coordinate to world space. To apply the world-matrix transformation, write the coordinate as:

```
transformed_vertex_coordinate = obj.matrix_world @ v.co
dir = cam_pos - transformed_vertex_coordinate
hit, _, _, _, ob, _ = bpy.context.scene.ray_cast(
    bpy.context.view_layer.depsgraph,
    transformed_vertex_coordinate + cam_pos * 0.00000001,
    dir)
```
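Put together, the inner vertex loop would look roughly like the sketch below. It simply reuses the variable names from your script (`obj`, `v`, `cam_pos`, `selected_verts`), and as a small deviation it offsets the ray origin a tiny step along the ray direction rather than along `cam_pos`, which is a common way to keep the ray from immediately hitting the vertex's own face:

```
# world-space position of the vertex
world_co = obj.matrix_world @ v.co

# direction from the vertex towards the camera
direction = cam_pos - world_co
direction.normalize()

# start the ray slightly off the surface so it does not hit the vertex's own geometry
hit, loc, normal, index, hit_obj, mat = bpy.context.scene.ray_cast(
    bpy.context.view_layer.depsgraph,
    world_co + direction * 0.0001,
    direction)

if not hit:
    v.select_set(True)
    selected_verts += 1
else:
    v.select_set(False)
```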

I already have code, based on my add-ons nView and nView Live, that I think you will find more useful and, in my opinion, more accurate. For any use case, you have to decide what determines visibility and how accurate the check needs to be. Vertex visibility is useful, but there are cases where it gives false negatives (say, if the vertices themselves are occluded or outside the camera frame, but their faces are not). So I usually just do a low-resolution ray cast from the camera. Combined with scene.ray_cast, you can check visibility independently of mesh resolution and customize the level of precision. Want more accuracy? Make the resolution higher. But you can usually get away with fewer rays, 25% or even 10% of the render resolution. Plus, it is much faster.

Enjoy:

```
import numpy as np
from mathutils import Vector
import bpy


def occlusion_test(scene, depsgraph, camera, resolution_x, resolution_y):
    # get vectors which define the view frustum of the camera
    top_right, _, bottom_left, top_left = camera.data.view_frame(scene=scene)

    camera_quaternion = camera.matrix_world.to_quaternion()
    camera_translation = camera.matrix_world.translation

    # get iteration range for both the x and y axes, sampled based on the resolution
    x_range = np.linspace(top_left[0], top_right[0], resolution_x)
    y_range = np.linspace(top_left[1], bottom_left[1], resolution_y)

    z_dir = top_left[2]

    hit_data = set()

    # iterate over all X/Y coordinates
    for x in x_range:
        for y in y_range:
            # get current pixel vector from camera center to pixel
            pixel_vector = Vector((x, y, z_dir))
            # rotate that vector according to camera rotation
            pixel_vector.rotate(camera_quaternion)
            pixel_vector.normalize()

            is_hit, _, _, _, hit_obj, _ = scene.ray_cast(depsgraph, camera_translation, pixel_vector)

            if is_hit:
                hit_data.add(hit_obj.name)

    return hit_data


context = bpy.context

# sampling resolution of raytracing from the camera
# usually scene objects are not pixel-sized, so you can get away with fewer pixels
res_ratio = 0.25
res_x = int(context.scene.render.resolution_x * res_ratio)
res_y = int(context.scene.render.resolution_y * res_ratio)

visible_objs = occlusion_test(context.scene, context.evaluated_depsgraph_get(),
                              context.scene.objects['Camera'], res_x, res_y)
print('Visible objects:', visible_objs)

# if you want the objects NOT seen, i.e. the occluded objects
invisible_objs = {o.name for o in context.scene.objects
                  if o.type == 'MESH' and o.name not in visible_objs}
print('Invisible objects:', invisible_objs)
```

Thanks for such a detailed answer @S. Magnusson, local space vs. world space was my missing piece here. – greenrod Jul 24 '22 at 08:42