Workaround example
Add a plane named 'Plane' (the default name) to the scene to see the effect:
```python
import bpy
from mathutils import Matrix
from bpy_extras import view3d_utils

mesh = bpy.data.objects['Plane'].data
camera = bpy.data.objects['Camera']
frame = camera.data.view_frame()

# Correct for the render aspect ratio
render = bpy.context.scene.render
ar = render.resolution_y / render.resolution_x

# Map the four view-frame corners onto the plane's vertices
mesh.vertices[0].co = frame[0]
mesh.vertices[1].co = frame[1]
mesh.vertices[2].co = frame[3]
mesh.vertices[3].co = frame[2]

scale = Matrix.Scale(ar, 4, (0.0, 1.0, 0.0))
mat = camera.matrix_world
mesh.transform(mat * scale)  # in Blender 2.80+ use `mat @ scale`
mesh.update()

# Find the 3D viewport and its window region
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        break
space = area.spaces[0]
region = next(r for r in area.regions if r.type == 'WINDOW')

points_on_screen = [
    view3d_utils.location_3d_to_region_2d(region, space.region_3d, v.co)
    for v in mesh.vertices
]
print(*points_on_screen, sep="\n")
```
You can apply the matrix transform directly to the vectors returned by `camera.data.view_frame()` and use `view3d_utils.location_3d_to_region_2d()` to get the screen coordinates. The plane is only used for visualization.
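Under the hood, `location_3d_to_region_2d` multiplies the point by the region's perspective matrix and maps the resulting NDC into pixel coordinates. A minimal pure-Python sketch of that mapping (hypothetical matrix and region size, no `bpy` required):

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def location_3d_to_region_2d_sketch(perspective_matrix, region_width, region_height, co):
    """Project a 3D point to region (pixel) coordinates, roughly what
    bpy_extras.view3d_utils.location_3d_to_region_2d does."""
    x, y, z, w = mat_vec(perspective_matrix, [co[0], co[1], co[2], 1.0])
    if w <= 0.0:
        return None  # point is behind the view
    # NDC in [-1, 1] -> pixels in [0, width] / [0, height]
    return (region_width / 2 * (1 + x / w),
            region_height / 2 * (1 + y / w))

# With an identity "projection", a point at x=0.5, y=-0.5 lands
# right of and below the region center:
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(location_3d_to_region_2d_sketch(identity, 800, 600, (0.5, -0.5, 0.0)))
# -> (600.0, 150.0)
```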
BTW: To get the W component you have to extend the vectors to 4D before multiplication. Note that `to_4d()` returns a new vector, it does not modify `v` in place:

```python
v = Vector((0.0, 0.0, 0.0))
v.to_4d()
# Vector((0.0, 0.0, 0.0, 1.0))
```
`space_data.region_3d.perspective_matrix` seems to be the projection (window) matrix already multiplied with the view matrix. You can recover the projection matrix like this:

```python
projection_matrix = perspective_matrix * view_matrix.inverted()
```
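To see why this works: if `perspective_matrix` is `window_matrix * view_matrix`, right-multiplying by the inverse view matrix cancels the view term. A toy check with plain 4x4 lists (no `mathutils`; diagonal stand-in matrices so the inverse is trivial):

```python
def mat_mul(a, b):
    """4x4 row-major matrix product."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def diag(*d):
    """Diagonal 4x4 matrix from four values."""
    return [[d[r] if r == c else 0.0 for c in range(4)] for r in range(4)]

window = diag(2.0, 3.0, 1.0, 1.0)    # stand-in projection matrix
view = diag(0.5, 0.5, 0.5, 1.0)      # stand-in view matrix
view_inv = diag(2.0, 2.0, 2.0, 1.0)  # its inverse

perspective = mat_mul(window, view)  # what region_3d.perspective_matrix holds
print(mat_mul(perspective, view_inv) == window)
# -> True
```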
So

```python
ndc = [None] * 4
for i, v in enumerate(camera.data.view_frame()):
    # matrix_world is camera.matrix_world; scale is the aspect-ratio matrix from above
    ndc[i] = perspective_matrix * matrix_world * scale * v.to_4d()
    ndc[i] /= ndc[i][3]  # perspective divide by W
```

should give you the NDC coordinates.
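For reference on where W comes from: with a standard OpenGL-style perspective matrix, the clip-space W is simply the eye-space depth (−z for a camera looking down −Z, as Blender cameras do), which is exactly what the divide above normalizes by. A self-contained sketch with hypothetical frustum values, no `bpy` required:

```python
import math

def perspective(fovy_deg, aspect, near, far):
    """Build a standard OpenGL-style perspective matrix (row-major)."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0, 0, -1, 0]]  # this row makes w_clip = -z_eye

def project(m, p):
    """Transform a homogeneous point and apply the perspective divide."""
    x, y, z, w = (sum(m[r][c] * p[c] for c in range(4)) for r in range(4))
    return [x / w, y / w, z / w]

m = perspective(90.0, 1.0, 0.1, 100.0)
# A point 10 units in front of the camera: w becomes 10,
# and x = 5 ends up at 0.5 in NDC (within float precision)
print(project(m, [5.0, 0.0, -10.0, 1.0]))
```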
`perspective_matrix` is the same as the projection matrix however, and no clue from where to take W. Looking at the C code that draws the orange camera border, it appears to be quite complex to calculate. The easiest way would be to expose a new RNA method like `Camera.view_frame` for the border in C, to be called from Python. Dunno how to deal with the parameters yet (call on cam ob, pass scene and view3d area, without an actual border drawn on screen?) – CodeManX Jan 20 '14 at 10:10