9

How do I get the coordinates of the corners of the camera border in the camera view, relative to the viewport area?

After checking

import bpy
print('-----')
print()
print()
for a in bpy.context.screen.areas:
    if a.type == 'VIEW_3D':
        for s in a.spaces:
            print(s)
            for d in dir(s):
                print(d)
print('-----')

I have no idea where I can get this from.

Sergey Krumas
  • 133
  • 1
  • 7
  • Rather than using that unwieldy python script, you can explore the structure using the "Datablocks" mode of the Outliner. I'm looking around to see if I can find a way to actually get the coordinates :) – linuxhackerman Jan 19 '14 at 16:21
  • 1
    Yes, I know about the Outliner, but the Outliner is for datablocks, and what I want to get is not stored in datablocks at all. 100 % information =) – Sergey Krumas Jan 19 '14 at 16:24
  • I'm not so sure it's possible to get it directly, you'd probably have to calculate it from the zoom and the camera's attributes... – linuxhackerman Jan 19 '14 at 16:31
  • Same thoughts, but it's too hard. Need to do a commit later. – Sergey Krumas Jan 19 '14 at 16:37
  • It's something like what I wanted. Thank you, guys. <An_Ony_Moose> CoDEmanX: bpy.data.screens['Default'].areas[2].spaces[0].region_3d.perspective_matrix and bpy.data.screens['Default'].areas[2].spaces[0].region_3d.view_matrix? Maybe? (you may have to change the 2) – Sergey Krumas Jan 19 '14 at 19:31
  • Now trying to get data from matrix with this http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/ – Sergey Krumas Jan 19 '14 at 19:36
  • And this snippet from F2 Addon – Sergey Krumas Jan 19 '14 at 19:38
  • world_pos = ob.matrix_world * vert.co.copy() screen_pos = view3d_utils.location_3d_to_region_2d(region, region_3d, world_pos) – Sergey Krumas Jan 19 '14 at 19:39
  • You've lost me, but if you're getting anywhere that's good :) – linuxhackerman Jan 19 '14 at 20:23
  • It should be possible to convert the NDC to screen coords using the projection matrix and W, according to some 3D graphics web posts. I'm unsure if perspective_matrix is the same as the projection matrix however, and have no clue where to take W from. Looking at the C code that draws the orange camera border, it appears to be quite complex to calculate. The easiest way would be to expose a new RNA method like Camera.view_frame for the border in C, to be called from Python. Dunno how to deal with the parameters yet (call on cam ob, pass scene and view3d area, without an actual border drawn on screen?) – CodeManX Jan 20 '14 at 10:10
  • 1
    Before getting the coordinates I can check area.spaces[0].region_3d.view_perspective == 'CAMERA'. It works cool =) – Sergey Krumas Jan 20 '14 at 11:02
  • Could you do a commit to 2.70? No matter what - through camera or viewport data =) – Sergey Krumas Jan 20 '14 at 11:04

2 Answers

8

There are two things to consider here:

  • The camera frame. The camera frame is not as simple as you might expect, since it's affected by the field of view, aspect ratio and X/Y shift.
  • The 2D view pixel coordinates. The user can pan and zoom the view, so these also have to be calculated.

This script gets the camera bounds and prints the pixel boundaries.

import bpy

def view3d_find():
    # returns first 3d view, normally we get from context
    for area in bpy.context.window.screen.areas:
        if area.type == 'VIEW_3D':
            v3d = area.spaces[0]
            rv3d = v3d.region_3d
            for region in area.regions:
                if region.type == 'WINDOW':
                    return region, rv3d
    return None, None

def view3d_camera_border(scene):
    obj = scene.camera
    cam = obj.data

    frame = cam.view_frame(scene=scene)

    # move from object-space into world-space
    frame = [obj.matrix_world @ v for v in frame]

    # move into pixelspace
    from bpy_extras.view3d_utils import location_3d_to_region_2d
    region, rv3d = view3d_find()
    frame_px = [location_3d_to_region_2d(region, rv3d, v) for v in frame]
    return frame_px

frame_px = view3d_camera_border(bpy.context.scene)
print("Camera frame:", frame_px)
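One of the comments on the question suggests checking that the view is actually looking through the camera before computing the border; a minimal usage sketch along those lines (the guard and the prints are only illustrative):

# only compute the border when the first 3D view is in camera perspective,
# as suggested in the comments on the question
region, rv3d = view3d_find()
if rv3d is not None and rv3d.view_perspective == 'CAMERA':
    print("Camera border (pixels):", view3d_camera_border(bpy.context.scene))
else:
    print("The first 3D View is not in camera perspective")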

See the API docs for the important functions used here: Camera.view_frame and bpy_extras.view3d_utils.location_3d_to_region_2d.

ideasman42
  • 47,387
  • 10
  • 141
  • 223
  • Thanks, that's what I actually needed! Your comment "# move into object space" confused me. Maybe the comment should be "moving into world space"? Because the view_frame function gives positions relative to the camera's object space, and to get pixel-space coordinates we must provide the positions of the points in world space, don't we? – Sergey Krumas Jul 05 '17 at 08:33
  • @sergey-krumas good point, done! – ideasman42 Jul 05 '17 at 22:19
  • 2.8x API changes will require a small change: frame = cam.view_frame(scene=scene) – chafouin Jul 03 '20 at 23:22
  • Thanks, updated. – ideasman42 Jul 04 '20 at 08:58
5

Workaround example

Add a plane to the scene to see the effect:

import bpy
from mathutils import Matrix
from bpy_extras import view3d_utils

mesh = bpy.data.objects['Plane'].data
camera = bpy.data.objects['Camera']
data = camera.data

frame = data.view_frame()
render = bpy.context.scene.render
ar = render.resolution_y / render.resolution_x

# assign the frame corners to the plane's vertices
# (vertices 2 and 3 are swapped to match the plane's vertex order)
mesh.vertices[0].co = frame[0]
mesh.vertices[1].co = frame[1]
mesh.vertices[2].co = frame[3]
mesh.vertices[3].co = frame[2]

scale = Matrix.Scale(ar, 4, (0.0,1.0,0.0))
mat = camera.matrix_world

mesh.transform(mat*scale)
mesh.update()

for area in bpy.context.screen.areas:
    if area.type=='VIEW_3D':
        break

space = area.spaces[0]
region = area.regions[4]  # assumed to be the 'WINDOW' region of the 3D View

points_on_screen = [
    view3d_utils.location_3d_to_region_2d(
        region,
        space.region_3d,
        v.co
        )
    for v in mesh.vertices
    ]

print(*points_on_screen, sep="\n")

You can apply the matrix transform directly to the vectors in camera.view_frame and use location_3d_to_region_2d to get the screen coordinates.

The plane is used for visualization.
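The plane is only there to make the result visible. A minimal sketch of the same calculation done directly on the camera.view_frame vectors, without the plane (assuming the same 2.7x-style * matrix multiplication as above, a camera object named 'Camera', and the first 3D View area as the target):

import bpy
from mathutils import Matrix
from bpy_extras import view3d_utils

camera = bpy.data.objects['Camera']
render = bpy.context.scene.render
ar = render.resolution_y / render.resolution_x
scale = Matrix.Scale(ar, 4, (0.0, 1.0, 0.0))

# first 3D View area and its 'WINDOW' region
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        break
region = next(r for r in area.regions if r.type == 'WINDOW')
rv3d = area.spaces[0].region_3d

# transform the frame corners exactly like the plane above (matrix_world * scale)
corners = [camera.matrix_world * scale * v for v in camera.data.view_frame()]
print(*[view3d_utils.location_3d_to_region_2d(region, rv3d, co) for co in corners], sep="\n")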

BTW: To get the W component you have to expand the vectors to 4D before multiplication:

v = Vector((0.0, 0.0, 0.0))
v.to_4d()
# Vector((0.0, 0.0, 0.0, 1.0))

space_data.region_3d.perspective_matrix seems to be already multiplied with the view matrix. You can reverse it like this:

perspective_matrix * view_matrix.inverted()

So

ndc = [None] * 4
for i, v in enumerate(camera.data.view_frame()):
    ndc[i] = perspective_matrix * matrix_world * scale * v.to_4d()
    ndc[i] /= ndc[i][3]

should give you the NDC coordinates.
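If you then want region (pixel) coordinates instead of NDC, the usual viewport mapping can be applied; a small sketch, assuming region is the 'WINDOW' region of the 3D View as in the script above:

# map NDC x/y from [-1, 1] to region pixel coordinates
points_px = [((v[0] + 1.0) / 2.0 * region.width,
              (v[1] + 1.0) / 2.0 * region.height) for v in ndc]
print(*points_px, sep="\n")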

CodeManX
  • 29,298
  • 3
  • 89
  • 128
pink vertex
  • 9,896
  • 1
  • 25
  • 44
  • Awesome pink vertex! This is exactly the calculation! I turned it into a modal draw op to visualize the result without a plane, fixed a problem with aspect ratio > 1.0 and added a pink vertex in each corner =D It does not take Shift Y into account however, any idea how to solve this? http://www.pasteall.org/49117/python (Run Script, spacebar menu over 3D View, "modal", switch to camera view with Numpad 0!) – CodeManX Jan 28 '14 at 22:00
  • Shift Y screws up the scaling because (0.0,0.0) is not the center anymore. You can fix this by a translation. – pink vertex Jan 29 '14 at 09:13
  • 1
    center = sum(camera.view_frame(), Vector((0.0, 0.0, 0.0))) / 4, then create a translation matrix tmat = Matrix.Translation(Vector((0.0, -center[1], 0.0))) and multiply it like this: tmat.inverted() * scale * tmat (see the sketch below). You can simplify this by computing it by hand and setting the entry in the scale matrix directly. – pink vertex Jan 29 '14 at 09:28
  • Great! Integrated the manual way into my script and accounted for aspect ratio > 1.0 again: http://www.pasteall.org/49139/python Note that you need to be in camera perspective already if you want to use the get-points function standalone (or it won't return the correct locations). – CodeManX Jan 29 '14 at 15:50
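For reference, the Shift-Y fix described in the comments above, assembled into a runnable sketch (assuming an object named 'Camera' and the 2.7x-style * matrix multiplication used in this answer):

import bpy
from mathutils import Matrix, Vector

camera = bpy.data.objects['Camera']
render = bpy.context.scene.render
ar = render.resolution_y / render.resolution_x
scale = Matrix.Scale(ar, 4, (0.0, 1.0, 0.0))

# scale about the frame center instead of the origin, so a non-zero Shift Y
# does not throw off the result (translate, scale, translate back)
center = sum(camera.data.view_frame(), Vector((0.0, 0.0, 0.0))) / 4
tmat = Matrix.Translation(Vector((0.0, -center[1], 0.0)))
corrected_scale = tmat.inverted() * scale * tmat  # use this in place of 'scale'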