6

I'm new to Blender. I'm currently working on a 3D reconstruction project using neural networks, and I need to create my data set. Therefore I need to render a 3D body scan from different angles and save the images separately. I can render by creating cameras manually, but once I create a loop for the cameras my output is black images. This is the code I used:

bpy.ops.object.camera_add(view_align=False, location=[0,10,20], rotation=[0.436,0,pi])
bpy.ops.object.camera_add(view_align=False, location=[10,20,30], rotation=[0.436,0,pi])



for ob in scene.objects:
    if ob.type == 'CAMERA':
        bpy.context.scene.camera = ob
        print('Set camera %s' % ob.name )
        file = os.path.join('/home/fotofinder/Downloads/tryrender/images', ob.name )
        bpy.context.scene.render.filepath = file
        bpy.ops.render.render( write_still=True ) 

If anyone can help me define the proper parameters for the cameras so that the object is always in the image, I would be grateful.

Robin Betts
wissal saihi
  • Would it not be simpler to animate a single camera from viewpoint to viewpoint, (possibly animating its target, too,) and save out as a sequence of frames? – Robin Betts Jan 31 '19 at 10:16
  • 2
    Yeah you could constrain the camera to target object by position then orbit by constraint. Don't know if you can use a driver to set FOV to object bounding box – 3pointedit Jan 31 '19 at 10:28
  • 1
    As an alternative to animating a single camera, you could even set the coordinates from your Python script and just move that one camera around by setting camera.location. – dr. Sybren Jan 31 '19 at 11:43

3 Answers

11

Could you take an approach something like this?

  • Create a mesh whose vertices are the desired viewpoint positions, around your body scan. (It's the sphere, in the illustration, named 'Viewpoints')
  • Create one camera, and assign it a 'Track To' constraint ('To': -Z, 'Up': Y) with the body scan as target

[illustration: the 'Viewpoints' sphere around the body scan, with the tracked camera]

Run a script which keyframes the camera to each of the vertex locations in successive frames

import bpy

# Mesh whose vertices are the desired camera positions
vp_obj = bpy.data.objects['Viewpoints']
cam_obj = bpy.data.objects['Camera']
vp_vs = vp_obj.data.vertices

bpy.context.scene.frame_start = 1
bpy.context.scene.frame_end = len(vp_vs)

for f, v in enumerate(vp_vs, 1):
    # In 2.8+, use vp_obj.matrix_world @ v.co instead of *
    cam_obj.location = vp_obj.matrix_world * v.co
    cam_obj.keyframe_insert(data_path="location", frame=f)

.. and render the animation?

[render of the resulting animation]

Robin Betts
  • 1
    Cool, ummed and arred whether to go this way. via a single subdivided cube for quadrants. – batFINGER Jan 31 '19 at 18:39
  • I created a UV sphere and defined it as viewpoints, yet the script fails to give me results similar to yours. I used this to create the sphere: – wissal saihi Feb 06 '19 at 12:48

        # create a sphere for viewpoints
        sphere_statut = bpy.ops.mesh.primitive_uv_sphere_add(
            segments=32, ring_count=16, size=10, view_align=False,
            enter_editmode=False, location=(0, 0, 0), rotation=(0, 0, 0),
            layers=(True,) + (False,) * 19)
        bpy.context.active_object.show_wire = True

        # define sphere as view points
        for obj in bpy.context.selected_objects:
            if obj.name == 'Sphere':
                vp_obj = obj

    – wissal saihi Feb 06 '19 at 12:49
  • @wissalsaihi Why are you generating the sphere by script? I was imagining you would want to create and name the viewpoint object and camera through the UI, since it's quick, and you would want to see the target's scale, the camera's field of view, etc, before shooting 500-odd frames. I can edit to include how to produce the sphere by script.. but do you actually want more than that? Are you, for instance, wanting to run all this from outside Blender altogether? – Robin Betts Feb 06 '19 at 15:40
  • @RobinBetts I'm trying to do everything outside of blender as this script is a part of my machine learning project. Creating the viewpoints and everything by script would be ideal for me. – wissal saihi Feb 07 '19 at 16:30
  • @wissalsaihi .. you mean.. you don't want to open Blender at all? Run it as part of another application? – Robin Betts Feb 07 '19 at 18:21
  • I need to run it on a terminal (ubuntu terminal) That would be ideal – wissal saihi Feb 08 '19 at 07:33
  • 2
    @wissalsaihi perfectly possible, but involves quite a few steps: orderly import of objects - possibly getting their convex hulls to optimize fov calculation - deciding whether zoom or track to fit image (different distortions, how do you want the frame filled for a good learning set?) many variables, some dependent on the nature of your data-set, not all of which you can give at once in one BSE question,( IMHO?) I think you may have to break this up into sub-tasks... – Robin Betts Feb 08 '19 at 12:14
6
import bpy
import os
from math import *
from mathutils import *

#set your own target here
target = bpy.data.objects['Cube']
cam = bpy.data.objects['Camera']
t_loc_x = target.location.x
t_loc_y = target.location.y
cam_loc_x = cam.location.x
cam_loc_y = cam.location.y

#dist = sqrt((t_loc_x-cam_loc_x)**2+(t_loc_y-cam_loc_y)**2)
dist = (target.location.xy-cam.location.xy).length
#ugly fix to get the initial angle right
init_angle  = (1-2*bool((cam_loc_y-t_loc_y)<0))*acos((cam_loc_x-t_loc_x)/dist)-2*pi*bool((cam_loc_y-t_loc_y)<0)

num_steps = 36 #how many rotation steps
for x in range(num_steps):
    alpha = init_angle + (x+1)*2*pi/num_steps
    cam.rotation_euler[2] = pi/2+alpha
    cam.location.x = t_loc_x+cos(alpha)*dist
    cam.location.y = t_loc_y+sin(alpha)*dist
    file = os.path.join('/home/fotofinder/Downloads/tryrender/images', str(x))
    bpy.context.scene.render.filepath = file
    bpy.ops.render.render( write_still=True ) 

Old answer

I created a new blendfile (with the default cube) changed the render engine to cycles and changed the code as follows:

#Run with "blender -b TARGET.blend -P thisfile.py"

#Added imports 
import bpy
from mathutils import *
from math import *
import os
scene = bpy.context.scene

bpy.ops.object.camera_add(view_align=False, location=[0,10,20], rotation=[0.436,0,pi])
bpy.ops.object.camera_add(view_align=False, location=[10,20,30], rotation=[0.436,0,pi])



for ob in scene.objects:
    if ob.type == 'CAMERA':
        bpy.context.scene.camera = ob
        print('Set camera %s' % ob.name )
        file = os.path.join('/home/fotofinder/Downloads/tryrender/images', ob.name )
        bpy.context.scene.render.filepath = file
        bpy.ops.render.render( write_still=True ) 

results were as expected:

[renders: Camera.png, Camera.001.png, Camera.002.png]

So even if it's not pretty, your script works. I would assume your background is black and the cameras are misaligned.

Some people say it is important to avoid bpy.ops commands. I would assume the best practice for creating new cameras would be:

from math import pi
from mathutils import Euler

cam = bpy.data.cameras.new("Camera")
cam_ob = bpy.data.objects.new("Camera", cam)
bpy.context.collection.objects.link(cam_ob)
# To set location and rotation:
cam_ob.location = (6, 26, 14)
cam_ob.rotation_euler = Euler((62.0*pi/180, 0.0*pi/180, 167*pi/180))
miceterminator
  • Thank you very much for your answer. My script works and gives me images for the cameras I defined manually. I need to either create a lot of cameras via a loop or rotate one camera on the object and create multiple images. I need to render a loooot of images to train my network – wissal saihi Jan 31 '19 at 13:41
  • Sorry, I focused on the "I can render creating cameras manually but once i create a loop for the cameras my output is black images". – miceterminator Jan 31 '19 at 13:46
  • No worries, I'm still struggling with getting images from different angles. Now I'm actually trying to make one camera rotate around the object. I don't know how that will work yet but I keep looking :D – wissal saihi Jan 31 '19 at 13:51
  • updated the answer to kind of help with the problem – miceterminator Jan 31 '19 at 16:13
  • 1
    Please note, with mathutils.Vector using the Euclidean formula is unnecessary : dist = (tloc.xy - camloc.xy).length – batFINGER Jan 31 '19 at 18:34
  • Thank you very much so far it works. can you help me make it rotate on Z axis also so i can get views from top and bottom? – wissal saihi Feb 06 '19 at 12:34
  • Essentially it is about finding the new position for the camera and aligning it to the target. What makes my code complicated is that it takes an offset into account, and doesn't assume a lot about the camera. If you can make sure the object of interest is at the origin, and the camera starts on, and is aligned with, the Y axis, this is a lot easier. Lastly, there needs to be a path for the camera to follow, i.e. how do you want to sample the spherical coordinate space. – miceterminator Feb 07 '19 at 09:08
3

Make a dolly

There are already a number of answers re rendering from different angles.

How to automatically render from several camera angles?

Here is a take that sets up an empty on your object as a dolly. A camera is parented to the empty, such that when the empty has no rotation the camera gives a front view.

Adjusting the empty's rotation X and Z is the equivalent of latitude and longitude.

[screenshot: the empty 'dolly' with its parented camera]

Here is a helper script to add the empty camera setup in 2.80

import bpy
from math import radians
context = bpy.context

bpy.ops.object.empty_add(location=(0, 0, 0))
mt = context.object
mt.empty_display_type = 'ARROWS'
mt.empty_display_size = 4
bpy.ops.object.camera_add(location=(0, -1, 0))
cam = context.object
cam.rotation_euler = (radians(90), 0, radians(0))
cam.parent = mt
cam.data.type = 'ORTHO'
context.scene.camera = cam

I have made the camera ORTHO to take advantage of the method outlined in

Check if the whole plane is being on a orthographic camera render (or get a proportion of the rendered plane)

to scale the camera to fit the mesh.

[render: final result of running the script below, after setting up the camera above. Whoops, notice 45 lat is south, needs a minor fix 8^). The camera is scaled such that the whole object fits.]

Test script. Select object to render. I have hardcoded in two latitude longitude pairs, (0, 0) and (45, 45).

import bpy
from mathutils import Vector
from math import radians
context = bpy.context
dg = context.evaluated_depsgraph_get()
scene = context.scene
cam_ob = scene.camera
# make sure to run the setup script above first, so the camera has a parent empty
mt = cam_ob.parent

plane = context.object
mt.parent = plane

pmw = plane.matrix_world
bbox = [Vector(b) for b in plane.bound_box]
plane_co = sum(bbox, Vector()) / 8
cam_ob.location.y = (pmw @ bbox[0]).length

coords = [t for b in plane.bound_box for t in pmw @ Vector(b)]

for lat, lon in ((0, 0),(45, 45)):
    mt.rotation_euler = (radians(lat), 0, radians(lon))
    dg.update()
    v, scale = cam_ob.camera_fit_coords(dg, coords)

    cam_ob.data.ortho_scale = scale
    cam_ob.matrix_world.translation = v
    # render

Now we only need to input the latitude and longitude of the camera.

  • Assumes the object's origin is centre of bounding box. The script sets the object as the parent of the camera "dolly" empty.

  • If you are using 2.79 or prior, replace any occurrence of @ with * and context.depsgraph with context.scene

batFINGER
  • .. I should be learning in 2.8, not 2.79, i guess ... far be it from me.. is camera_fit_coords(context.depsgraph, coords) 2.79 OK? – Robin Betts Jan 31 '19 at 19:08
  • 1
    No, edited...., – batFINGER Jan 31 '19 at 19:19
  • I made some changes to your proposed solution but somehow I don't get the results you had. So basically I create a UV sphere and use it as viewpoints, then I try to render and the images are all black. I hope you can help me with that by explaining your procedure – wissal saihi Feb 05 '19 at 08:20
  • The script above leaves the camera in last assigned position. Use to check alignment and also whether a UI render is all black. Are there lights in the scene for example? – batFINGER Feb 05 '19 at 08:27