38

In my script, I have a for loop over many cube objects (~1000) and the processing is very slow. Looking at it in more detail, I noticed that over the same number of loop iterations:

  1. if I use Python operations or simple Blender data access like

    obj = bpy.data.objects[obj_name]
    

    or

    obj.select = True
    

    it takes less than 0.08s (for all 1000 objects)

  2. but as soon as I start using Blender operators like:

    bpy.ops.object.select_pattern() # or
    bpy.ops.object.duplicate() # or
    bpy.ops.object.location_clear() # or
    bpy.ops.object.transform_apply() # etc.
    

    then performance drops dramatically, to more than 6s for the same number of objects.

And one last piece of information: if I reduce the number of objects from 1000 to 50, the same set of operations with bpy.ops takes 0.03s - so extrapolating, it would be 0.6s for 1000 objects, not 6s. It is as if, with 1000 objects, we lose a factor of 10 in speed compared to 50 objects.

I tried reducing the complexity of my mesh by changing the cubes to planes, but it had no effect at all on performance.

Is there something particular to know to improve this performance, or a way to use the bpy.ops methods with better performance? I am obviously missing something important.

Amir
Salvatore

2 Answers

56

Most operators cause implicit scene updates. This means that every object in the scene is checked and updated if necessary. If you add mesh primitives using e.g. bpy.ops.mesh.primitive_cube_add() in a loop, every iteration creates a new cube and triggers a scene update, and Blender iterates over all objects in the scene, updating them as needed.

If you start with 0 objects, there will be 1 object in the first iteration and 1 object needs to be checked in the scene update. In the second iteration, there will be 2 objects and 2 will be checked; the first object was already checked in the first iteration (thus, 3 object updates in total). In the third iteration, there will be 3 objects and 1 + 2 + 3 = 6 objects will have been checked in total. In iteration 1000, there will be 1000 objects and 500,500 checks will have been carried out. Here's the formula, where n is the number of objects:

$\displaystyle \sum_{i=1}^{n} i = 1 + 2 + \dots + n = \frac{n(n + 1)}{2}$

As you can see, the runtime isn't linear; it could only be linear if there were a single update for every object after all have been added. To achieve better runtimes, you need to use the "low-level" API - RNA methods and attributes - instead of operators. With this approach, a scene update needs to be triggered manually, e.g. bpy.context.scene.update().
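The quadratic growth is easy to verify with plain Python (a quick sketch, independent of Blender):

```python
def total_checks(n):
    # iteration i triggers a scene update that touches i objects
    return sum(range(1, n + 1))  # closed form: n * (n + 1) // 2

print(total_checks(50))    # 1275
print(total_checks(1000))  # 500500
# 500500 / (20 * 1275) ~ 19.6: going from 50 to 1000 objects costs
# roughly 20x more per object, matching the slowdown observed in the question
```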

Many, but not all, operator calls can be replaced by "low-level" code. You can duplicate objects very efficiently like this:

    import bpy
    from mathutils import Vector
    
    ob = bpy.context.object
    obs = []
    sce = bpy.context.scene
    
    for i in range(-48, 48, 3):
        for j in range(-48, 48, 3):
            copy = ob.copy()
            copy.location += Vector((i, j, 0))
            copy.data = copy.data.copy()  # also duplicate mesh, remove for linked duplicate
            obs.append(copy)
    
    for ob in obs:
        sce.objects.link(ob)
    
    sce.update()  # don't place this in either of the above loops!

A good comparison of 4 different ways to do the same thing:

  1. bpy.ops.anim.keyframe_insert_menu() - don't ever use this in a script, it is solely to show a menu for the user

  2. bpy.ops.anim.keyframe_insert() - this is supposed to be used via the UI, not in scripts. Use operator calls only if there is no lower-level API!

  3. Object.keyframe_insert() - RNA method that can be called on an object, better

  4. The low-level way - add F-Curves and keyframe_points manually; fastest, but you need to do a lot yourself and handle several conditions (such as the object not having animation_data or an animation_data.action)
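A rough sketch of options 3 and 4 (this must be run inside Blender; the action name "ObAction" and the chosen frames/values are illustrative, not part of any API):

```python
import bpy

ob = bpy.context.object

# Option 3: RNA method - one call per keyframe, no operator overhead
ob.keyframe_insert(data_path="location", index=0, frame=1)

# Option 4: build the animation data, action and F-Curve by hand
if ob.animation_data is None:
    ob.animation_data_create()
if ob.animation_data.action is None:
    ob.animation_data.action = bpy.data.actions.new(name="ObAction")

fcu = ob.animation_data.action.fcurves.new(data_path="location", index=1)  # Y location
fcu.keyframe_points.add(count=2)
fcu.keyframe_points[0].co = (1.0, ob.location.y)
fcu.keyframe_points[1].co = (50.0, ob.location.y + 5.0)
fcu.update()  # recalculate handles after editing points directly
```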

Related (also examples included):

batFINGER
CodeManX
  • According to Blender Python API docs bpy.context.scene.objects.link runs scene.update every time it is invoked, so the last line in the code is superfluous and the whole situation is not very happy in that way. – user2683246 Oct 15 '14 at 12:55
  • The docs state "Link object to scene, run scene.update() after", so it's telling you that you are supposed to call scene.update() - did you read "runs..." by chance? Even if it did cause an update, performance is still great compared to operator calls. – CodeManX Oct 15 '14 at 15:55
  • Yes, I did read both "link..." and "run..." as "links..." and "runs..." in the docs. Am I right that the second part of the phrase is ambiguous? I'm not an English speaker, but suppose my reading to be a valid and more consistent one. Anyway I was to correct the last part of my comment. Yes, if objects are linked to a scene in a separate loop all at once, performance is really great. I've changed my script so that all the lamps are generated without bpy.ops and the execution time has reduced – user2683246 Oct 17 '14 at 07:22
    by half, from 30s to 15s. And there is more room for improvement, as I still use bpy.ops for generating meshes. The other observations are also mostly consistent with what you say. Linking objects in the same loop where they are created makes performance even slightly worse than when using bpy.ops. And scene.update() can really be omitted. It seems that when scene.objects.link(o) invocations are bunched together, scene.update() runs only once at the end. – iKlsR Oct 17 '14 at 11:40
  • 3
    Have you considered adding this to the Blender Manual? – dr. Sybren Apr 30 '18 at 12:20
  • What else calls scene.update() in the background? I had some problems with the matrix_world, which was not updated when I changed object.location but did not notice since in the rendered image everything was correct, but some intermediate calculated value was wrong. I don't know where exactly the scene was updated. – McLawrence May 23 '18 at 12:44
  • Good question. I'm afraid one can only tell for sure by going through Blender's native and Python code and inspect what actually happens under the hood. – CodeManX May 23 '18 at 15:47
  • 7
    in 2.8 sce.update() becomes dg = bpy.context.evaluated_depsgraph_get() dg.update() ref and sce.objects.link(ob) becomes bpy.context.collection.objects.link(ob) ref I was quite frankly shocked at the 5sec to 0.5sec improvement. – Emile Jan 24 '20 at 13:08
4

Is there something particular to know to improve these performance or a way to use the bpy.ops methods with better performances?

The top answer is correct: calling an operator updates the current view layer (generally twice, once before and once after), which takes time proportional to the size of that view layer.

So you can make it go faster by... not... doing that.

Fair warning: I have no idea what this breaks (something, surely), so use at your own risk.

    def run_ops_without_view_layer_update(func):
        from bpy.ops import _BPyOpsSubModOp
    
        view_layer_update = _BPyOpsSubModOp._view_layer_update
    
        def dummy_view_layer_update(context):
            pass
    
        try:
            _BPyOpsSubModOp._view_layer_update = dummy_view_layer_update
            func()
        finally:
            _BPyOpsSubModOp._view_layer_update = view_layer_update
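The core of this trick - temporarily swapping out a module attribute and restoring it in a finally block - can be demonstrated with a plain-Python stand-in, runnable outside Blender (FakeOps and suppress_updates are illustrative names, not Blender API):

```python
import contextlib

class FakeOps:
    # stand-in for bpy.ops internals; _view_layer_update is the hook we patch
    calls = []

    @staticmethod
    def _view_layer_update(context):
        FakeOps.calls.append("update")

@contextlib.contextmanager
def suppress_updates():
    original = FakeOps._view_layer_update
    FakeOps._view_layer_update = lambda context: None  # no-op replacement
    try:
        yield
    finally:
        FakeOps._view_layer_update = original  # always restored, even on error

FakeOps._view_layer_update(None)      # recorded
with suppress_updates():
    FakeOps._view_layer_update(None)  # suppressed
FakeOps._view_layer_update(None)      # recorded again
print(len(FakeOps.calls))  # → 2
```

The finally block matters: if func() raises, the real update hook is still put back, so one failed operator call can't leave Blender permanently skipping view-layer updates.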


Example usage

    import bpy
    
    def add_cubes():
        for i in range(-48, 48, 3):
            for j in range(-48, 48, 3):
                bpy.ops.mesh.primitive_cube_add(location=(i, j, 0))
    
    run_ops_without_view_layer_update(add_cubes)

For adding/importing many objects, it doesn't seem to cause any problems, and the speed difference is rather phenomenal.

  • add_cubes() : 21 s
  • run_ops_without_view_layer_update(add_cubes) : 1 s

(Everything in this answer tested with Blender 2.93.)

scurest