I am pushing around 38k to 50k cube instances into the scene using Python. I reasonably expected the code to finish within ten minutes or so, and an hour at most would be acceptable, but it is taking 11 hours to finish.
Here is the code where I place the cube instances:
import bpy
import random
import numpy as np

for l in localizations:
    i = 0          # z-height, incremented once per generation
    buffer = []    # keeps only the last five generations
    gens = random.randint(6, 16)
    probability = np.random.uniform(.1, .8, 1)
    status = initial_status(gens, probability)
    # iterate until every element of `status` has converged to 0
    while not np.all(status == 0):
        status = make(status)
        buffer.append(status)
        if len(buffer) > 5:
            buffer.pop(0)
        for j in range(status.shape[0]):
            for k in range(status.shape[1]):
                # first cube: the scene still contains only the plane, so
                # add a real cube mesh, assign the materials, and build
                # the "cubes" collection from it
                if status[j, k] and len(bpy.context.scene.objects) == 1:
                    bpy.ops.mesh.primitive_cube_add(size=1,
                                                    location=(l[0]+j, l[1]+k, i),
                                                    scale=(1, 1, 1))
                    mat = bpy.data.materials.get("Material")
                    mat1 = bpy.data.materials.get("Material.001")
                    for ob in bpy.context.scene.objects:
                        if ob.name.startswith("Cube"):
                            ob.data.materials.append(mat)
                        if ob.name.startswith("Plane"):
                            ob.data.materials.append(mat1)
                    bpy.ops.collection.create(name="cubes")
                # every later cube is just an instance of that collection
                if status[j, k] and len(bpy.context.scene.objects) > 1:
                    bpy.ops.object.collection_instance_add(collection='cubes',
                                                           location=(l[0]+j, l[1]+k, i),
                                                           scale=(1, 1, 1))
        i += 1
Whether I push mesh.primitive_cube_add or collection_instance_add, each call gets slower and slower as the scene grows; the slowdown looks worse than linear. I honestly cannot understand this.
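To illustrate what I mean, here is a minimal micro-benchmark sketch (not part of my actual script; the batch size is arbitrary) that times successive batches of operator calls. If the per-call cost were constant, every batch would take roughly the same time:

import time
import bpy

# Hypothetical micro-benchmark: time successive batches of 1,000 operator
# calls; later batches take noticeably longer as the scene fills up.
for batch in range(5):
    t0 = time.perf_counter()
    for _ in range(1000):
        bpy.ops.mesh.primitive_cube_add(size=1)
    print(f"batch {batch}: {time.perf_counter() - t0:.2f}s")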
The function make contains only two np.where calls, one multiply, and one sum of two arrays, so it has essentially no mentionable effect on the computational cost (see the timing sketch below). In addition, the arrays that the two nested loops check element by element are at most 16x16, which is also small enough not to matter.

One extra question: in the "Current File" section of Blender's Outliner, the Objects folder lists about 40k objects separately, and under Scene, the Scene Collection and Objects sections each list 40k elements as well. In total that shows 120k entries, but they are three copies of the same elements. Does this make sense?
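Here is that timing sketch: the arrays and thresholds are made up, but it has the same operation mix as make (two np.where, one multiply, one sum) on 16x16 arrays, which shows the numpy side is negligible next to 11 hours:

import time
import numpy as np

a = np.random.rand(16, 16)
b = np.random.rand(16, 16)

t0 = time.perf_counter()
for _ in range(1_000_000):
    # same operation mix as `make`: two np.where, one multiply, one sum
    c = np.where(a > .5, a, 0) + np.where(b > .5, b, 0) * a
print(f"1M iterations: {time.perf_counter() - t0:.2f}s")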
Thanks in advance, stay positive.
Comments from the thread:

"bpy.ops.mesh.primitive_cube_add runs one time, if the whole scene has only one object, which is the plane itself; I add it outside of the loop. The rest of the cubes are just copied versions of the collection." – merkwur Mar 01 '21 at 13:57

"bpy.ops.object.collection_instance_add is also an operator (and is unrequired; it can all be done via API methods)." – batFINGER Mar 01 '21 at 14:12

"bpy.ops.object.collection_instance_add runs approx 30k to 50k times, because that is how many cubes I need in the scene. The rest of the code has no effect on the computation, as I mentioned: the make function has just one summation, one multiplication and two np.where, which is a condition. When you run np.dot on 16x16 arrays a million times it takes 1 sec or so, so no effect. If you must know, len(localizations) is around 60; the while loop continues until all the array elements converge to 0, and it pushes the cubes at the locations where the array element is greater than 0." – merkwur Mar 01 '21 at 14:52

"for i in range(30000): bpy.ops.some.operator() – IMO this is confirming. See Python performance with blender operators; it pertains to any blender operator." – batFINGER Mar 01 '21 at 14:58
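Following batFINGER's point that this can all be done via API methods, here is a minimal sketch of the data-API route, assuming the "cubes" collection already exists; locations is a placeholder for the (x, y, z) tuples computed in the loop above. Creating instance empties through bpy.data avoids the per-call overhead that operators carry:

import bpy

# A sketch of the data-API route batFINGER's comment points at, assuming
# the "cubes" collection already exists; `locations` stands in for the
# coordinates computed in the question's loop.
cubes = bpy.data.collections["cubes"]
scene_coll = bpy.context.scene.collection

for loc in locations:
    inst = bpy.data.objects.new("CubeInstance", None)  # empty object, no mesh
    inst.instance_type = 'COLLECTION'
    inst.instance_collection = cubes
    inst.location = loc
    scene_coll.objects.link(inst)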