
I'm writing an importer add-on. It works, but it's slow: the first sub-file takes 15 seconds to import, of which 10 seconds are spent in bpy.data.meshes.new().

I timed it and, as expected, it's fast at first but keeps getting slower.

While I can understand why (the scene is growing larger), is there anything I can do about it? For example, accessing the RNA directly, as this answer suggested?

Here are the timings for my first sub-file: the first call to bpy.data.meshes.new() takes 1 ms, the 3000th takes 7 ms.

(Plot: bpy.data.meshes.new timing per call)
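
A loop along these lines is enough to reproduce the per-call measurement (a minimal sketch, not the exact profiling code; the "probe" mesh names are just placeholders):

import time
import bpy

timings = []
for i in range(3000):
    t0 = time.perf_counter()
    bpy.data.meshes.new("probe_%d" % i)   # allocation only, no geometry is filled in
    timings.append(time.perf_counter() - t0)

print("first call: %.3f ms, 3000th call: %.3f ms"
      % (timings[0] * 1000, timings[-1] * 1000))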

Calvin1602
  • Related: https://blender.stackexchange.com/questions/7358/python-performance-with-blender-operators – CodeManX Jul 28 '14 at 06:59
  • I know, it even linked to this question already (see above). I'm just not familiar enough with the low-level APIs. – Calvin1602 Jul 28 '14 at 21:08

2 Answers


Not a real answer, more of a workaround: I manually merged the meshes by fiddling with the vertex arrays, so instead of creating 3000 meshes I only create one. That works in my use case.

If it can help someone else, it looks like this:

import bpy

lAllVertices = []
lAllTriangles = []

for fileToParse in allFilesToParse:
    lVertices, lTriangles = ParsePolygonalRep(fileToParse)

    # Before extending lAllTriangles, offset the indices so they point
    # into the merged vertex list
    lCurrentIndexCount = len(lAllVertices)
    lTriangles = [(x + lCurrentIndexCount, y + lCurrentIndexCount, z + lCurrentIndexCount)
                  for x, y, z in lTriangles]

    lAllVertices.extend(lVertices)
    lAllTriangles.extend(lTriangles)

# Build a single mesh from the merged arrays and link it to the scene
lMesh = bpy.data.meshes.new(lMeshName)
lMesh.from_pydata(lAllVertices, [], lAllTriangles)
ob = bpy.data.objects.new("tmp", lMesh)
bpy.context.scene.objects.link(ob)
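
As an optional extra (not part of the snippet above), Mesh.validate() and Mesh.update() can be called after from_pydata() to catch malformed geometry in the merged arrays:

lMesh.validate()   # drops malformed faces / out-of-range indices, if any
lMesh.update()     # recalculates derived data before the mesh is used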
Calvin1602

I'm currently working on a similar problem, and something I noticed is that the more memory Blender is using (tracked with the task manager), the slower everything gets. One way to reduce the amount of memory Blender uses is to save and reload the file (although that doesn't reduce memory usage as much as restarting Blender does).

My code looks something like this:

import bpy

filename = 'path/to/blend/file'
counter = 0
stepsize = 500

for t in data_to_import:
    # ... import this piece of data ...
    counter += 1
    if counter > stepsize:
        # combine the imported data and link it to the scene, then
        # save and reload the file to release memory
        bpy.ops.wm.save_mainfile(filepath=filename)
        bpy.ops.wm.open_mainfile(filepath=filename)
        counter = 0

Probably not the best solution, but it helped a bit in my case. Which value to choose for stepsize is also up for debate: my particular script takes 286 seconds with stepsize = 500, 87 seconds with stepsize = 100, but 114 seconds with stepsize = 30. I didn't check how much of that time was spent reloading the file, so it might also be faster with an SSD. And depending on your problem, you may also need to consider how much postprocessing you have to do.
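
If you want to compare stepsize values yourself, a plain timer around the whole run is enough; run_import below is a hypothetical wrapper around the loop above:

import time

for stepsize in (30, 100, 500):
    start = time.perf_counter()
    run_import(stepsize)   # hypothetical wrapper around the import loop above
    print("stepsize %d: %.1f s" % (stepsize, time.perf_counter() - start))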

ZeitPolizei
  • Interesting! I didn't think of it, but it may be the undo/redo feature... would you mind trying with a very small (but > 0) memory limit? – Calvin1602 Jul 28 '14 at 07:21