
I want to duplicate an existing animation and bake it with visual transforms applied, something like what the default NLA baker does. For simplicity's sake, say I need to do this for a single bone. This is how I tried to do it:

import bpy

object = bpy.context.object  # the armature object

sourceAction = bpy.data.actions["myAction"]
action = bpy.data.actions.new("newAction")
fcurves = action.fcurves

poseBone = object.pose.bones["bone"]
bone = object.data.bones["bone"]

curveLocX = fcurves.new('pose.bones["bone"].location', 0, "bone")
curveLocY = fcurves.new('pose.bones["bone"].location', 1, "bone")
curveLocZ = fcurves.new('pose.bones["bone"].location', 2, "bone")
curveRotX = fcurves.new('pose.bones["bone"].rotation_euler', 0, "bone")
curveRotY = fcurves.new('pose.bones["bone"].rotation_euler', 1, "bone")
curveRotZ = fcurves.new('pose.bones["bone"].rotation_euler', 2, "bone")

#iterate through each frame somehow
    #get the matrix for current frame
    matrix = bone.matrix_local.inverted() @ poseBone.matrix  # '@' for matrix multiplication in 2.8+
    loc = matrix.translation
    rot = matrix.to_euler()

    curveLocX.keyframe_points.insert(frame, loc.x)
    curveLocY.keyframe_points.insert(frame, loc.y)
    curveLocZ.keyframe_points.insert(frame, loc.z)
    curveRotX.keyframe_points.insert(frame, rot.x)
    curveRotY.keyframe_points.insert(frame, rot.y)
    curveRotZ.keyframe_points.insert(frame, rot.z)

I'm not sure how to iterate through the frames to get the correct matrix for that frame. Also I'm not sure if there's a better way to get the visual transform channels.

1 Answer


Convert Space

An example of using Object.convert_space to test with. Add a bone constraint, then run the script in Pose Mode with at least one pose bone selected. You should see that the bone remains in place and all constraint influences are zeroed. Note that I have used the default quaternion rotation mode.

import bpy

context = bpy.context
ob = context.object

for pb in context.selected_pose_bones_from_active_object:
    # visual (constraint-evaluated) pose matrix converted into the
    # bone's own local space
    M = ob.convert_space(
            pose_bone=pb,
            matrix=pb.matrix,
            from_space='POSE',
            to_space='LOCAL_WITH_PARENT',
            )

    if pb.constraints:
        # zero the constraints, then apply the converted matrix so the
        # bone stays put without them
        for c in pb.constraints:
            c.influence = 0
        loc, rot, scale = M.decompose()
        pb.location = loc
        pb.rotation_quaternion = rot
        pb.scale = scale

Baking to fcurve

Similarly, this sets each frame in the scene frame range and stores the matrix, calculated as above, for each selected pose bone at every frame.

Then create an action and keyframe it from the data. I have only added location here.

import bpy
import numpy as np
from collections import defaultdict

context = bpy.context
scene = context.scene
ob = context.object 
frames = np.arange(scene.frame_start, scene.frame_end + 1)

data = defaultdict(list)  # pose bone -> list of per-frame matrices

for f in frames:
    scene.frame_set(f)
    for pb in context.selected_pose_bones_from_active_object:

        M = ob.convert_space(
                pose_bone=pb,
                matrix=pb.matrix,
                from_space='POSE',
                to_space='LOCAL_WITH_PARENT',
                )
        data[pb].append(M)    


action = bpy.data.actions.new(f"{ob.name}_BAKE")
action.id_root = 'OBJECT'

fcurves = action.fcurves

def flatten(a, b):
    # interleave two equal-length arrays into [a0, b0, a1, b1, ...]
    c = np.empty((a.size + b.size,), dtype=b.dtype)
    c[0::2] = a
    c[1::2] = b
    return c

for pb, mats in data.items():
    # remove or de-influence constraints
    for c in pb.constraints:
        c.influence = 0
    # locations: one row per channel (x, y, z), one column per frame
    locs = np.array([M.translation for M in mats]).T
    for i, channel in enumerate(locs):
        fc = fcurves.new(pb.path_from_id("location"), index=i, action_group="Bake")
        fc.keyframe_points.add(len(frames))
        fc.keyframe_points.foreach_set("co", flatten(frames, channel))
    # similarly as above for rotations etc.
    rots = np.array([M.to_euler() for M in mats]).T

ob.animation_data_create()
ob.animation_data.action = action
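
A minimal sketch of keying the rotation channels the same way, assuming the bones stay in the default QUATERNION rotation mode (as in the first snippet) and reusing the data, frames, flatten and fcurves names from the script above:

# Sketch: key rotation_quaternion (w, x, y, z) per bone, reusing
# `data`, `frames`, `flatten` and `fcurves` defined above
for pb, mats in data.items():
    quats = np.array([M.to_quaternion() for M in mats]).T   # shape (4, number of frames)
    for i, channel in enumerate(quats):
        fc = fcurves.new(pb.path_from_id("rotation_quaternion"),
                         index=i, action_group="Bake")
        fc.keyframe_points.add(len(frames))
        fc.keyframe_points.foreach_set("co", flatten(frames, channel))

If the bones use an Euler rotation mode instead (as in the question), M.to_euler() with the "rotation_euler" path and three channels would apply.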

Note: I tested this with constraints rather than an NLA stack; in concept it "should" be the same and create one animation based on the visual transform. You might need ob.animation_data.use_nla = False to turn off the influence of the NLA (similar to setting the constraint influences).
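
A minimal sketch of that, assuming ob already has animation data:

ad = ob.animation_data
if ad:
    ad.use_nla = False   # ignore NLA strips while sampling the frames
# ... run the frame loop / baking above ...
if ad:
    ad.use_nla = True    # restore NLA evaluation afterwards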

Note: the flatten method was the quickest of the methods suggested here: https://stackoverflow.com/questions/5347065/interweaving-two-numpy-arrays
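
As a quick standalone illustration of what it produces (frames and values interleaved into the flat list that foreach_set("co") expects):

import numpy as np

def flatten(a, b):
    # interleave two equal-length arrays: [a0, b0, a1, b1, ...]
    c = np.empty((a.size + b.size,), dtype=b.dtype)
    c[0::2] = a
    c[1::2] = b
    return c

frames = np.array([1, 2, 3])
values = np.array([0.5, 0.6, 0.7])
print(flatten(frames, values))   # [1.  0.5  2.  0.6  3.  0.7]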

– batFINGER
  • Apologies for the time it took to get back. I upgraded my Blender build to Python 3.8.2 and had issues. Give me a hoy if you need an explanation of anything above. – batFINGER Apr 03 '20 at 08:26
  • Thanks a bunch, @batFINGER, for your time and such a detailed answer! I'll start going through it right now; it will take me some time though, haha. convert_space() is a really neat function, btw. –  Apr 03 '20 at 16:30
  • Can I ask, what does the flatten() function do exactly? I can't figure it out. –  Apr 03 '20 at 17:13
  • 1
    To use the quicker foreach_set to set frame, value pairs on keyframe, kf.co = (f, v) needs to be a flat list eg [f0, v0, f1, v1, f2, v2]] Flatten takes two arrays [f0, f1, f2] and [v0, v1, v2] and interweaves (ravels, or flattens) them. – batFINGER Apr 03 '20 at 17:20
  • 1
    Used numpy since can quickly convert a frames x 3, ie number of frames x number of components [(x, y, z), (a, b, c), (d, e, f)] array to [(x, a, d), (y, b, e), (z, c, f)] giving all the x, y and z values. In this case I have use .T or transpose. Can also reshape, or column stack. I'm a bit of a noob at numpy. – batFINGER Apr 03 '20 at 17:26
  • 1
    oh and c[0::2] = a is set values starting at 0 and every 2nd item onwards of c to a. – batFINGER Apr 03 '20 at 17:39
  • Thanks a lot! The only question I have left is: if I have multiple animations to bake, in terms of parsing the visual transforms, can I just use animation_data to set each animation as active and parse them by moving through the timeline? –  Apr 03 '20 at 17:51
  • Not quite sure what you mean, or how your animation is set up. I have used the scene start and end frame by way of example. If your animation is controlled by "myAction", then use the action's frame range, frames = np.arange(action.frame_range[0], action.frame_range[1] + 1), (and name the new action accordingly) or just hard code it. – batFINGER Apr 03 '20 at 18:14
  • I meant something like binding an animation to the object with ob.animation_data.action = action1, getting the visual transforms into a dictionary and baking it, and then proceeding to the next animation with ob.animation_data.action = action2. I haven't implemented it yet, but I think this is correct (a rough sketch of this follows below). –  Apr 03 '20 at 20:50
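
A rough sketch of that per-action approach, where the action names are placeholders and sample_and_key is a hypothetical helper standing in for the frame loop, convert_space sampling and fcurve keyframing from the answer body:

import bpy
import numpy as np

def sample_and_key(ob, frames, action):
    # placeholder: the frame loop, convert_space sampling and fcurve
    # keyframing from the answer body would go here
    pass

ob = bpy.context.object
ob.animation_data_create()

for src in (bpy.data.actions["walk"], bpy.data.actions["run"]):   # placeholder action names
    ob.animation_data.action = src                    # drive the rig with this action
    start, end = (int(f) for f in src.frame_range)    # bake over the action's own range
    frames = np.arange(start, end + 1)
    baked = bpy.data.actions.new(f"{src.name}_BAKE")
    sample_and_key(ob, frames, baked)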