I have a project that uses an Intel RealSense camera to capture a short set of frames featuring a single face. The camera SDK provides a good way to do landmark recognition in image space, so I can tell where the nose, mouth, and other features are located.
I have some strict constraints on how the final set of frames should be stabilized, and I have been able to meet them by using the video stabilization system manually in Blender. Unfortunately, I need to process quite a few of these clips, so manual editing is out of the question.
My question is: is there a way to programmatically supply the initial tracking points to the stabilization system and have it process the whole video using those points?
For example, I would provide the ears and nose (x and y image-space coordinates) as the points to track in the first frame, and Blender would use them to track and stabilize the rest of the frames. Something along the lines of the sketch below is what I'm hoping is possible.
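Here is an untested sketch of what I have in mind, using Blender's Python API (`bpy`). The landmark coordinates are hypothetical placeholders for what my SDK would output, and I'm not certain the context override for the tracking operator is right:

```python
import bpy

# Load the captured frames as a movie clip (path is a placeholder).
clip = bpy.data.movieclips.load("/path/to/face_frames/frame_0001.png")

# Hypothetical landmarks from the RealSense SDK, in pixel coordinates.
landmarks = {"nose": (640, 360), "left_ear": (480, 350), "right_ear": (800, 350)}
width, height = clip.size

for name, (px, py) in landmarks.items():
    # Create a track and place its first marker on frame 1.
    track = clip.tracking.tracks.new(name=name, frame=1)
    track.select = True
    # Marker coordinates are normalized (0..1) with the origin at the
    # bottom left, so the pixel values need converting.
    track.markers.insert_frame(1, co=(px / width, 1.0 - py / height))

# Tracking operators need a Clip Editor context; this override is one
# way to supply it in Blender 3.2+, though I haven't verified it.
for area in bpy.context.screen.areas:
    if area.type == 'CLIP_EDITOR':
        area.spaces.active.clip = clip
        with bpy.context.temp_override(area=area):
            bpy.ops.clip.track_markers(backwards=False, sequence=True)
        break
```

Ideally I would run something like this headlessly (`blender --background --python script.py`) over each clip, then feed the resulting tracks into 2D stabilization.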
Thanks for any advice, and please let me know if you need more detail.