
Good morning,

For a project, I created in MATLAB a scene composed of an object and twelve cameras looking at it. With the MATLAB toolbox, I obtained the rotation matrix and the translation vector for each camera.

Now I would like to use this information (rotation matrix and translation vector) to create the same scene in Blender.

So I want to use the translation vector to position my cameras correctly in space, by setting Blender's location parameters with it, and the rotation matrix to get the correct orientation of my cameras, by setting the rotation parameters with it. The panel I am using is this one:

[screenshot: the object properties Transform panel in Blender]

For the rotation, I know that in Blender I can use a quaternion, Euler rotation angles (XYZ, YXZ, ZYX, ...), or an axis-angle vector. I calculated all of them with MATLAB, but unfortunately none of them gives the correct result. The cameras don't look in the correct direction, as shown in the following pictures:

[screenshot: scene view with the camera pointing away from the object]

Camera's view:

[screenshot: camera view, object not visible]

What I would like is the camera looking at the object as in the following picture, but using the parameters calculated by MATLAB. In this example, I used the object constraint properties:

[screenshot: Track To constraint setup]

So, in the camera's view, we can see the object:

[screenshot: camera view showing the object]

The rotation matrix of the first camera is:

    -0.4877    0.8728    0.0190
     0.4404    0.2648   -0.8578
    -0.7537   -0.4100   -0.5136

Its translation is:

    1.5581    1.2190    1.4871

The XYZ angles calculated from this matrix by MATLAB are (141.4, -48.9, -137.9) degrees.

I read that MATLAB and Blender don't use the same coordinate system. How can I convert between them?

Thanks for your help.

Noella
  • We are going to need a lot more info in order to address your issue. Could you please add some screenshots of your setup and an example of what is different between the two? – PGmath Mar 10 '21 at 13:13
  • Hi,

    I would like to use the parameters calculated by MATLAB to define my model and build it with Blender.

    Sorry, I will ask a stupid question, but how can I share the screenshots from my computer here?

    – Noella Mar 11 '21 at 14:59
  • See this meta post. Use the add image button when editing your post. – PGmath Mar 11 '21 at 15:08
  • Thanks for your help. Let me know if it is still unclear. – Noella Mar 13 '21 at 10:15
  • Welcome Noella... One particular thing missing here is the values of the MATLAB rotation matrix and what the rotation is, e.g. Euler XYZ in degrees of (23, 45, 55). This will tell us if it is row or column order.... or look it up. – batFINGER Mar 13 '21 at 11:31
  • 1
    See https://blender.stackexchange.com/a/176762/15543 the camera in blender looks down its -Z axis and pitches on Y. If you know the MATLAB camera FORWARD and UP can use https://blender.stackexchange.com/questions/102293/how-to-properly-use-exporthelpers-axis-conversion-method – batFINGER Mar 13 '21 at 11:39
  • Thanks for your answer. I edited my post with the information you asked me for. I had a look at your two links. The first one doesn't seem to answer my question. The second could be a possibility. The problem is I don't know how to use it. I tried the code, but I don't understand how to indicate to Blender that I want to use the camera axes, and I don't understand how to pass the rotation angles calculated by MATLAB to the function. – Noella Mar 15 '21 at 11:57

1 Answer


Some rattling on.

I'm not sure one arbitrary rotation matrix and one location vector, with very little more info, is enough. Instead of a very long comment, here are some ways to try and work this out.

The matrices given:

>>> print(R)
<Matrix 3x3 (-0.4877,  0.8728,  0.0190)
            ( 0.4404,  0.2648, -0.8578)
            (-0.7537, -0.4100, -0.5136)>

T = Matrix.Translation((1.5581, 1.2190, 1.4871))

In Blender (row-order matrices), the columns of a 3x3 rotation matrix are the vectors of the local axes. Suppose the camera was placed such that it focuses directly on (0, 0, 0) (in Blender a camera looks down its $-Z$ axis).

For example, in the default blend file, with a Track To constraint on the camera to look at the cube at (0, 0, 0):

>>> C.object
bpy.data.objects['Camera']

>>> print(C.object.matrix_world.to_3x3())
<Matrix 3x3 (0.6563, -0.3579,  0.6642)
            (0.7545,  0.3114, -0.5778)
            (0.0000,  0.8803,  0.4744)>

>>> print(C.object.matrix_world.translation.normalized())
<Vector (0.6642, -0.5778, 0.4744)>

i.e. its local $Z$ axis (the third column) matches its normalized location.
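As a quick sanity check outside Blender, the values printed above can be verified in plain Python: the third column of that rotation matrix is the normalized location, to printed precision.

```python
# Rotation matrix of the tracked camera, rows as printed by Blender (row-order)
R = [(0.6563, -0.3579, 0.6642),
     (0.7545, 0.3114, -0.5778),
     (0.0000, 0.8803, 0.4744)]

# Normalized camera location from the console output
loc = (0.6642, -0.5778, 0.4744)

# Third column of R is the camera's local Z axis
z_axis = tuple(row[2] for row in R)

assert all(abs(a - b) < 1e-4 for a, b in zip(z_axis, loc))
```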

If the same held for the given data, we would expect

>>> T.translation.normalized()[:]
(0.6295621991157532, 0.4925462305545807, 0.6008740663528442)

to match a column (or row, not sure of the order) of the rotation matrix. Unfortunately, it does not.
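The mismatch can be checked with dot products in plain Python: if the normalized translation matched a column of R (i.e. a local axis), the corresponding dot product of unit vectors would be close to ±1. Using the values quoted above, none is.

```python
# The given rotation matrix and normalized translation from above
R = [(-0.4877, 0.8728, 0.0190),
     (0.4404, 0.2648, -0.8578),
     (-0.7537, -0.4100, -0.5136)]
t = (0.6296, 0.4925, 0.6009)  # T.translation.normalized()

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Columns of R would be the local axes if this were a row-order matrix
cols = [tuple(row[i] for row in R) for i in range(3)]
dots = [dot(c, t) for c in cols]
print(dots)  # none is close to +/-1

assert all(abs(d) < 0.9 for d in dots)
```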

Using the given angles, we can reproduce this matrix by setting the order to ZYX and transposing:

>>> print(Euler((map(radians, (141.4, -48.9, -137.9))), 'ZYX').to_matrix().transposed())
<Matrix 3x3 (-0.4878,  0.8728,  0.0187)
            ( 0.4407,  0.2647, -0.8577)
            (-0.7536, -0.4101, -0.5138)>

The reversed order suggests a pitch, roll, yaw type rotation.

Transposing suggests column-order matrices; it is also a cheap way of inverting a rotation matrix (for a pure rotation, $R^{-1} = R^T$).

>>> print(Euler((map(radians, (141.4, -48.9, -137.9))), 'ZYX').to_matrix().inverted())
<Matrix 3x3 (-0.4878,  0.8728,  0.0187)
            ( 0.4407,  0.2647, -0.8577)
            (-0.7536, -0.4101, -0.5138)>
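That identity ($R^T R = I$ for any pure rotation, so transpose equals inverse) is easy to demonstrate in plain Python with, say, a rotation about Z:

```python
from math import cos, sin, radians

def rot_z(a):
    """3x3 rotation matrix about Z by angle a (radians), row-major."""
    return [(cos(a), -sin(a), 0.0),
            (sin(a),  cos(a), 0.0),
            (0.0,     0.0,    1.0)]

def transpose(M):
    return [tuple(M[j][i] for j in range(3)) for i in range(3)]

def matmul(A, B):
    return [tuple(sum(A[i][k] * B[k][j] for k in range(3))
                  for j in range(3)) for i in range(3)]

R = rot_z(radians(30))
I = matmul(R, transpose(R))  # R @ R.T should be the identity

assert all(abs(I[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(3) for j in range(3))
```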

The matching output also suggests there could be a difference in +/- sign convention for CW / CCW rotation:

>>> print(Euler((map(radians, (-141.4, 48.9, 137.9))), 'XYZ').to_matrix())
<Matrix 3x3 (-0.4878,  0.8728,  0.0187)
            ( 0.4407,  0.2647, -0.8577)
            (-0.7536, -0.4101, -0.5138)>

To see what happens, set the world matrix to

C.object.matrix_world = T @ R.to_4x4()

For the hell of it, in this one I have used

C.object.matrix_world = T @ R.transposed().to_4x4()

[screenshot: camera facing away from the object, upside down]

since it is easy to see in the image that (if the object of interest is around (0, 0, 0)) the camera is looking the wrong way and is upside down.

We can make a conversion matrix via

>>> import bpy_extras.io_utils
>>> bpy_extras.io_utils.axis_conversion(
...         from_forward='Z',
...         from_up='-Y',
...         to_forward='-Z',
...         to_up='Y')

Matrix(((1.0, 0.0, 0.0), (0.0, -1.0, 0.0), (0.0, 0.0, -1.0)))

Or we can see that flipping 180 degrees around the $X$ axis gives the same result.

>>> Matrix.Rotation(pi, 3, 'X')
Matrix(((1.0, 0.0, 0.0),
        (0.0, -1.0, -8.742277657347586e-08),
        (0.0, 8.742277657347586e-08, -1.0)))

So, with one of the conversion matrices above as S:

>>> S = Matrix.Rotation(pi, 4, 'X')
>>> C.object.matrix_world @= S

[screenshot: camera now facing the object]

Inasmuch as this doesn't exactly answer your question, I hope it gives enough info: if you have two locations, each with a rotation that points directly at a known point (e.g. the origin), it becomes a relatively easy task to work out the space.
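Putting the pieces together, a hypothetical recipe, assuming the MATLAB matrix is column-order (hence the transpose) and the camera conventions differ by the 180° X flip (verify both against your own scene), would be the Blender assignment `C.object.matrix_world = T @ R.transposed().to_4x4() @ S`. The same 4x4 product can be checked in plain Python:

```python
def mat4_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Rotation and translation reported by MATLAB (from the question)
R = [[-0.4877, 0.8728, 0.0190],
     [0.4404, 0.2648, -0.8578],
     [-0.7537, -0.4100, -0.5136]]
t = (1.5581, 1.2190, 1.4871)

# T: translation matrix; R^T promoted to 4x4 (assumed column-order fix)
T4 = [[1, 0, 0, t[0]], [0, 1, 0, t[1]], [0, 0, 1, t[2]], [0, 0, 0, 1]]
Rt4 = [[R[j][i] for j in range(3)] + [0] for i in range(3)] + [[0, 0, 0, 1]]
# S: 180-degree flip about local X to fix the camera's -Z convention
S4 = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]

M = mat4_mul(mat4_mul(T4, Rt4), S4)
# Equivalent Blender console assignment (hypothetical, verify in your scene):
#   C.object.matrix_world = T @ R.transposed().to_4x4() @ S

# The location column is unaffected by the rotation factors
assert [round(M[i][3], 4) for i in range(3)] == list(t)
```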

batFINGER
  • Hi, thanks for your answer. I tried to write the script step by step, according to your answer.

    I wrote this script (I put it in my next comment), but I have an error when I try to calculate Camera.matrix_world = T @ R.transposed().to_4x4()

    I searched on the internet, but I found nothing. Why do I get the following message: expected a sequence of float, not float?

    Otherwise, I will try this script once the problem on this line is sorted out, and see if it is the solution to my problem or the start of a possible solution.

    Thanks a lot for your help.

    – Noella Mar 17 '21 at 11:06
  • Sounds like you have missed some brackets: a = ((1, 0), (0, 1)) as opposed to b = (1, 0, 0, 1). The first element of a is a sequence of floats; the first element of b is a float. – batFINGER Mar 17 '21 at 11:11
  • Thanks, it was that. I tried what you told me, but the position of my camera does not match your screenshots. Is it possible to share in a comment what I have on my screen and the code? – Noella Mar 17 '21 at 16:00