
I have a PTU system whose transfer function I need to determine. The unit receives a velocity and a position, and moves toward that position at the given velocity. What kind of test would one perform to determine the transfer function?

I know MATLAB provides a method for this. The problem, though, is that I am a bit confused about what kind of test I should perform, and how I should use MATLAB to determine the transfer function.

The unit being used is a FLIR PTU-D48E.

---> More about the system

The input to the system is the pixel displacement of an object relative to the center of the frame. The controller I am using now converts pixel distances to angular distances and multiplies them by a gain $K_p$. This works fine. However, I can't seem to prove why it works so well; I know servo motors cannot be modeled like that.

The controller is fed the angular displacement and the current position; added together, these give the angular position I have to go to. The angular displacement is also used as the speed to move with, since a large displacement gives a large velocity.

By updating both elements at different frequencies I'm able to step down the velocity such that the overshoot is minimized.
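To make this concrete, here is a rough sketch of the update rule I'm describing (all names and numeric values are purely illustrative):

```matlab
% Sketch of my update rule; all names and values are illustrative.
pixel_error = 120;                 % pixel distance from target to image center
theta_now   = 10;                  % current pan angle from the encoder [deg]
pix2deg     = 0.05;                % assumed pixels-to-degrees conversion factor
Kp          = 0.8;                 % proportional gain

dtheta  = pix2deg * pixel_error;   % pixel error converted to an angle
pos_cmd = theta_now + dtheta;      % absolute position set-point for the PTU
vel_cmd = Kp * abs(dtheta);        % large displacement -> large commanded speed
fprintf('go to %.2f deg at %.2f deg/s\n', pos_cmd, vel_cmd);
```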

The problem is this: to prove that the transfer function I found fits the system, I have to do tests somehow using the ident function in MATLAB, and I'm quite unsure how to do that. I'm also a bit unsure whether the PTU already has a controller within it, since it moves so well; the conversion is just simple math, so it makes no sense that it should work this well.

  • First comment: it's quite expected that a simple controller like the one you designed performs well in practice; it's in the nature of feedback control that complicated systems can be made to work even with simple laws. Second comment: it doesn't seem to me that you came up with a transfer function, but rather with a description of your controller, which is fine in the end; you don't always need to precisely identify the plant. – Ugo Pattacini May 02 '15 at 08:45
  • Third comment: if I were you I would get rid of position and thus control solely with velocity $v_{x,y}=K_p \cdot p_{x,y}$, where $p_{x,y}$ are the pixel coordinates where your target (i.e. the face) currently is, computed with respect to the center of the image. – Ugo Pattacini May 02 '15 at 08:47
  • The vision program calculates the distance from the center of the face to the center of the image, in pixels. The controller then converts this into an angular velocity. – Carlton Banks May 02 '15 at 09:20
  • No, but I would like a transfer function, even though I don't see the benefit of having one. I am just a bit afraid that I am already using the onboard controller of the system, which wouldn't be that good. – Carlton Banks May 02 '15 at 09:25
  • I've expanded my answer. – Ugo Pattacini May 02 '15 at 10:32
  • I am not quite sure I understand how you use vision and motor feedback to calculate the correct speed. It's clear that you are using a cascade controller on the system.

    I understand the first feedback loop, which is basically what I am doing: feeding the motor position back to the controller, which outputs a velocity.

    But how are you implementing the second feedback loop? I'm quite unsure about that. A model would be very helpful here...

    +1000000 for the answer. I'm new here, so I'm not able to give you anything :(

    – Carlton Banks May 02 '15 at 10:59
  • So this is how I understand your model...

    http://snag.gy/RftQl.jpg

    I used draw.io to draw it.

    – Carlton Banks May 02 '15 at 11:21
  • Which is also how I have designed it now. I'm very interested in knowing how you add the vision loop. – Carlton Banks May 02 '15 at 11:28
  • Some more info in the answer below... – Ugo Pattacini May 02 '15 at 12:41
  • Does your model consist of two loops, or is it just one loop? I am becoming a bit confused.

    The math you have written isn't the same as what I have done, but I am already able to give the PTU an accurate position.

    – Carlton Banks May 02 '15 at 12:48
  • This is how I convert pixel distance to angular distance: http://answers.opencv.org/question/56744/converting-pixel-displacement-into-other-unit/

    – Carlton Banks May 02 '15 at 12:55
  • I am confused about what your closed-loop system looks like. Would it be possible to draw it? – Carlton Banks May 02 '15 at 13:07
  • It's the same math in the end; the matrix notation is more formalized though. There is one inner loop dealing with velocity control and an outer loop closed by the vision system itself. – Ugo Pattacini May 02 '15 at 14:30
  • I am certain that your idea is very good, but I am just having a hard time understanding what each block's input and output is, and where the feedback contributes to the input, and so on. A drawing would help a lot. – Carlton Banks May 02 '15 at 15:27
  • The diagram is actually the same one you sketched out. The visual error is given by the vision processing block, which detects as current feedback the position of the face's centroid, while the set-point is the image center (fixed over time). – Ugo Pattacini May 02 '15 at 18:15
  • I am not sure... are we talking about two separate vision blocks here, or still the same one that only outputs the displacement from the center? – Carlton Banks May 02 '15 at 18:52
  • Yep, the same vision block providing the distance of the face from the center: it's a feedback/measurement indeed. I think I'm done with this :-) The ingredients are pretty much all those in the recipe below. – Ugo Pattacini May 02 '15 at 19:58

1 Answer


Imagine for a moment that you keep the input velocity fixed throughout the identification experiment, then you might inject into the system a sudden change in the final commanded position set-point while measuring as feedback the current position of your equipment (e.g. joint encoders). You will thus come up with a bunch of profiles of commanded vs. feedback positions for your identification goal. To this end you can profitably rely on the ident tool of the MATLAB System Identification Toolbox.

Explore the system response against different input position steps, and remember to validate any result over profile sets that you did not use during identification.
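As a minimal sketch of this workflow, assuming the commanded and measured positions of one step experiment have been logged into vectors `u` and `y` (and a held-out experiment into `u_val`, `y_val`) at a sample time `Ts`:

```matlab
% Fit a transfer function to one logged step experiment; u, y, u_val, y_val
% are assumed to have been recorded from the PTU at sample time Ts.
Ts   = 0.01;                 % assumed sample time [s]
data = iddata(y, u, Ts);     % package commanded vs. feedback positions

sys  = tfest(data, 2);       % try a low-order model first, e.g. 2 poles
% (or open the ident GUI and import 'data' there to explore interactively)

% Validate on a profile set NOT used during identification:
val = iddata(y_val, u_val, Ts);
compare(val, sys)            % overlays measured and simulated responses
```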

Finally, you should expect that varying the input velocity will have an impact on the internal controller's response, since of course what you're going to model is the whole apparatus made up of the internal actuators, controller, etc. In theory, you should therefore repeat the identification experiment over a range of different input velocities.


I'll expand a little bit further below, given the fresh info you provided.

It's clear that there is an internal controller that converts your velocity input into a proper signal (usually a voltage) actuating the motors. If you don't trust this internal loop, then you have to identify the plant and apply compensation as follows.

Setting: identification of a system controlled in velocity. Hence, input $=$ commanded velocity $v$; output $=$ encoder feedback $\theta$.

Procedure: you inject a chirp in velocity and you collect encoder readings. You can use ident to come up with a transfer function of your motor controlled in velocity at "high level". Ideally this transfer function would be a pure integrator, but in practice it won't be; whatever makes the difference needs to be compensated for in the design of your velocity controller. This procedure has to be repeated for the two axes of the PTU. How to design a proper controller by placing its poles and zeros is knowledge you should already have; to do that, of course, you'll exploit the identified transfer function.
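A sketch of this chirp experiment for one axis, with a stand-in plant model in place of the real hardware so the script runs end to end; the sweep range, amplitude, and model orders are all illustrative:

```matlab
% Chirp identification of one velocity-controlled axis (illustrative values).
Ts = 0.01; t = (0:Ts:20)';                % 20 s experiment at 100 Hz
v  = 5 * chirp(t, 0.05, 20, 2);           % sweep 0.05 -> 2 Hz, 5 deg/s amplitude

% Stand-in for the real PTU axis: an integrator with a small lag. On the real
% system, send v(k) every Ts and log the encoder angle into theta instead.
Gtrue = tf(1, [0.05 1 0]);
theta = lsim(Gtrue, v, t);

data = iddata(theta, v, Ts);
sys  = tfest(data, 2, 1);                 % e.g. 2 poles, 1 zero
bode(sys), grid on                        % compare against a pure integrator 1/s
```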

Note: you don't have vision in the loop yet, just position feedback from the encoders. This way you can refine the velocity control of your system, so that in the end, given a target angular position $\theta_d$ where you want to go, you know how to form the proper velocity commands $v$ to send to the device at run-time, while reading back the corresponding encoders $\theta$.
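To fix ideas, here is a minimal sketch of that run-time loop using the proportional law $v=K_p \cdot \left( \theta_d - \theta \right)$ from the comments; `ptu_read_encoder` and `ptu_send_velocity` are hypothetical wrappers for whatever interface the unit exposes:

```matlab
% Run-time velocity loop toward a target angle theta_d (illustrative values).
% ptu_read_encoder / ptu_send_velocity are hypothetical device wrappers.
Kp = 2; Ts = 0.02; theta_d = 30;      % gain, loop period [s], target [deg]
for k = 1:500
    theta = ptu_read_encoder();       % current axis angle [deg]
    v = Kp * (theta_d - theta);       % proportional velocity command [deg/s]
    ptu_send_velocity(v);
    pause(Ts);                        % crude fixed-rate loop timing
end
```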

Then vision kicks in. The vision processing will tell you where the face centroid $p_d$ is with respect to the image center; this information is refreshed continuously at run-time. Then, using the intrinsic parameters of the pinhole model of your camera, you'll have an estimate of which angular positions this pixel corresponds to.

This is not that difficult to determine. Knowing the centroid coordinates $p_d$ and assuming that we know how far the face lies from the camera (let's say 1 m, though we don't care about the real distance), that is, that we know its $z$ component in the camera reference frame, the pinhole model gives us a way to find the face's $x$ and $y$ components in the camera frame. Finally, trigonometry provides the delta angles to add to the current camera encoders, which in turn lets you compute the absolute target angular positions. These latter values will represent the angular set-point for the above velocity controller.

Here comes the math

Given $p_d=\left(u,v\right)$ the face centroid and $z$ the distance of the face from the camera, it holds: $$ \left( \begin{array}{c} x \\ y \\ z \\ 1 \end{array} \right) = \Pi^\dagger \cdot \left( \begin{array}{c} z \cdot u \\ z \cdot v \\ z \end{array} \right), $$

where $x,y,z$ are the Cartesian coordinates of the face in the camera frame and $\Pi^\dagger$ is the pseudoinverse of the matrix $\Pi \in \mathbb{R}^{3 \times 4}$ containing the intrinsic parameters of your camera (i.e. the focal length, the pixel ratio, and the position of the principal point; browse the internet for that, as there are standard procedures to estimate this matrix). We are not interested in $z$, so you can put whatever value of $z$ you want in the above equation (say 1 m), but remember to be consistent in what follows. Given $u,v$ you get $x,y$ as output.

Once you have $x,y$ you can compute the angular variations $\Delta\phi_p$ and $\Delta\phi_t$ for the pan and the tilt, respectively: $$ \Delta\phi_p=\arctan\frac{x}{z} \\ \Delta\phi_t=-\arctan\frac{y}{z} $$

Finally, the absolute angular positions used as set-point will be: $$ \phi_p:=\phi_p+\Delta\phi_p \\ \phi_t:=\phi_t+\Delta\phi_t $$
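Putting the three steps together in a small sketch (back-projection, delta angles, set-point update); the intrinsic matrix and all numeric values are illustrative, and $\Pi = \left[ K \; 0 \right]$ is assumed, with $K$ the 3x3 intrinsic matrix from a standard camera calibration:

```matlab
% Pixel-to-angle conversion following the equations above (illustrative values).
K  = [500 0 320; 0 500 240; 0 0 1];   % assumed intrinsics: focal lengths, principal point
Pi = [K zeros(3,1)];                  % 3x4 intrinsic projection matrix

u = 400; v = 200;                     % face centroid p_d [pixels]
z = 1;                                % assumed distance [m]; exact value is irrelevant

X = pinv(Pi) * [z*u; z*v; z];         % back-projection; X(1:3) = (x, y, z)
dphi_p =  atan2(X(1), X(3));          % pan increment [rad]
dphi_t = -atan2(X(2), X(3));          % tilt increment [rad]

phi_p = 10*pi/180; phi_t = 0;         % current encoder angles [rad], illustrative
phi_p = phi_p + dphi_p;               % absolute pan set-point
phi_t = phi_t + dphi_t;               % absolute tilt set-point
```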

Alternatively, we could also identify the whole system with the visual feedback in place of the motor encoders (visual servoing). Here, the transfer function will tell us the impact of a velocity command directly on the displacement a pixel undergoes. Intuitively, this identification will be more difficult, because we put everything together and it's likely that we won't achieve the same performance as with the first method.

  • I am not quite sure I understand what kind of test you want me to perform. The problem with the system is that if I give it a velocity and a position, it goes toward the position with the given velocity, but at some point the speed begins to decrease, presumably due to an onboard controller's response.

    The last part isn't confirmed yet, since the feedback says the speed is the same even though it's not.

    I am at the moment trying to run it in velocity mode, such that I don't give it a position but just a velocity: input a velocity => wait a fixed time interval => output the displacement, and then repeat.

    – Carlton Banks May 01 '15 at 14:27
  • The transfer function you should aim for would be from commanded position to joint position. It's normal that the velocity is not kept constant, since the device will go through an initial acceleration phase, a steady-state phase wherein the speed will be pretty much the one given, and a final deceleration phase while approaching the target position. That's the usual trapezoidal shape of the speed profile (or whatever profile it is), which we are not interested in. What matters is the final position profile generated along the way. – Ugo Pattacini May 01 '15 at 14:56
  • Of course, you could consider identifying commanded velocity to joint position in velocity mode, but that's a different plant. – Ugo Pattacini May 01 '15 at 14:58
  • I am thinking of creating a cascade control (since I want it to react quickly), such that I only provide velocity to the plant, and the controller is given where it should go and where it is. But how should I do it for the velocity? – Carlton Banks May 01 '15 at 16:26
  • The task has now become how to control your device... You have provided too little information, and the goal is somewhat broad. Start off with $v=K_p \cdot \left( \theta_d - \theta \right)$. – Ugo Pattacini May 01 '15 at 18:09
  • Well, that's what I am actually doing, and it works well enough; I mean, it doesn't overshoot that much, but it reacts too slowly.

    The displacement has to be pretty big before it reacts, and when it reacts I know it is bound to overshoot.

    – Carlton Banks May 01 '15 at 18:31
  • I am tracking faces and converting pixel displacement into angular displacement, which I then multiply by a factor ($K_p$).

    The gains used for position and velocity ($K_p$) are different.

    I update my velocity at a higher frequency than my position such that overshoot is minimized, which is why I think I somehow have a cascaded closed loop. But my solution is purely math-based; it's a bit of a miracle that the pixel-to-angular conversion fits that well.

    – Carlton Banks May 01 '15 at 18:46
  • It's just that my control consists of some if-statements that basically say: if the calculated velocity goes above some threshold, use max speed, and if it's below some threshold, use 0.

    There isn't any actual "science" behind it; it's just made using common sense and some good luck, I guess, and that's irritating me a bit.

    – Carlton Banks May 01 '15 at 18:57
  • And for values in the middle I use that formula. – Carlton Banks May 01 '15 at 19:04