0:00

Welcome to Module 4 of the course Control of Mobile Robots.

So, in the last module we learned about linear systems: we saw where they came from, and at the end we even managed to control them a little bit. In fact, we felt rather good about ourselves, because we could design a state feedback controller for a point mass that stabilized it. But at the same time, we were a little queasy and uneasy about this whole thing, because we had to have x, meaning the state, to do our control design, but in reality we actually don't have x. We have y, the output. So what this module is devoted to is, first of all, being systematic in how we do the control design and, secondly, overcoming this seeming paradox of needing x but only having y. So what we're going to do in the first lecture is stabilize the point mass.

We're going to return to our old friend, and in general, if I have a linear system, x dot = Ax + Bu, y = Cx. The dilemma, as I've already stated, is that we seem to need x for stability, but all we really have is y. So here is the game plan: we're going to ignore the fact that we don't have x. Instead, we're going to design our controller as if we had the state itself, and then somehow we're going to hope that we can figure out the state from the measurements, meaning from y. This is the game plan we're going to pursue throughout this entire module. And the first step is, of course, to design u as if we had x.

So step one is to do the control design, and we're going to use a method called pole placement. Pole placement is a rather powerful idea. If I have my point mass system again, x dot = Ax + Bu, where we have our old friends the A and B matrices that we've seen over and over again, then state feedback means that we're going to pick u = -Kx, where K in this particular situation is a 1-by-2 matrix, so it has two components, k1 and k2. And those are gains. We've already seen in the previous module that k1 is a gain that looks at position, and k2 is a gain that looks at velocity. And by tweaking them, somehow, we can get the system to behave well, because we've already seen that.
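To make that concrete, here is a minimal sketch (assuming the double-integrator point-mass model from the previous module, with state x = [position, velocity]; the function name is my own):

```python
# State feedback u = -Kx for the point mass, with K = [k1, k2].
# k1 acts on the position component, k2 on the velocity component.

def state_feedback(x, k1, k2):
    """Return the scalar control u = -(k1 * position + k2 * velocity)."""
    position, velocity = x
    return -(k1 * position + k2 * velocity)

# With the robot ahead of the origin and moving forward,
# the controller pushes back:
u = state_feedback([1.0, 0.5], k1=1.0, k2=2.0)
print(u)  # -2.0
```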

So, the one question we need to ask first is, of course: how do we actually pick these control gains? Meaning, what should K be? Well, here is the whole idea behind pole placement. When we plug in u = -Kx, we get a closed-loop system. And what we're going to do is pick K such that the closed-loop system has the right eigenvalues, where "right" means we get to pick them. The reason this is called pole placement is that the eigenvalues of the system matrices are sometimes referred to as poles, and what we're going to do is make them be what we want them to be. In particular, we want them to have negative real part, because that is what we need for asymptotic stability.
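As a quick illustration of that last point (a sketch of my own, not part of the lecture slides), asymptotic stability is exactly the condition that every eigenvalue sits strictly in the left half-plane:

```python
# Asymptotic stability test for a linear system:
# every eigenvalue must have strictly negative real part.

def is_asymptotically_stable(eigenvalues):
    """True if all eigenvalues lie in the open left half-plane."""
    return all(lam.real < 0 for lam in eigenvalues)

print(is_asymptotically_stable([-1.0, -1.0]))            # True
print(is_asymptotically_stable([-0.5 + 2j, -0.5 - 2j]))  # True (damped oscillation)
print(is_asymptotically_stable([0.0, -3.0]))             # False (eigenvalue on the imaginary axis)
```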

But before we do that, we actually need to figure out how to compute eigenvalues. In general, if I have a matrix M (this doesn't have to be 2-by-2, this is just some general M), then every square matrix M has a so-called characteristic equation associated with it. It's given by this chi_M of lambda, and it's kind of a mouthful: it's the determinant of lambda times the identity matrix minus M, and then we set this determinant equal to 0. The lambdas that solve this equation are the eigenvalues.

Well, let's see what this means. If I have a 2-by-2 matrix M with entries m1, m2, m3, m4, then lambda I - M, meaning lambda times the identity minus M, is the matrix with lambda - m1 and -m2 on the first row, and -m3 and lambda - m4 on the second row. Okay, now let's take the determinant of this matrix. The determinant is the object you get by taking the top-left element times the bottom-right element, and then subtracting the top-right element times the bottom-left element; this is how you do it for 2-by-2 matrices. In general, it can become even more complicated. But in this case, I get (lambda - m1)(lambda - m4), and then I subtract (-m2)(-m3), which is m2 m3. So the determinant of this 2-by-2 matrix is (lambda - m1)(lambda - m4) - m2 m3.

Okay. We need to set this determinant equal to zero. So, carrying out the multiplications, we have a second-order equation that we have to solve for lambda in order to find the eigenvalues: lambda^2 - (m1 + m4) lambda + (m1 m4 - m2 m3) = 0. Let's try to do that. The way we solve second-order equations, well, there are formulas for this, so it is possible. In this case, it turns out that lambda is this rather annoying-looking expression, lambda = ((m1 + m4) +/- sqrt((m1 + m4)^2 - 4(m1 m4 - m2 m3))) / 2, and those are the two eigenvalues of this 2-by-2 matrix. But it is annoying.
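As a sketch of that computation (pure Python; the example matrix is one I picked for illustration):

```python
import math

# Eigenvalues of a 2x2 matrix [[m1, m2], [m3, m4]] via the quadratic formula
# applied to lambda^2 - (m1 + m4)*lambda + (m1*m4 - m2*m3) = 0.

def eig2x2(m1, m2, m3, m4):
    trace = m1 + m4            # -(coefficient of lambda)
    det = m1 * m4 - m2 * m3    # constant coefficient
    disc = trace * trace - 4.0 * det
    root = math.sqrt(disc) if disc >= 0 else complex(0.0, math.sqrt(-disc))
    return (trace + root) / 2.0, (trace - root) / 2.0

# Example matrix [[0, 1], [-2, -3]]: eigenvalues -1 and -2.
lam1, lam2 = eig2x2(0.0, 1.0, -2.0, -3.0)
print(lam1, lam2)  # -1.0 -2.0
```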

I really don't want to do this. So the question is: is there an easier way of making the eigenvalues be what we would like them to be? It turns out that the answer is yes. There is something called the fundamental theorem of algebra, and this fancy-sounding theorem says that if I have a polynomial, the roots of that polynomial are determined by the coefficients. Which means, you know what? I actually don't have to solve this equation. Here I have the coefficient in front of lambda, and here I have the coefficient that isn't in front of any lambda, and those coefficients alone are enough to implicitly but completely determine the eigenvalues. So what we're going to do is actually not solve this. We're just going to stop here and say: fine, let's start massaging the coefficients directly.

So if we go back to our point mass again: I pick u = -Kx, and then I get x dot = (A - BK)x. We've seen this before; this is the closed-loop dynamics. In particular, if I plug in what K is, I get A - BK as the difference of these two matrices, and if I compute that, I get the matrix with first row 0, 1 and second row -k1, -k2. I encourage all of you to perform this multiplication at home, just to make sure that you trust that this is indeed correct. Now let's compute the eigenvalues, or at least the coefficients, in this thing called the characteristic equation.
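A quick version of that home check (assuming the point-mass matrices from the previous module, A = [[0, 1], [0, 0]] and B = [[0], [1]]):

```python
# Verify that A - B*K = [[0, 1], [-k1, -k2]] for the point mass.
k1, k2 = 1.0, 2.0

A = [[0.0, 1.0],
     [0.0, 0.0]]
B = [[0.0],
     [1.0]]
K = [[k1, k2]]  # 1-by-2 gain matrix

# B*K is 2x2 with entries (B*K)[i][j] = B[i][0] * K[0][j]
BK = [[B[i][0] * K[0][j] for j in range(2)] for i in range(2)]
A_cl = [[A[i][j] - BK[i][j] for j in range(2)] for i in range(2)]
print(A_cl)  # [[0.0, 1.0], [-1.0, -2.0]]
```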

So, chi_(A - BK)(lambda) is the determinant of lambda times the identity minus this closed-loop matrix; that is, what I have here is the determinant of lambda I - (A - BK). And if you compute this determinant, you get lambda^2 + k2 lambda + k1. That's not so bad. And the neat thing here is that, again, all we care about are these coefficients: the things that determine what the roots are, without us actually having to compute the roots.

Now, why does that help us? Well, what we're going to do now is pick our favorite eigenvalues in the whole world. We're going to pick the eigenvalues that we would like the system to have, and if we somehow magically manage to make the closed-loop system have these eigenvalues, then the characteristic equation would be (lambda - lambda_1)(lambda - lambda_2), because the characteristic equation has to be 0 when lambda_1 is a root: if I plug in lambda_1, I get 0 in the first factor, and similarly if I plug in lambda_2, I get 0 in the second factor. So what I have is a product of lambda minus these desired favorite eigenvalues in the whole world.

So let's do that. For the robot, or the point mass, I'm going to pick both eigenvalues at -1. I know that they need to have negative real part, and -1 is particularly simple. Then I get this phi of lambda, which is the desired characteristic equation, not the actual characteristic equation but the desired one. It's just (lambda + 1)(lambda + 1) or, if I carry out this multiplication, lambda^2 + 2 lambda + 1.

Now, what we need to do is simply line up these coefficients with the actual coefficients that we have. So if I do that: this is the characteristic equation, lambda^2 + k2 lambda + k1, and this is what I would like it to look like, lambda^2 + 2 lambda + 1. Here are the coefficients in front of lambda, and here are the coefficients that are hanging out by themselves. All we do now is simply line these up: k2 has to be equal to 2, k1 has to be equal to 1, and voila, I've actually designed the K matrix that I need. So now all I do is plug this into my original system, x dot = Ax + Bu, but I've closed the loop with u = -Kx, and I have successfully stabilized the system by placing the eigenvalues exactly where I would like
them to be.
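Putting the lecture together as one sketch: the gains K = [1, 2] derived above, the resulting closed-loop eigenvalues, and a crude forward-Euler simulation (the simulation is my own illustration, not part of the lecture) showing the state dying out:

```python
import math

# Closed-loop point mass: x_dot = (A - BK) x with K = [k1, k2] = [1, 2],
# which places both eigenvalues at -1.
k1, k2 = 1.0, 2.0
A_cl = [[0.0, 1.0],
        [-k1, -k2]]

# Eigenvalues via the 2x2 quadratic formula: both should be -1.
trace = A_cl[0][0] + A_cl[1][1]
det = A_cl[0][0] * A_cl[1][1] - A_cl[0][1] * A_cl[1][0]
disc = trace * trace - 4.0 * det
lam1 = (trace + math.sqrt(disc)) / 2.0
lam2 = (trace - math.sqrt(disc)) / 2.0
print(lam1, lam2)  # -1.0 -1.0

# Forward-Euler simulation: the state should decay toward the origin.
x = [1.0, 0.0]  # start at position 1, velocity 0
dt = 0.01
for _ in range(1000):  # simulate 10 seconds
    xdot = [A_cl[0][0] * x[0] + A_cl[0][1] * x[1],
            A_cl[1][0] * x[0] + A_cl[1][1] * x[1]]
    x = [x[0] + dt * xdot[0], x[1] + dt * xdot[1]]

print(math.hypot(x[0], x[1]) < 1e-2)  # True: the state has (nearly) died out
```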
