Welcome to Module 4 of the course Control of Mobile Robots.

So, in the last module we learned about linear systems; we saw where they came from, and at the end we even managed to control them a little bit, and in fact we felt rather good about ourselves, because we could design a state feedback controller for a point mass that stabilized it. But at the same time we were a little queasy and uneasy about this whole thing, because we had to have x, meaning the state, to do our control design, but in reality we actually don't have x.

We have y, the output. So what this module is devoted to is trying to, first of all, be systematic in how we do the control design and, secondly, figure out how we actually overcome this seeming paradox of needing x but only having y. So what we're going to do in the first lecture is stabilize the point mass. We're going to return to our old friend, and in general, if I have a linear system, x dot is Ax + Bu and y is Cx. The dilemma, as I've already stated, is that we seem to need x for stability but all we really have is y. So here is the game plan.

We're going to ignore the fact that we don't have x.

Instead, we're going to design our controller as if we had the state itself, and then somehow we're going to hope that we can figure out the state from the measurements, meaning from y. This is the game plan we're going to pursue throughout this entire module. And the first step is, of course, to design u as if we had x. So step one is to do the control design, and we're going to use a method called pole placement. And pole placement is a rather powerful idea.

So if I have my point mass system again, x dot is Ax + Bu, where we have our old friends the A and B matrices that we've seen over and over again. Well, state feedback means that what we're going to do is pick u = -Kx, where K in this particular situation is a 1-by-2 matrix, so it has two components, k1 and k2.

And those are gains. We've already seen in the previous module that k1 is a gain that looks at position, and k2 is a gain that looks at velocity. And by tweaking them, somehow, we can get the system to behave well, because we've already seen that.
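In code, this state feedback law is just a weighted sum of the two state components. A minimal sketch (the function name and gain values below are placeholders of mine, not the gains designed later in the lecture):

```python
# State feedback u = -Kx for a two-state system (position, velocity).
# K = [k1, k2]; k1 acts on position, k2 on velocity.
def state_feedback(x, K):
    k1, k2 = K
    position, velocity = x
    return -(k1 * position + k2 * velocity)

# Placeholder gains, just to show the mechanics:
u = state_feedback([2.0, -0.5], [1.0, 2.0])
print(u)  # -(1*2.0 + 2*(-0.5)) = -1.0
```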

So the one question we need to ask first is, of course: how do we actually pick these control gains? Meaning, what should K be? Well, here is the whole idea behind pole placement.

When we plug in u = -Kx, we get a closed-loop system.

And what we're going to do is pick K such that the closed-loop system has the right eigenvalues. And "right" meaning we get to pick them.

And the reason this is called pole placement is that eigenvalues of the system matrices are sometimes referred to as poles, and what we're going to do is make them be what we want them to be. In particular, we want them to have negative real part, because that is what we need for asymptotic stability.

But before we do that, we actually need to figure out: how do we compute eigenvalues? So in general, if I have a matrix M (this doesn't have to be 2-by-2, this is just some general M), then every square matrix M has a so-called characteristic equation associated with it.

And it's given by this chi M of lambda, and it's kind of a mouthful: it's the determinant of lambda times the identity matrix minus M, and then we set this determinant equal to 0. The lambdas that solve this are the eigenvalues. Well, let's see what this means. If I have a 2-by-2 matrix M equal to m1, m2, m3, and m4, then lambda I minus M, meaning lambda times the identity minus M, well, if you plug this in, you get the following matrix. Okay, now let's take the determinant of this matrix. So the determinant, well, is this object you get by taking this element times this element, and then you subtract away this element times that element. This is how you do it for 2-by-2 matrices; in general it can become even more complicated. But in this case, I get this times that, which shows up as (lambda - m1) times (lambda - m4), and then I subtract (-m2) times (-m3), which shows up like this. So this is the determinant of this 2-by-2 M matrix. Okay. We need to set this determinant equal to zero, so, carrying out the multiplications, we have a second-order equation that we have to solve for lambda in order to find the eigenvalues. Okay, let's try that; we have this equation. The way we solve second-order equations, well, there are formulas for this, so it is possible to do it. In this case it turns out that lambda is this rather annoying-looking expression here, but this is what the eigenvalues, the two eigenvalues of this 2-by-2 matrix, would be. But it is annoying.
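Annoying by hand, but mechanical in code. As a sketch (the helper names here are mine, not anything from the lecture): for a 2-by-2 matrix the characteristic polynomial is lambda^2 - (m1 + m4) lambda + (m1 m4 - m2 m3), and its roots come from the quadratic formula.

```python
import cmath

# Characteristic polynomial of M = [[m1, m2], [m3, m4]]:
# det(lambda*I - M) = lambda^2 - (m1 + m4)*lambda + (m1*m4 - m2*m3).
def char_poly_2x2(M):
    (m1, m2), (m3, m4) = M
    return (-(m1 + m4), m1 * m4 - m2 * m3)  # (b, c) in lambda^2 + b*lambda + c

# The "annoying expression": the quadratic formula, with cmath.sqrt
# so complex eigenvalues are handled as well.
def eigenvalues_2x2(M):
    b, c = char_poly_2x2(M)
    disc = cmath.sqrt(b * b - 4 * c)
    return ((-b + disc) / 2, (-b - disc) / 2)

print(eigenvalues_2x2([[2.0, 1.0], [1.0, 2.0]]))  # ((3+0j), (1+0j))
```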

I really don't want to do this. So the question is: is there an easier way of making the eigenvalues be what we would like them to be? It turns out that the answer is yes. There is something called the fundamental theorem of algebra, and this fancy-looking, or fancy-sounding, theorem says that if I have a polynomial, the roots of that polynomial are determined by the coefficients. Which means that, you know what? I actually don't have to solve this equation. Here I have the coefficient in front of lambda, and here I have the coefficient that isn't in front of any lambda, and those coefficients alone are enough to implicitly but completely determine the eigenvalues. So what we're going to do is we're actually not going to solve this. We're just going to stop here and say, fine, let's start massaging the coefficients directly.

So if we go back to our point mass again, I pick u as -Kx. Then I get x dot is (A - BK)x. We've seen this before; this is the closed-loop dynamics. And in particular, if I plug in what K is, I get A - BK being these two matrices here. And if I compute that, I get 0, 1, -k1, -k2. And I encourage all of you to perform this multiplication at home, just to make sure that you trust that this is indeed the case. Now let's compute the eigenvalues, or at least the coefficients, in this thing called the characteristic equation.
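That multiplication can also be checked with a short script; here A and B are the point-mass (double integrator) matrices from the previous module, and the function name is mine:

```python
# A - B*K for the point mass: A = [[0, 1], [0, 0]], B = [[0], [1]],
# K = [k1, k2], so B*K = [[0, 0], [k1, k2]].
def closed_loop_matrix(k1, k2):
    A = [[0.0, 1.0], [0.0, 0.0]]
    BK = [[0.0 * k1, 0.0 * k2], [1.0 * k1, 1.0 * k2]]
    return [[A[i][j] - BK[i][j] for j in range(2)] for i in range(2)]

print(closed_loop_matrix(1.0, 2.0))  # [[0.0, 1.0], [-1.0, -2.0]]
```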

So, chi of A - BK of lambda is the determinant of lambda times the identity minus this matrix; what I have here, of course, is lambda I - (A - BK). And if you compute this determinant, you get lambda^2 + k2 lambda + k1. That's not so bad. And the neat thing here is that, again, all we care about are these coefficients, the things that determine what the roots are without us actually having to compute the roots.
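As a quick numerical sanity check of that claim: for the closed-loop matrix [[0, 1], [-k1, -k2]], the trace is -k2 and the determinant is k1, which is exactly where the coefficients lambda^2 + k2 lambda + k1 come from. A small sketch (names are mine):

```python
# Coefficients of det(lambda*I - (A - BK)) for the closed-loop
# matrix [[0, 1], [-k1, -k2]]: the lambda coefficient is -trace = k2,
# and the constant coefficient is the determinant = k1.
def closed_loop_coeffs(k1, k2):
    m = [[0.0, 1.0], [-k1, -k2]]
    trace = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return (-trace, det)

print(closed_loop_coeffs(1.0, 2.0))  # (2.0, 1.0), i.e. (k2, k1)
```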

Now, why does that help us? Well, what we're going to do now is pick our favorite eigenvalues in the whole world. We're going to pick the eigenvalues that we would like the system to have, and if we somehow magically manage to make the closed-loop system have these eigenvalues, then the characteristic equation would be lambda minus lambda 1 times lambda minus lambda 2, because the characteristic equation has to be 0 when lambda 1 is a root. And if I plug in lambda 1, I get 0 here; similarly, if I plug in lambda 2, I get 0 here. So what I have is a product of lambda minus these desired favorite eigenvalues in the whole world.

So let's do that. So, for the robot, or the point mass, I'm going to pick both eigenvalues at -1. I know that they need to have negative real part, and -1 is particularly simple. Because then I get this phi of lambda, which is the desired characteristic equation, not the actual characteristic equation but the desired one. It's just (lambda + 1) times (lambda + 1) or, if I carry out this multiplication, lambda^2 + 2 lambda + 1. Now, what we need to do is simply line up these coefficients with the actual coefficients that we have.

So if I do that, I see: this is the characteristic equation, and this is what I would like it to look like. Well, here are the coefficients in front of lambda, and here are the coefficients that are hanging out by themselves. All we do now is simply line these up. So k2 has to be equal to 2, k1 has to be equal to 1 and, voila, I've actually designed the K matrix that I need.
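That coefficient matching generalizes beyond this one choice of eigenvalues: for desired eigenvalues lambda 1 and lambda 2, the desired polynomial is lambda^2 - (lambda1 + lambda2) lambda + lambda1 lambda2, so lining it up with lambda^2 + k2 lambda + k1 gives the gains directly. A sketch (function name is mine):

```python
# Pole placement for the point mass: match
#   lambda^2 + k2*lambda + k1                    (actual closed loop)
# against
#   (lambda - l1)(lambda - l2)
#     = lambda^2 - (l1 + l2)*lambda + l1*l2      (desired polynomial).
def place_poles(l1, l2):
    k2 = -(l1 + l2)  # coefficient in front of lambda
    k1 = l1 * l2     # constant coefficient
    return k1, k2

print(place_poles(-1.0, -1.0))  # (1.0, 2.0): the gains found in the lecture
```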

So now all I do is plug this back into my original system, which is x dot is Ax + Bu, but I've closed the loop now with u being -Kx, and I have successfully stabilized the system by placing the eigenvalues exactly where I would like them to be.
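To see that the placed poles really do stabilize the point mass, a forward-Euler simulation of x dot = (A - BK)x with k1 = 1 and k2 = 2 is a reasonable sanity check; the step size and initial condition below are arbitrary choices of mine.

```python
# Forward-Euler simulation of the closed loop x_dot = (A - BK) x,
# with A - BK = [[0, 1], [-1, -2]] (k1 = 1, k2 = 2, both poles at -1).
def simulate(x0, dt=0.01, steps=1000):
    p, v = x0  # position and velocity
    for _ in range(steps):
        dp = v                    # row [0, 1] of the closed-loop matrix
        dv = -1.0 * p - 2.0 * v   # row [-k1, -k2]
        p, v = p + dt * dp, v + dt * dv
    return p, v

p, v = simulate((1.0, 0.0))
print(abs(p) < 0.01 and abs(v) < 0.01)  # state has decayed toward the origin
```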