0:00

So the outcome of the last handful of lectures was that we needed something richer to solve complex navigation problems, and that something was wall-following. In fact, we saw that we really had two behaviors: wall-following clockwise and wall-following counter-clockwise. And the way we could encode that was to take our avoid-obstacle behavior and simply rotate it: by -pi/2 for a clockwise negotiation of the obstacle, or by +pi/2 for a counter-clockwise negotiation of the obstacle.
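As a concrete illustration, here is a minimal sketch of that rotation (the names u_ao and rotate are mine, and the sample vector is a made-up placeholder; the lecture only specifies the +/-pi/2 rotation itself):

```python
import numpy as np

def rotate(v, theta):
    """Rotate a 2-D vector v by angle theta (counter-clockwise positive)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ v

# u_ao: avoid-obstacle direction, pointing from the obstacle towards the robot.
u_ao = np.array([1.0, 0.0])

u_fw_cw  = rotate(u_ao, -np.pi / 2)   # clockwise wall-following
u_fw_ccw = rotate(u_ao, +np.pi / 2)   # counter-clockwise wall-following
```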

What I want to do today is relate this wall-following behavior to the induced mode that we saw when we looked at Type 1 Zeno executions in hybrid systems. And the point of this is really for us to, first of all, trust that this is the right thing to do: trust that we understand alpha, and trust that we understand the plus-or-minus there. We used some kind of inner-product rule to determine whether we should go plus or minus, and now we're going to see that that is indeed the correct rule from a sliding-mode vantage point. However, we're going to do a little bit of math today, and at the end of it we're just going to return to this and say: this is still how we're going to implement it, because it is much simpler. But we need the math to get there and to trust that it's correct.

So here's the general setup. As before, we have an obstacle x_o. We have a goal, x_g, and we have x, which is the position of the robot. We also have a distance from the obstacle at which we're going to switch to avoid-obstacle as opposed to go-to-goal. And even though I'm doing everything with points now, this works for non-convex obstacles, for pretty much anything; we can write it down at least in this way. And with that distance being constant, let's say it's delta, I can simply say that the distance between x and x_o is equal to delta, that is, ||x - x_o|| = delta.

Now, what do I have? I have two different behaviors.

I have one behavior that wants to take me towards the goal, and I have another behavior that wants to push me away from the obstacle. And now I also have a switching surface, which I'm going to write as the distance between x and the obstacle, minus delta, being equal to zero. But I'm going to put squares in there, because I'm going to start taking derivatives, and taking the derivative of the square of a norm is easy, while taking the derivative of a norm is not so easy. And then I'm going to put a half in front, just to get rid of a coefficient later; this half doesn't change anything:

g(x) = (1/2) (||x - x_o||^2 - delta^2) = 0.

So now, what do I have? I have g.

On one side I have g positive, which means that you're further away from the obstacle than delta, which means you're out here, where you're going to use the go-to-goal behavior; so this is going to be my f1. And then I have g negative on the other side, which is inside, where I'm going to use the avoid-obstacle behavior; so that's going to be f2. So I have everything I need to be able to unleash our induced-mode piece of mathematics.

So, f1 is go-to-goal and f2 is avoid-obstacle.
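To make this concrete, here is a minimal sketch of the two behaviors and the switching surface. The gains c_gtg and c_ao and the positions are made-up placeholders; the lecture only fixes the structure:

```python
import numpy as np

# Hypothetical gains and geometry; the lecture leaves the values unspecified.
c_gtg, c_ao, delta = 1.0, 1.0, 0.5
x_o = np.array([2.0, 0.0])   # obstacle position
x_g = np.array([5.0, 0.0])   # goal position

def g(x):
    """Switching surface: zero exactly at distance delta from the obstacle."""
    return 0.5 * (np.dot(x - x_o, x - x_o) - delta**2)

def f1(x):
    """Go-to-goal behavior: drive towards x_g."""
    return c_gtg * (x_g - x)

def f2(x):
    """Avoid-obstacle behavior: drive away from x_o."""
    return c_ao * (x - x_o)

def behavior(x):
    """g > 0: outside the delta-ball, go to goal; g < 0: inside, avoid."""
    return f1(x) if g(x) > 0 else f2(x)
```

For instance, at x = (0, 0) the robot is outside the delta-ball, so behavior picks f1; just inside the ball it picks f2.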

Now, we need to connect these somehow with the induced mode. Well, here is the connection: we actually computed the induced mode. It was a convex combination of the two modes, or the two behaviors, and this convex combination was given by quite a mouthful of an expression. But let's actually try to compute it in this particular case, to see what the induced mode should be. Well, first of all, we need the Lie derivatives. So L_f2 g, if you remember, was (dg/dx) f2. We need the same thing for f1, and these Lie derivatives will show up repeatedly.

Well, first of all, the derivative of g with respect to x is simply (x - x_o) transpose. This is the reason why I put the squares in: they made everything easy. And I put the half there because it kills the extra 2 that would otherwise show up. It really doesn't matter; it just makes the math a little bit easier. This is again one of those things that I encourage you to compute yourselves, just to make sure that you actually trust that this is indeed the correct answer.

Well, now I can compute the Lie derivatives, right? I have L_f2 g; well, it's (dg/dx) f2. The dg/dx we just computed, and f2 is c_ao (x - x_o). (In a previous lecture I used a k with a prime index; here it's c.) Well, this gives (x - x_o) transpose times c_ao (x - x_o), but that's just c_ao times the norm of x - x_o squared. So this Lie derivative has a rather simple expression: L_f2 g = c_ao ||x - x_o||^2. Similarly, I can compute the other Lie derivative, and it's c_gtg times this thing that we now know is an inner product: L_f1 g = c_gtg (x - x_o)^T (x_g - x). So I have the two Lie derivatives that I actually need.
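As a quick sanity check on those two closed forms, here is a sketch that compares them against a finite-difference approximation of g-dot along each behavior (gains, positions, and helper names are made-up placeholders):

```python
import numpy as np

# Hypothetical gains and geometry, as before.
c_gtg, c_ao, delta = 1.0, 1.0, 0.5
x_o = np.array([2.0, 0.0])
x_g = np.array([5.0, 0.0])

g  = lambda x: 0.5 * (np.dot(x - x_o, x - x_o) - delta**2)
f1 = lambda x: c_gtg * (x_g - x)   # go-to-goal
f2 = lambda x: c_ao * (x - x_o)    # avoid-obstacle

def Lf1_g(x):
    """Closed form: c_gtg * (x - x_o)^T (x_g - x)."""
    return c_gtg * np.dot(x - x_o, x_g - x)

def Lf2_g(x):
    """Closed form: c_ao * ||x - x_o||^2."""
    return c_ao * np.dot(x - x_o, x - x_o)

def lie_numeric(f, x, h=1e-7):
    """Finite-difference check of (d/dt) g(x + t*f(x)) at t = 0."""
    return (g(x + h * f(x)) - g(x)) / h
```

At x = (1, 1), for instance, Lf1_g gives -5 and Lf2_g gives 2, and the finite-difference values agree up to the step size.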

So, with that, I could go ahead and compute the induced mode. For instance, take this little term here.

What is that? It is, let's see, (x - x_o) transpose times c_ao (x - x_o), that's the first term, minus c_gtg (x_g - x). So that's that term; we have an explicit expression for it. We can also go ahead and compute this one, right? For instance, it's c_ao (x - x_o) transpose times f1, and what was f1 again? It was c_gtg (x_g - x). So I can compute this, and similarly I can compute that. The point is, first of all, that everything is entirely computable here.
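"Entirely computable" can be made concrete with a short sketch. On the surface g = 0, the convex combination alpha*f1 + (1 - alpha)*f2 must make g-dot vanish, i.e. alpha*L_f1 g + (1 - alpha)*L_f2 g = 0, which pins down alpha. The numbers below are made-up placeholders, chosen so that sliding actually occurs:

```python
import numpy as np

# Hypothetical setup: obstacle at the origin, delta = 1, goal placed
# behind the robot so that the two behaviors oppose each other.
c_gtg, c_ao, delta = 1.0, 1.0, 1.0
x_o = np.array([0.0, 0.0])
x_g = np.array([-2.0, 1.0])

f1 = lambda x: c_gtg * (x_g - x)                    # go-to-goal
f2 = lambda x: c_ao * (x - x_o)                     # avoid-obstacle
Lf1_g = lambda x: c_gtg * np.dot(x - x_o, x_g - x)  # Lie derivative along f1
Lf2_g = lambda x: c_ao * np.dot(x - x_o, x - x_o)   # Lie derivative along f2

def induced_mode(x):
    """Convex combination alpha*f1 + (1 - alpha)*f2, with alpha chosen so
    that g-dot = alpha*Lf1_g + (1 - alpha)*Lf2_g = 0 on the surface."""
    alpha = Lf2_g(x) / (Lf2_g(x) - Lf1_g(x))
    return alpha * f1(x) + (1.0 - alpha) * f2(x)

x = np.array([1.0, 0.0])   # a point on the switching surface: ||x - x_o|| = delta
v = induced_mode(x)
# v is orthogonal to (x - x_o): the robot slides along the obstacle boundary.
```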

The other point is, you know what, this is a little bit of a mess. It's a mess to write it down. But what we've actually done is recover the same controller, because what we're doing is, again, sliding. The only difference is that if you write it in this form, you automatically get alpha to pop out, because you get a certain scaling, and you get the plus-or-minus flip. So you actually get the flip for free: you're told which direction to go in and which alpha to use. And the nice thing is that the flip direction you get from computing the induced mode is actually the same as the one you get from taking the inner product of u follow-wall-counter-clockwise with u go-to-goal. If this inner product is positive, we go counter-clockwise, and otherwise we go

clockwise. So, the nice thing is, we have actually, in a mathematically rather involved way, arrived at the same expression, with the difference being that the plus or the minus is automatically determined for us, and the scaling factors are automatically determined for us. In practice, though, we're not going to do this, because it is too messy. Instead, we're just going to pick some alpha that we feel good about (I always pick alpha = 1, because I'm lazy) and then use the inner-product test to figure out whether we should go clockwise or counter-clockwise. So that's, practically, what we're going to do.

Now, that's not enough. Let's say that

I'm going towards the goal here. Here I want to go in this direction, and avoid-obstacle wants to take me there, so sliding is immediately going to tell me that I'm going to start moving up like this.
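The practical recipe just described (alpha = 1 plus the inner-product test) can be sketched as follows; the function and variable names are mine, not the lecture's:

```python
import numpy as np

def rotate(v, theta):
    """Rotate a 2-D vector by theta (counter-clockwise positive)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

def follow_wall(u_gtg, u_ao, alpha=1.0):
    """Pick the wall-following direction that agrees with go-to-goal.

    u_gtg: go-to-goal direction; u_ao: avoid-obstacle direction.
    The counter-clockwise candidate is u_ao rotated by +pi/2; keep it
    if its inner product with u_gtg is positive, else go clockwise.
    """
    u_ccw = alpha * rotate(u_ao, +np.pi / 2)
    u_cw  = alpha * rotate(u_ao, -np.pi / 2)
    return u_ccw if np.dot(u_gtg, u_ccw) > 0 else u_cw
```

For example, with u_ao = (1, 0) and the goal straight "up" at u_gtg = (0, 1), the test picks the counter-clockwise direction (0, 1).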

Well, you know what? This was all good and dandy, but if I'm simply looking at the sliding rule, then here, all of a sudden, I'm pointing in both directions, and sliding is going to tell me to stop. So what I need to do is just keep following the wall. But then I might follow the wall for a long time, around and around and around, and maybe here is the right time to stop following the wall.

The question that we really need to answer now, given that we know follow-wall is the right thing to do, we know which direction to go in, and we know how to scale it (even though we're just going to scale it by one, because it really doesn't matter), is: when do we actually stop this sliding, or follow-wall? Well, that turns out not to be so easy, and in fact there are multiple ways of answering it. And that is precisely the topic of the next lecture.
