Now let's talk a little bit about control. Again, I'm going to focus on the vertical direction, so let's think about controlling height. What you would like to do is to drive the robot to a desired vertical position, either up or down. Let's use x to measure the vertical displacement. Clearly, the acceleration is given by the second derivative of position. If you look on the left-hand side, you'll see the sum of the forces. Let's call u the sum of the forces divided by the mass. So you now have a very simple second-order differential equation with an input u and a variable x, such that u is equal to the second derivative of x. Our goal in controlling this vehicle is to determine the function u such that the vehicle goes to the desired position x. So here is the control problem. The system we have is a very simple one: a second-order linear system. You're trying to figure out what function of time, u(t), drives x to a desired position, x desired. If you have a desired trajectory, in other words if x desired is a function of time, you want to synthesize the control input u(t) that allows the vehicle to follow the desired trajectory. In order to do that, let's define an error. The error function is essentially the difference between the desired trajectory and the actual trajectory. So the larger the error, obviously, the further the actual trajectory deviates from the desired trajectory. What you'd like to do is take this error and drive it to zero. More specifically, we want this error to go exponentially to zero. In other words, we want to find u(t) such that the error function satisfies a second-order differential equation. Why this differential equation? Well, in this differential equation, there are two unknowns, Kp and Kv. If I select appropriate values of Kp and Kv, more specifically, if I ensure that these values are positive, I can guarantee that this error will go exponentially to zero.
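As a concrete sketch of that last claim (this code is not from the lecture; the gain values and the Euler integration scheme are illustrative assumptions), here is a short Python simulation of the error dynamics e'' + Kv·e' + Kp·e = 0, showing that positive gains drive the error toward zero:

```python
# Sketch: simulate the error dynamics  e'' + Kv*e' + Kp*e = 0
# with simple Euler integration. Gains and step size are illustrative.
def simulate_error(kp, kv, e0=1.0, edot0=0.0, dt=0.001, t_final=5.0):
    e, edot = e0, edot0
    for _ in range(int(t_final / dt)):
        eddot = -kv * edot - kp * e   # rearranged: e'' = -Kv*e' - Kp*e
        edot += eddot * dt
        e += edot * dt
    return e

# With both gains positive, the error decays exponentially to zero.
print(abs(simulate_error(kp=10.0, kv=5.0)))   # a very small residual
# With a negative gain, the error grows instead of decaying.
print(abs(simulate_error(kp=-1.0, kv=1.0)))   # a large value
```

The same loop can be reused to experiment with how different Kp and Kv values change the decay rate.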
The control input that achieves that is given by this very simple equation. Again, the only reason I'm pulling out this control equation is because I want the error to go exponentially to zero, and that will ensure that x tends to x desired. There are two variables in this equation: one is K sub p, and the other is K sub v. You'll see that K sub p multiplies the error, adding the error times Kp to the control function. K sub v multiplies the derivative of the error, adding that to the control function. So one is called the proportional gain, and the other is called the derivative gain. In addition, you need some knowledge of how you want the trajectory to vary, so you're feeding forward the second derivative of the desired trajectory. This is often called the feedforward term. And this completes your control law, or the control equation, that you can then use to drive your motors. Here's a typical response of what the error might look like if you use such an approach to control. The error starts out being non-zero, but quickly settles down to zero. The error might overshoot, going from a positive value to a negative value, but eventually you're guaranteed that it will go to zero. To summarize, we've derived a very simple control law. It's called the proportional plus derivative control law, which has a very simple form. It has three terms: a feedforward term, a proportional term, and a derivative term. Each of these terms has a significance. The proportional term acts like a spring, or a capacitance in an electrical system. The higher the proportional gain, the more springy the system becomes and the more likely it is to overshoot. The higher the derivative gain, the more damped the system becomes. So this is like a viscous dashpot, or a resistance in an electrical system. By increasing the derivative gain, the system essentially gets damped, and you can make it overdamped so that it never overshoots the desired value.
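To make the control law concrete, here is a minimal Python sketch (not the course's own simulator) applying u = ẍ_des + Kv·(ẋ_des − ẋ) + Kp·(x_des − x) to the double integrator ẍ = u. For a constant setpoint the desired velocity and acceleration are zero, so the feedforward term drops out; gains are illustrative:

```python
# PD control of the double integrator x'' = u, tracking a constant setpoint.
def pd_track(kp, kv, x_des=1.0, dt=0.001, t_final=5.0):
    x, xdot = 0.0, 0.0
    for _ in range(int(t_final / dt)):
        # Constant setpoint: x_des' = x_des'' = 0, so no feedforward term.
        u = kp * (x_des - x) + kv * (0.0 - xdot)
        xdot += u * dt   # the plant is simply x'' = u
        x += xdot * dt
    return x

print(pd_track(kp=10.0, kv=5.0))   # settles very close to 1.0
```

For a time-varying x_des you would add the feedforward acceleration ẍ_des to u, exactly as the lecture's control equation indicates.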
In some cases, you might consider using a more sophisticated version of the proportional plus derivative control. Here you have an extra term, which is proportional to the integral of the error. You often do this when you don't know the model exactly. So, for instance, you might not know the mass, or there might be some wind resistance that you need to overcome, and you don't know a priori how much this wind resistance is. The last term essentially allows you to compensate for unknown effects caused by unknown quantities, unknown wind conditions, or disturbances. The downside of adding this additional term is that your differential equation now becomes a third-order differential equation. The reason is that you've added an integral into the mix, and if you want to eliminate the integral you have to differentiate the whole equation one more time, introducing a third derivative. However, the benefit is that this integral term will make the error go to zero eventually. So here are three examples of the system's behavior based on what values you pick for the proportional gain and the derivative gain. If both gains are positive, you're guaranteed stability, as you see on the left side. If K sub v is equal to 0, then you get marginal stability: while the system will not drift, you'll find it oscillates about the desired value. Of course, if one or the other gain is negative, then you essentially get an unstable system. You can similarly explore the effect of the integral gain, which we haven't done in this picture. I now want to turn to a complete simulation of the quadrotor. Because we'll now require three independent directions, we're going to introduce x, y, and z coordinates, and this time the z coordinate points up. So here's a simulation of the quadrotor, and again, we're using a proportional derivative control to control height. For the moment, we're ignoring the other variables.
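A quick Python sketch shows why the integral term matters (this is an illustration I've constructed, not the lecture's code; the constant disturbance stands in for an unknown mass error or steady wind, and the gains are illustrative). With PD control alone, a constant disturbance leaves a steady-state offset; adding the integral term removes it:

```python
# PID control of x'' = u + d, where d is an unknown constant disturbance.
def track(kp, kv, ki, dist=-2.0, x_des=1.0, dt=0.001, t_final=20.0):
    x, xdot, e_int = 0.0, 0.0, 0.0
    for _ in range(int(t_final / dt)):
        e = x_des - x
        e_int += e * dt                         # accumulate the error
        u = kp * e + kv * (0.0 - xdot) + ki * e_int
        xdot += (u + dist) * dt                 # disturbance enters the plant
        x += xdot * dt
    return x

# PD alone: equilibrium requires kp*e = -dist, so a residual error remains.
print(track(10.0, 5.0, ki=0.0))   # settles near 0.8, not 1.0
# PID: the integral term absorbs the disturbance and the error goes to zero.
print(track(10.0, 5.0, ki=3.0))   # settles near 1.0
```

Note that the integral gain must be kept modest: if it is too large relative to Kp·Kv, the third-order closed-loop system can itself become unstable.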
We're adding terms to make sure that the lateral displacement is zero, the roll and pitch stay zero, and the yaw stays zero. But we don't have to worry about that for the present moment; we're only considering the proportional derivative control of height. You can see that the error starts out being non-zero and then eventually settles down. There is an overshoot: the red curve overshoots the desired blue curve, but eventually settles back down so that the red and the blue curves coincide. And here's an experiment that demonstrates the same idea. The robot is asked to hover, and in this case someone displaces the robot from the nominal hover position. The robot fights to overcome the disturbance. Using a combination of proportional and derivative gains, the robot is able to compensate for that disturbance and then settle back down into the hover position. If you increase the value of Kp, as I said earlier, the system gets more springy. So you can see that the system now overshoots: the red curve overshoots the blue step and then settles down eventually. Again, in this video you'll see the same phenomenon. The robot is hovering, but when it's displaced, it overshoots as it recovers from the displaced position and then comes back to the original position. This happens because the proportional gain has been increased. If you turn down this proportional gain, you lose the overshoot, but instead you get a very soft response. And then, of course, if you turn up the derivative gain, the system becomes overdamped. The overshoot disappears, but the system also takes a longer time to get to the desired position. Once again, this video illustrates this: the vehicle is displaced, and it takes a longer time to get back to the original position. In order to get a feel for these different terms in the controller, here's a simple exercise. You have a simulator of the system that you just saw. Try to play around with the two gains, K sub p and K sub v.
The goal is simple: get a desired response in which the rise time, in other words the time taken to get to the desired position, is reasonably short, and the overshoot is kept below some modest value.
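If you want to experiment outside the course simulator, here is a hypothetical helper (my own sketch, not part of the assignment) that measures rise time and peak overshoot for a unit step applied to the double integrator under PD control, so you can see the trade-off the exercise is after:

```python
# Measure rise time (first time to reach 90% of the step) and peak
# overshoot for x'' = u under PD control, via Euler integration.
def step_metrics(kp, kv, dt=0.001, t_final=10.0):
    x, xdot = 0.0, 0.0
    rise_time, peak = None, 0.0
    for i in range(int(t_final / dt)):
        u = kp * (1.0 - x) - kv * xdot
        xdot += u * dt
        x += xdot * dt
        peak = max(peak, x)
        if rise_time is None and x >= 0.9:
            rise_time = i * dt
    overshoot = max(0.0, peak - 1.0)   # fraction above the setpoint
    return rise_time, overshoot

# Stiff, lightly damped gains: fast rise but substantial overshoot.
print(step_metrics(kp=100.0, kv=5.0))
# Higher derivative gain: overdamped, no overshoot, but a slower rise.
print(step_metrics(kp=100.0, kv=25.0))
```

Sweeping kp and kv through this function is one way to build intuition for gains that keep the rise time short while holding the overshoot below your chosen bound.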