Hi everyone, welcome back to Operations Research. This is week 6, which is about theory, and today we continue with nonlinear programming. For nonlinear programming, as we know, we basically focus on convex programs, because a general nonlinear program is simply too hard to solve. Last time we did exactly that, but there were no constraints; we were not yet able to deal with constraints. We were busy talking about the shape of your objective function and the shape of your feasible region. Today we need to take constraints into account, and once we have constraints, things become much more difficult.

For example, suppose you are solving a problem like this. Even if it is just one-dimensional, adding a single constraint already creates a lot of problems. If there is no constraint, all you need to do is apply the first-order condition: find the point where the first derivative is zero, and you are done. But if you have a constraint, for example one saying that your feasible region is only this part, then the optimal solution of your minimization problem may not satisfy the first-order condition, because at that particular point your derivative, your slope, is not zero. That means we need to somehow generalize the idea of the first-order condition.

In the one-dimensional case this is not so difficult: all you need to do is distinguish between two kinds of points, interior points and boundary points. An interior point lies inside the feasible region, not on the boundary; a boundary point lies on the boundary. If you are a fan of real analysis, you have probably heard of concepts like an open ball, but we will not go that far in this course.
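To make this concrete, here is a minimal sketch; the particular function and constraint are my own illustration, not from the lecture. The unconstrained first-order condition points to an infeasible point, while at the actual constrained optimum the slope is not zero.

```python
# Illustrative example (mine, not from the lecture):
# minimize f(x) = (x - 2)^2 subject to x <= 1.

def f(x):
    return (x - 2.0) ** 2

def f_prime(x):
    return 2.0 * (x - 2.0)

# The first-order condition f'(x) = 0 gives x = 2, which violates x <= 1.
assert f_prime(2.0) == 0.0

# Over the feasible region x <= 1, f is decreasing as x grows toward 1,
# so the constrained optimum is the boundary point x = 1.
x_star = 1.0
print(f_prime(x_star))  # -2.0: the slope at the constrained optimum is nonzero
```

This is exactly the situation described above: the minimizer sits on the boundary, a constraint is binding there, and the plain first-order condition fails to identify it.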
We just consider fairly typical feasible regions and rely on intuition about what is the boundary and what is the interior; that is enough for this course. The point is that at a boundary point, some constraint is binding there; that is the idea of a boundary. If you are at an interior point, the first-order condition still works. If you are at a boundary point, the first-order condition is no longer good: it is neither sufficient nor necessary, so we need something else.

If you are able to find all the candidate optimal solutions, some of them in the interior and some of them on the boundary, then you are pretty much done: compare the candidates, see which one is the winner, and you are good. That is the one-dimensional case. But if you move from the one-dimensional case to the multi-dimensional case, the problem becomes too complicated, because then you have all kinds of constraints forming a weirdly shaped region, and there are too many boundary points with different characterizations. It is impossible for us to deal with all of them one by one to see which one is better.

The heart of today's lecture is a smart technique called Lagrange relaxation, which moves your constraints into your objective function, making your constrained problem unconstrained in a clever way. This will help us introduce the so-called KKT conditions. The KKT conditions are the constrained version of the first-order condition in the n-dimensional case. We will give you a formal introduction to them, show you how Lagrange relaxation and the KKT conditions help you analyze general constrained multi-dimensional optimization problems with nonlinear functions, and show you some applications. That is the plan for today.
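As a preview of where this is heading, here is a hedged sketch of the Lagrange-relaxation idea; the example problem is my own choice, not one from the lecture. For min (x - 2)^2 subject to x - 1 <= 0, we move the constraint into the objective with a multiplier lam >= 0, forming L(x, lam) = (x - 2)^2 + lam * (x - 1), and then check the KKT conditions: stationarity of L in x, primal feasibility, dual feasibility, and complementary slackness.

```python
# Hedged sketch (my own example problem):
#     min (x - 2)^2   subject to   x - 1 <= 0
# Lagrangian: L(x, lam) = (x - 2)^2 + lam * (x - 1), with lam >= 0.

def kkt_residuals(x, lam):
    """Return the four KKT residuals; all zero at a KKT point."""
    stationarity = 2.0 * (x - 2.0) + lam   # dL/dx = 0
    feasibility = max(x - 1.0, 0.0)        # x - 1 <= 0
    dual_feasibility = max(-lam, 0.0)      # lam >= 0
    comp_slackness = lam * (x - 1.0)       # lam * (x - 1) = 0
    return stationarity, feasibility, dual_feasibility, comp_slackness

# Guess the constraint is binding: x = 1; stationarity then forces lam = 2.
print(kkt_residuals(1.0, 2.0))  # (0.0, 0.0, 0.0, 0.0): a KKT point

# The unconstrained stationary point x = 2 (with lam = 0) is not feasible:
print(kkt_residuals(2.0, 0.0))  # feasibility residual is 1.0 > 0
```

Notice how the multiplier lam = 2 "prices" the binding constraint: it is exactly the quantity that restores stationarity at the boundary point where the raw derivative is nonzero.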