Wukong eventually found the entrance to the cave, where he met Princess Iron Fan. He asked the princess to lend him her fan, but she refused to hand it over because of a previous conflict between Wukong and her child. After several failed attempts at persuasion, Wukong became frustrated and angry, and decided to fight her for the fan. The princess, however, was so powerful that Wukong could not defeat her. Knowing this, he cut off some of his hair and magically turned it into six monkey fighters to assist him in the battle, but he was still unable to defeat the princess. Watching from afar, Zhuge Liang observed that a well-designed lineup of monkey fighters could help, and he summoned the dragon of dimensions for assistance from the professors.

So, Wukong wasn't doing that well in the battle, and in order to build his monkey army into the most effective fighting force, he needed to work out a formation: how to organize his monkeys in a row. The formation is all about cooperation between adjacent monkeys. For example, the yellow monkey next to the brown monkey gives a positive contribution of four, whereas the orange monkey next to the brown monkey gives negative five. So we have a cooperation measurement between every pair of adjacent monkeys, collected in an overall cooperation matrix over the six different monkeys.

Okay, so this is our monkey formation problem. We're trying to line up a group of monkeys to maximize the total cooperation. We have this matrix of the cooperation between each pair of monkeys, and we have to put each monkey in a position, with every monkey in a different position. We're maximizing the sum, across positions, of the cooperation between the monkey in position i and the monkey in position i plus one. So, we have these six monkeys and the cooperation matrix between them: how can Wukong find a good order using local search?

This is the first time we've seen a local search problem with constraints, so we need to think about how we're going to handle constraints in local search. There are basically two methods. The first is to handle them implicitly: we ensure the initial state satisfies the constraints, and we make sure every move we take maintains them, so the constraints are always satisfied. That's good, because the constraints are taken care of, but it's not very easy and it's not very general: we have to design moves that maintain constraint satisfaction, and we have to find an initial state that satisfies the constraints in the first place. So we need very tailored neighborhoods which make sure the constraints remain satisfied. The other way is to treat the constraints as soft: constraint violations become penalties in the objective. The good thing about this is that it's always applicable; we can always take a problem, soften the constraints into penalties, and add them to the objective. The danger is that we may never end up satisfying the constraints. So, we're going to look at both of these methods.
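Before diving in, here's what the objective we're maximizing looks like as a minimal Python sketch. The function name, the dict-of-dicts layout, and the small example values are all illustrative, not from the lecture:

```python
# Minimal sketch of the monkey formation objective.
# coop[a][b] is the cooperation score when monkey a stands
# immediately to the left of monkey b.

def total_cooperation(order, coop):
    """Sum the cooperation between each pair of adjacent monkeys."""
    return sum(coop[order[i]][order[i + 1]] for i in range(len(order) - 1))

# Tiny made-up example with three monkeys (values invented for illustration):
coop = {
    "A": {"A": 0, "B": 1, "C": -2},
    "B": {"A": 1, "B": 0, "C": 3},
    "C": {"A": -2, "B": 3, "C": 0},
}
print(total_cooperation(["A", "B", "C"], coop))  # 1 + 3 = 4
```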
So, let's think about our monkey problem. There's only one constraint in this problem, and that is the alldifferent constraint over the monkeys. This is a very easy constraint to satisfy in the initial state: if we just pick a random order of the monkeys, clearly they're all in different positions, so the constraint holds initially. And there are some moves which very clearly maintain this constraint. If we take two monkeys' positions and swap them, the monkeys are still all different, because we never introduce two copies of the same monkey. So the neighborhood in this version of the local search problem is all possible swaps from the current state: we just swap two monkeys in the current state, that's a possible move, and it maintains the constraint at all times.

If we start off with the initial state A, B, C, D, E, F, we can work out the objective. There's a cooperation of one between these two, minus two between these two, and so on, for a total of minus six. Then we can think about swapping. If we swap the positions of C and F, the objective obviously only changes around F and C; in this case it goes up to four, so we keep that move, since it's improving. Then we pick another pair to swap, say D and A, and again we find that it improves the objective, so we keep it. Then we look at swapping F and B. That reduces the objective, so we reject that move.

Another way we could do this is steepest descent. What I was just doing was greedy local search: making a move, seeing if it was better, and keeping it if so. In steepest descent, instead, I look at all possible moves, which here are all possible swaps of two monkeys, and I take the best one. Here, the best one was swapping C and B, so that's what I do: I swap C and B, and my objective goes up to four. Then I look at all possible swaps from this position and find the best one is swapping F and C, so I swap them. Again, looking at all possible swaps from the new position, I find that swapping B and A gives me the best move, so I do that. Then I look at all possible swaps from this position, and in this case the best possible swap, swapping A and B, actually reduces the objective value, so I stop.

So, by the time we got to the end, the best move is a worsening one. If we took it, we'd go back to the previous state and then obviously get cyclic behavior, so that's a problem. If we look at the solution, it looks like that's the best we could do. But actually, we've only found a local maximum, because there's another solution, E, D, B, F, A, C, which has a better objective, 15. So one of the things we have to be careful about all the time in local search is that we're going to end up in local optima. There's no move from the current position which improves the objective, but that doesn't mean it's the optimal solution. It only means it's locally optimal: the best within the neighborhood it has.
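Here's a minimal sketch of that steepest descent over the swap neighborhood. It reuses the hypothetical `total_cooperation` helper from the earlier sketch; everything else is illustrative too:

```python
import itertools

def steepest_descent_swaps(order, coop):
    """Steepest descent with the swap neighborhood.

    Swapping two positions always preserves alldifferent, so the
    constraint is handled implicitly. Stops at a local maximum:
    a state where no single swap improves the objective.
    """
    order = list(order)
    best = total_cooperation(order, coop)
    while True:
        best_move, best_val = None, best
        # Evaluate every possible swap of two positions.
        for i, j in itertools.combinations(range(len(order)), 2):
            order[i], order[j] = order[j], order[i]
            val = total_cooperation(order, coop)
            order[i], order[j] = order[j], order[i]  # undo the trial swap
            if val > best_val:
                best_move, best_val = (i, j), val
        if best_move is None:
            return order, best  # no improving swap: local maximum
        i, j = best_move
        order[i], order[j] = order[j], order[i]
        best = best_val
```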
So, when we start handling constraints like this, the more we restrict the moves, the smaller and more rigid the neighborhood becomes, and the more chance we have of hitting local optima. The search space becomes less and less connected, because there are fewer possible moves we can make, and that means more local optima. There are other sources of local optima too, of course, because the search space is non-convex, so there will always be local optima. But when we handle constraints this way, by restricting the moves we can make, we should expect more of them. So we're going to have to combine this with effective techniques for escaping local optima. That's going to be true for any local search algorithm, but it's even more important if we handle our constraints in this way.

All right. The other way is to treat constraints as penalties, so they just become part of the objective. The simplest method is to pay a fixed penalty if a constraint is not satisfied; that's basically just counting the number of constraints that are not satisfied. A better method is to have a penalty multiplied by how far the constraint is from being satisfied, a kind of degree of violation. Think about alldifferent: we may not be able to satisfy the alldifferent constraint, but we can count the number of pairs of positions that take the same value. That tells us how many things would have to change before we could get to a solution of alldifferent. This gives us a finer-grained understanding of the constraint, and so it's likely to make the moves better. Even though I don't satisfy the constraint, I can still move closer to satisfying it, and that helps my search eventually get to the point where it finally does satisfy the constraint.

All right. So, let's try this penalty method with a steepest descent search. Starting from the same initial solution as before, we're going to use a neighborhood that changes one position. Changing one position is going to be penalized, because for a monkey to be in two places at once, it has to split into multiple parts, and there's a cost to that. So we'll charge a penalty of a thousand times the number of violations, the number of times we have to split a monkey. If we replace this monkey here by a copy of monkey A, then I have one violation, because A occurs twice. If I look at what happens to my objective, it's gone up, but the penalty has overwhelmed that, so overall the penalized objective has come down. And if we start from a state which is a solution, then changing one monkey definitely gives us two copies of some monkey, so every move violates the constraint and pays the splitting penalty, which means the start state must be a local maximum. The problem here is that the constraint penalty is too high: we can't move at all, because any move violates the constraint, and so we can't get out of the initial state. That's no good. All right, let's change the penalty to just one times the number of violations. Now I can actually make a move.
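Counting the degree of violation for alldifferent and folding it into the objective might look like the following small sketch, again building on the hypothetical `total_cooperation` helper. A single `weight` parameter covers both the thousand-times case and the one-times case:

```python
from collections import Counter

def alldifferent_violations(order):
    """Degree of violation: number of pairs of positions holding the same monkey."""
    return sum(c * (c - 1) // 2 for c in Counter(order).values())

def penalized_objective(order, coop, weight):
    """Cooperation objective minus weight times the alldifferent violation count."""
    return total_cooperation(order, coop) - weight * alldifferent_violations(order)
```

With `weight=1000`, any single-position change from a permutation creates a violation whose penalty dwarfs any cooperation gain, so the initial state is already a local maximum; with `weight=1`, improving moves open up again, as the walkthrough continues below.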
With the penalty at one, I change the third monkey to A. Obviously I get a violation of one, because one monkey is repeated, but I've actually improved my penalized objective, so I can keep this move. If I keep doing this, I can change another position to a duplicate as well, and then change this second one to D. I'm getting a better and better objective, even though my penalty keeps going up. Then I change this one to B, and now I get to this position here. I could reduce the penalty by removing one of the copies of A, but that move doesn't improve the penalized objective, so I stop. The problem you can see here is that the penalty was so low that the search wasn't really trying to satisfy the constraint; it was just trying to get the best objective it could. It's done a very good job on the objective part, but it wasn't really concentrating on the penalty. The penalty was too low to make the search actually try to find a solution to the problem. So, we haven't found a solution to our monkey ordering problem here.

So, we have this quandary when we treat constraints as penalties. If the penalties are too high, you can get stuck in local optima, since you just can't move without violating more constraints. And if the penalties are too low, the search can basically ignore the constraints and just look for a better objective value. So how do we handle that? Well, we're going to come back and look at more general ways of doing that later on.

So, we've seen two ways of handling constraints in local search. One is implicit constraints, which are always satisfied. This is only possible if we have reasonably simple constraints, where we can work out how to keep them satisfied at all times. It's going to cause the landscape to have more local optima, so we're going to have to be careful about escaping local optima. But the good thing is, you always have a solution at all points during the search, so you can stop at any time and at least return some solution. The other way is soft constraints as penalty functions. It's a very general approach: we can apply it to any problem with any set of constraints. But it may never find a solution, because it's only trading off penalty against objective, and we have to be careful to make that trade-off work so that, eventually, the search is driven to find a solution. And notice that we can use the two methods together: we can keep some of our constraints implicit and use penalties for the other constraints. So, if we added another constraint to our problem, we could still keep the alldifferent implicit and handle the new constraint as a penalty function.
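Putting the pieces together, a penalty version of steepest descent over the change-one-position neighborhood might look like the sketch below. It's an illustrative composition of the earlier hypothetical helpers, not the lecture's own code; trying `weight=1000` versus `weight=1` reproduces the two failure modes above:

```python
def steepest_descent_penalty(order, coop, monkeys, weight):
    """Steepest descent over the 'change one position' neighborhood.

    The alldifferent constraint is soft: each repeated pair of
    positions costs `weight` inside penalized_objective.
    """
    order = list(order)
    best = penalized_objective(order, coop, weight)
    while True:
        best_move, best_val = None, best
        # Try placing every monkey in every position.
        for pos in range(len(order)):
            for m in monkeys:
                if m == order[pos]:
                    continue
                old, order[pos] = order[pos], m
                val = penalized_objective(order, coop, weight)
                order[pos] = old  # undo the trial change
                if val > best_val:
                    best_move, best_val = (pos, m), val
        if best_move is None:
            # Stuck: the result may or may not satisfy alldifferent.
            return order, best
        pos, m = best_move
        order[pos] = m
        best = best_val
```

With a very high weight the search returns the initial permutation untouched, and with a very low weight it tends to return a high-cooperation lineup that still contains duplicates, which is exactly the quandary described above.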