Welcome to the module summary of module four of course number three. Jimmy, what's happening in the story? >> Well, after Ne Zha, Zhuge Liang had to fix another problem that was caused by him. And this time, it's to help Wukong borrow the magical fan from Princess Ironfan to fan out the fire on Flaming Mountain, so that his master could get past the mountain. Well, when I say borrow, actually he's going to have to have several big fights with the princess. Eventually, he managed to get the magical fan and then he was able to put out all the spot fires on Flaming Mountain. Now on the technical side, I realise that we are teaching Wukong an entirely new way of searching to solve the problems that he encountered. >> That's right, we're looking now at an entirely different sort of solving technology for discrete optimization problems, and that's local search. So local search is all about having a current state, where we are now, and moving around. So we have these concepts of the move: I'm in a state now, and I move to another state. I have a neighbourhood, that's the set of places I could move to from this state. And then we looked at basic, greedy local search, where I just look around all of my neighbours and move to any one which is better. Or I can do steepest descent neighbourhood search, where I look at all the states in my neighbourhood, find the one which gives me the best possible value, and go there. >> Well, the kind of local search algorithms that you're talking about are very different from the complete search strategies that we have been talking about, right? >> Absolutely, a local search algorithm is never really going to prove that it's found the best possible solution. It's only going to find you as good a solution as it can with the resources you give it. >> Mm-hm, so in local search we are essentially jumping from one point to another on the search space.
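The move, neighbourhood, and greedy-versus-steepest-descent ideas discussed above can be sketched in a few lines of Python. The function names and the toy problem here are illustrative inventions, not from the course:

```python
import random

def local_search(neighbours, score, start, steps=1000, steepest=True, seed=0):
    """Minimise `score` by repeatedly moving to a better neighbouring state.

    steepest=True  -> steepest descent: take the best neighbour.
    steepest=False -> greedy: take any improving neighbour at random.
    """
    rng = random.Random(seed)
    state = start
    for _ in range(steps):
        nbrs = neighbours(state)
        if steepest:
            best = min(nbrs, key=score)
            if score(best) >= score(state):
                break                      # local minimum: no neighbour improves
            state = best
        else:
            better = [n for n in nbrs if score(n) < score(state)]
            if not better:
                break                      # local minimum
            state = rng.choice(better)
    return state

# Toy problem: minimise |x - 7| over the integers; neighbours are x-1 and x+1.
result = local_search(lambda x: [x - 1, x + 1], lambda x: abs(x - 7), start=0)
print(result)  # 7
```

On this convex toy landscape both strategies reach the optimum; the difference only shows up on landscapes with many local minima, which is what the rest of the module is about.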
Whereas in complete search, we are kind of growing a partial assignment. >> Yes, we're building out towards finding a value for every variable, quite a different way of thinking about search. And local search really has two main problems that we have to think about. There's constraints, so how do we satisfy those constraints? And there's local minima, so when I'm stuck in a place where all of my neighbours look worse than me, what should I do? >> So how do we deal with constraints? >> So there's two ways of dealing with constraints. We can deal with them implicitly, so if they're simple enough we can make sure that every move we do maintains those constraints. >> Okay, is it possible to deal with constraints like this all the time? >> No, because often constraints will be too complicated, and we won't be able to ensure that every move we do maintains them. And then what you're basically going to do is convert those constraints into penalties. So we're going to say, we're going to allow you to violate the constraints, but you're going to pay a penalty. And to get the best solution, you won't want to pay that penalty, and so eventually you'll satisfy the constraints. >> Mm-hm, so a big problem with using this penalty approach is to decide how big the penalty is for each of the constraints. >> It is, determining how much penalty each constraint needs is a difficult problem, but we'll look at methods to do that. But the first thing we looked at was three different ways of escaping local minima. >> Wow, my understanding is that every successful local search algorithm must have an effective way of escaping from local minima. >> At least one. >> At least one, so- >> So the simplest one, of course, is just restart: we're just going to start the search from the beginning, and hope we don't end up in the same spot. >> So this is stochastic in nature?
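The penalty idea described above can be made concrete with a tiny example. The objective, the constraint, and the weights here are all invented for illustration:

```python
def penalised_score(x, y, weight):
    """Objective plus a weighted penalty for violating x + y == 10."""
    objective = x ** 2 + y ** 2        # what we actually want to minimise
    violation = abs(x + y - 10)        # how badly the constraint is broken
    return objective + weight * violation

# With a small weight, the infeasible point (0, 0) scores better than the
# feasible point (5, 5); with a large weight, feasibility wins.
print(penalised_score(0, 0, 1), penalised_score(5, 5, 1))    # 10 50
print(penalised_score(0, 0, 10), penalised_score(5, 5, 10))  # 100 50
```

This shows why choosing the weight matters: too small, and the search happily violates the constraint, which is exactly the problem the Lagrange multiplier method later addresses.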
>> Exactly, almost all local search methods are random in nature; they need to be, to do that. Okay, the second possibility is simulated annealing, which is basically a way of saying, sometimes we're allowed to jump up. And so that means that if we're stuck in a local minimum, when we're allowed to jump up, we can move to a different place and then hopefully find a better solution. >> But the probability of this so-called sometimes will decrease gradually. >> Absolutely, as we go down through the search, we want to have less and less chance of jumping up. Because we want to be more and more sure that we end up in a good local minimum, as close as possible to the global minimum. >> And the last way of escaping from a local minimum is actually to avoid going into the local minimum, right, by using a tabu list. >> Yes, or you can think about it as filling up the local minimum. Because I'm in the local minimum, it becomes tabu, so I won't be able to go back into it. So I'll keep moving around, and hopefully that gives me enough freedom to push me out of the local minimum and find somewhere else. And it's particularly useful if we have these sort of plateaus, where we have lots and lots of solutions of equal value. We need to be able to move around them without just going back and forth and back and forth. >> And tabu lists can really prevent cycling. >> Exactly. >> So we can go back to talking about constraints. >> Exactly, so we said already that it was difficult to find the right penalty values to use for those constraints. So Lagrange multipliers give a principled way of looking at what's the right penalty for each constraint. >> So the idea of this method is when we have an objective function to optimize, we do not just optimize this objective function. We combine it with the penalties associated with each of the constraints in the problem as well. So we have a new landscape, a new objective function, to optimize.
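The "sometimes we're allowed to jump up" rule of simulated annealing is commonly implemented by accepting a worsening move with probability exp(-delta/T), where the temperature T shrinks as the search goes on. This is a minimal sketch on an invented bumpy function; the parameter values and helper names are arbitrary assumptions:

```python
import math
import random

def anneal(neighbour, score, start, t0=10.0, cooling=0.95, steps=500, seed=1):
    """Simulated annealing: accept worsening moves with probability
    exp(-delta / T), so uphill jumps become rarer as T cools."""
    rng = random.Random(seed)
    state, t = start, t0
    best = state
    for _ in range(steps):
        cand = neighbour(state, rng)
        delta = score(cand) - score(state)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand                 # downhill moves always accepted
        if score(state) < score(best):
            best = state                 # remember the best state seen
        t *= cooling                     # less and less chance of jumping up
    return best

# Toy landscape on the integers 0..20 with bumps that create local minima.
bumpy = lambda x: (x - 13) ** 2 + (8 if x % 5 == 0 else 0)
step = lambda x, rng: max(0, min(20, x + rng.choice([-1, 1])))
print(anneal(step, bumpy, start=0))
```

Note the greedy search from the first sketch would get stuck at x = 9 on this landscape (both neighbours score worse); the early high-temperature phase is what lets annealing jump past that bump.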
>> Exactly, and the key thing is that whenever we find a constraint is difficult to satisfy, its multiplier, or its penalty, will get higher and higher. Which will change the landscape to make it more and more unattractive to not satisfy that constraint. And so we'll end up forcing the search to satisfy that constraint. >> So in some sense, we do not have to set the penalties in advance. We learn the importance of each of the constraints gradually during search. >> Absolutely right. Then the very last thing is sort of a complete jump back to things we've seen before. So now we look at large neighbourhood search, where in a local search we're going to look at a huge number of possibilities in our neighbourhood. And finding the best one of those is a problem that we've already looked at before. >> Wow, previously, we were looking at small neighbourhoods. So that we could afford to examine each of the neighbouring points, one by one, look at the objective value, and then select the best one, right? But when we have a large neighbourhood, there are too many search points. We can't afford to do that, so how can we find the best neighbour? >> Well, this is a discrete optimization problem, which this course is all about. So we can use all of those complete methods we've already looked at for solving this large neighbourhood search problem. And in fact, that combination of using a complete method on a large neighbourhood is a very, very powerful approach to using complete methods for problems where they don't scale. >> Exactly, and we can also see the power of such a method when it's combined with restarts as well. >> Absolutely, so restarts in large neighbourhood search give us a very, very powerful way of tackling all kinds of problems. >> How about our final workshop? >> So our final workshop, well, you can tell us about the story, Jimmy. >> Yeah, it's about Wukong as well, he has to fan out the flames from Flaming Mountain.
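Destroy-and-repair is the usual shape of large neighbourhood search: un-assign part of the current solution, then use a complete method to re-optimise just that part. Here is a minimal sketch on a number-partitioning toy, where the "complete method" is brute-force enumeration over the freed variables; all names and parameters are invented for illustration:

```python
import itertools
import random

def lns_partition(nums, rounds=100, free_k=4, seed=0):
    """Split `nums` into two groups with sums as equal as possible.

    Each round: destroy = free a few items, repair = complete search
    over every reassignment of just those items (the large neighbourhood).
    """
    rng = random.Random(seed)
    side = [rng.choice([0, 1]) for _ in nums]          # current assignment

    def imbalance(assign):
        s0 = sum(n for n, a in zip(nums, assign) if a == 0)
        return abs(sum(nums) - 2 * s0)                 # |sum(side 1) - sum(side 0)|

    for _ in range(rounds):
        freed = rng.sample(range(len(nums)), free_k)   # destroy step
        best = list(side)
        for combo in itertools.product([0, 1], repeat=free_k):
            cand = list(side)                          # repair: exhaustive search
            for i, a in zip(freed, combo):
                cand[i] = a
            if imbalance(cand) < imbalance(best):
                best = cand
        side = best
    return side, imbalance(side)

nums = [8, 7, 6, 5, 4]                                 # total 30, perfect split is 15/15
assignment, gap = lns_partition(nums)
print(gap)  # 0
```

Each repair step here examines 2^4 = 16 candidate assignments at once, which is the point: the "neighbourhood" is far too big to scan one move at a time, but a complete method searches it exhaustively.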
But then, there is a village nearby, so he has to ask for help from Ne Zha to send his magical birds up to go and inform the villagers. So that they could get prepared to fight off possible fires that fly over to the villages. But it turns out that this becomes a very interesting, complex, and difficult routing problem. >> Absolutely, and routing problems are one of the most common uses of large neighbourhood search, one of the most powerful uses of large neighbourhood search. So you're going to be building a routing problem, a kind of interesting routing problem. And then we want you to explore how best to search it, making use of restarts, large neighbourhood search, search strategies, all of the tricks that we've seen through this course. >> For a change, our learners will have to build a model again. >> Exactly, well, it's the last workshop, we have to make you work. And that brings us to the end of- >> Peter, not the end yet. >> No? >> I realize that today, you're wearing a different t-shirt with a character that has never appeared in one of our modules. >> So what can we tell our viewers about this character? >> Well, they will see. >> [LAUGH]