0:02

So let's wrap up and sort of look at that, when we want to use local search for

MiniZinc.

So by far, the easiest way to use local search on a MiniZinc model is to use

large neighbourhood search.

So then we're basically reusing our constraint programming backends to solve

the problem, and the local search comes from the large neighbourhood moves.

So basically we just use CP search on small subproblems, but

we can actually use local search solvers directly in MiniZinc.

So there is a number that are supported.

So OSCAR-CBLS, which you can get from here.

And Yuck, these are both local search solvers that work directly from MiniZinc

and there's some hybrid solvers.

So iZplus.

So this is an internal commercial solver.

So you can't get hold of it, I'm afraid.

But it's a sort of combined CP and local search solver.

1:23

So there's basically three ways we can handle constraints in local search.

And really, the first two ways are what we called implicit before.

So we have implicit constraints which are always satisfied during search,

then we have one-way constraints, which are actually just another way to think

about implicit constraints, and this is where we use them to evaluate expressions.

So they're always satisfied during search.

Because basically, it's a functional constraint which just says,

if you fix these values,

you can work out the other value, which is the result of this functional evaluation.

So this is why it's one-way.

If you fix these values, you can always get this value.

But if you fix this value, you might not be able to get these values.

And finally, of course, there's the most general way which is

to treat the constraints as penalties, and the objective of

course can, and typically does, make use of these one-way constraints.

So the objective can also make use of this;

it's not just the constraints that can be defined with these one-way expressions.

So the handling of constraints also sort of defines the handling of

the variables.

So the variables in the implicit constraints,

these are your search variables for your local search.

And typically, they are going to appear in only one implicit constraint.

Because what we are going to have is a special method of moves for

that implicit constraint, and we're not going to allow them to

appear on the left-hand side of a one-way constraint.

They're not going to be defined by another expression.

So they're basically the actual things we're searching on, and

this restriction means that we can actually maintain this.

The tricky part remember with implicit constraints is that every move has to

maintain that the implicit constraints are satisfied.

So then we have the left-hand side variables in the one-way constraints, and

you can think of them as just expression evaluation.

So you'll have an expression over here, the left-hand side and

we'll just work out what value they take.

And obviously,

they can only appear on the left-hand side of one of these one-way constraints.

If they appear on the left-hand side of two,

then we have to make sure it gets the same value from both.

So there basically has to be a partial order amongst these variables, so

that we make sure that we evaluate these ones first.

We generate this one's value, and then we've got more variables we can evaluate

to get the left-hand side value of this.
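To make the evaluation order concrete, here's a small sketch in Python (all names here are invented for illustration, not from any actual solver): the one-way constraints are stored in topological order, so every left-hand-side variable is computed only after the variables it depends on.

```python
# Illustrative sketch of one-way constraint evaluation. Each one-way
# constraint functionally defines a left-hand-side variable from some input
# variables; processing them in topological order means every input value
# is already available when we need it.

def evaluate_one_way(search_values, one_way):
    """search_values: values of the search variables.
    one_way: topologically sorted list of (lhs, inputs, fn) triples."""
    values = dict(search_values)
    for lhs, inputs, fn in one_way:
        values[lhs] = fn(*(values[v] for v in inputs))
    return values

# e.g. i1 = x + y, then obj = 2 * i1: i1 must be evaluated before obj.
one_way = [
    ("i1", ("x", "y"), lambda x, y: x + y),
    ("obj", ("i1",), lambda i1: 2 * i1),
]
vals = evaluate_one_way({"x": 3, "y": 4}, one_way)
# vals["i1"] == 7, vals["obj"] == 14
```

Fixing x and y determines i1 and obj, but fixing obj would not determine x and y — which is exactly the one-way direction described above.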

And finally, we have our soft constraints in there.

The variables are unrestricted,

because we're just going to treat them as penalties.

3:44

So if we think about our monkey problem.

So our monkey problem actually flattens to something not so obvious.

So here are our monkeys, our six different monkeys, and

there's an alldifferent constraint on the monkeys.

But then there's an equation defining the objective.

So basically, this is summing up all of the costs of the monkeys that

are next to each other, and there's an index calculation.

So here, we're defining a variable which is this 6 times monkey of

1 plus monkey of 2 minus 6, which is actually the index calculation to look

up how much monkey one and monkey two cooperate.

So we define this variable I8 using this linear equation here and

then we actually use it.

So we look up the cooperation array.

And of course, remember, two-dimensional cooperation arrays become one-dimensional

in FlatZinc, to get the actual cost.

And that's the cost we are using here when we're summing up the cooperation costs to

get the most cooperation.

So you can see, these are examples of one way constraints.

Monkey one and monkey two are going to be decision variables that we will search on.

Whatever value they take, we can compute this I8.

Whatever value I8 takes, we can work out this I9, and

here's J11, which is again this term here.

That's the next cooperation value.

And again, there are in fact two equations,

but they're basically working out this expression.

So you can see here, this left-hand side has these

one-way constraints where we just evaluate to get an expression.

We evaluate all those expressions and then we can evaluate the objective from that.
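The index arithmetic above is just row-major flattening of the two-dimensional array. A sketch (the matrix contents are made up, not the actual cooperation data):

```python
# Sketch of the flattened index calculation: a 6x6 cooperation matrix becomes
# a one-dimensional array in FlatZinc, and the 1-based entry (m1, m2) is
# found at 1-based position 6*m1 + m2 - 6 in the flat array.

N = 6
coop_2d = [[10 * i + j for j in range(N)] for i in range(N)]  # invented data
coop_1d = [c for row in coop_2d for c in row]                 # row-major flattening

def flat_index(m1, m2):
    # 1-based (m1, m2) -> 1-based position in the flat array
    return N * m1 + m2 - N

# every 2-D lookup agrees with the flattened 1-D lookup
ok = all(coop_1d[flat_index(m1, m2) - 1] == coop_2d[m1 - 1][m2 - 1]
         for m1 in range(1, N + 1) for m2 in range(1, N + 1))
```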

So we saw that one way constraints arise, because flattening introduces these

intermediate variables and most of them are functionally defined by constraints.

So they are very naturally handled by these one-way constraints.

And in fact, the flattening adds defines_var annotations.

So we can keep track of what's going on as we introduce these new local variables, and

note that the graph of these one-way constraints has to be acyclic.

So basically, we have to have a tree of expressions which we keep

evaluating until we finally get to the top.

5:52

So a key component of a local search solver is actually

a specialized one-way constraint solver to be able to

evaluate these one-way constraints as quickly as possible.

And it basically allows it to efficiently re-solve the one-way constraints after

we've done a neighborhood move.

So we do a neighborhood move.

We evaluate these one-way constraints.

And eventually, what will pop out of this is the objective and any penalties.

Everything will be done by these one-way constraints including penalties and

that allows us to see whether this move should be taken or not.

So implicit constraints, of course are handled the same way as they were before.

We need a method to generate an initial solution.

Because remember, implicit constraints are always going to be satisfied.

So we need a method to generate an initial solution which actually satisfies

the constraint, and we need a neighborhood for that constraint which guarantees that

all the moves are still solutions of the implicit constraint.

And that's why we have this restriction that each variable

is part of at most one implicit constraint, and that means that that constraint

defines how to build an initial solution for

that variable, and how to do moves on that variable,

assuming it appears in an implicit constraint at all.

So if we think about alldifferent, this is the implicit constraint we saw before.

Obviously, initial random assignment respecting the alldifferent values will

give us a solution.

And then if we swap the values of two positions,

that'll make sure that we still satisfy the alldifferent.

Or we can reassign a position to an unused value.

Of course, this doesn't happen in the monkey example.

Because we had six positions and six values, so

we never had an unused value to use.

But in general, if we had six positions and eight different monkeys,

then there would be monkeys we weren't using which we could use to put into

a new position.

So that's one example, it's the example we saw used in the monkey example.
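A minimal sketch of this in Python (the helper names are invented, not from any real solver): the initial assignment is a random injection from positions to values, and both move kinds — swapping two positions, or reassigning a position to a currently unused value — keep alldifferent satisfied.

```python
# Sketch of alldifferent as an implicit constraint: build an initial
# assignment that already satisfies it, and only ever apply moves that
# preserve it (swap two positions, or reassign to an unused value).

import random

def initial(positions, values):
    # random injection from positions to values: alldifferent holds
    return dict(zip(positions, random.sample(values, len(positions))))

def swap_move(assignment, p, q):
    assignment[p], assignment[q] = assignment[q], assignment[p]

def reassign_move(assignment, p, values):
    unused = sorted(set(values) - set(assignment.values()))
    if unused:                       # only possible when there are spare values
        assignment[p] = random.choice(unused)

positions, values = ["p1", "p2", "p3"], [1, 2, 3, 4, 5]
a = initial(positions, values)
swap_move(a, "p1", "p2")
reassign_move(a, "p3", values)
all_diff = len(set(a.values())) == len(positions)   # True: still all different
```

With exactly six values and six positions, as in the monkey example, the `unused` set is always empty and only the swap move ever fires.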

Global_cardinality_low_up, we can think about a way of satisfying that.

So we'll think of building a random assignment, which respects the domains and

the cardinality bounds.

Again, if we swap the value at two positions,

we'll certainly maintain this global cardinality constraint or

you can reassign to a new value as long as it respects the cardinality bounds.

So you can see that for each of these kinds of global constraints,

we can see a way of building an initial solution and

some moves which allow us to maintain that the constraint still holds.
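The reassignment check for global_cardinality_low_up can be sketched like this (names and data are invented for illustration): swapping two positions never changes the value counts, so it is always safe, while a reassignment is safe only if both cardinality bounds survive.

```python
# Sketch of global_cardinality_low_up as an implicit constraint: each value v
# must occur between low[v] and up[v] times. We check that reassigning one
# position keeps the old value above its lower bound and the new value below
# its upper bound.

from collections import Counter

def reassign_ok(assignment, pos, new_val, low, up):
    counts = Counter(assignment.values())
    old_val = assignment[pos]
    return (counts[old_val] - 1 >= low.get(old_val, 0)
            and counts[new_val] + 1 <= up.get(new_val, 0))

low = {1: 1, 2: 0, 3: 0}
up = {1: 2, 2: 2, 3: 1}
a = {"p1": 1, "p2": 2, "p3": 2}
# moving p3 to value 3 keeps every count inside its bounds...
can_move = reassign_ok(a, "p3", 3, low, up)       # True
# ...but moving p1 away would drop value 1 below its lower bound of 1
cannot_move = reassign_ok(a, "p1", 3, low, up)    # False
```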

Let's look at some more complicated ones.

So circuit, remember, is basically a constraint which says that the next place

I go to is a different node.

So you can think of it as an array of nodes, where

each node points to the next one.

And in circuit, then, we need an initial circuit, and the

easiest way to do that is just to point every node at the one after it.

And then when you get to the nth one, you point it back to the first one.

So we'll get an initial circuit and then how do we do moves in circuit?

Well, typically, the way we do it is remove a vertex from the circuit and

insert it back in another point.

So that's a particular kind of move, and it'll still maintain a circuit, but

it'll change the order.
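These two ingredients — the initial chained circuit and the remove-and-reinsert move — can be sketched as follows (illustrative code, with 1-based node names to match the successor-array view):

```python
# Sketch of circuit as an implicit constraint: nxt[i] is the successor of
# node i. The initial circuit chains the nodes in order, and a move removes
# one vertex and splices it back in after another, which keeps a single
# Hamiltonian circuit (assumes v is in the circuit and after != v).

def initial_circuit(n):
    return {i: i % n + 1 for i in range(1, n + 1)}   # 1 -> 2 -> ... -> n -> 1

def remove_and_insert(nxt, v, after):
    """Remove vertex v from the circuit and re-insert it after `after`."""
    pred = next(i for i in nxt if nxt[i] == v)       # predecessor of v
    nxt[pred] = nxt[v]                               # bypass v
    nxt[v] = nxt[after]                              # splice v back in
    nxt[after] = v

def is_circuit(nxt):
    seen, i = set(), 1
    while i not in seen:
        seen.add(i)
        i = nxt[i]
    return len(seen) == len(nxt) and i == 1

nxt = initial_circuit(5)          # 1 -> 2 -> 3 -> 4 -> 5 -> 1
remove_and_insert(nxt, 2, 4)      # now 1 -> 3 -> 4 -> 2 -> 5 -> 1
```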

8:56

So the subcircuit constraint looks the same as circuit,

except that we don't necessarily have to use every vertex.

So we can do the starting just the same as the circuit.

because obviously, a circuit is a subcircuit.

So we can use the same starting point where we just point everything around in

a big circle, but we've got a new kind of move.

Because again, we can remove a vertex from the circuit and add it at another point.

So that's what we could do with circuit already or we could remove a vertex and

point it at itself.

In subcircuit, if you point at yourself,

you're not actually in the circuit.

So basically, that allows a different kind of move where we set next[i]=i.

And the other thing we could do, of course, is take a

currently removed vertex, a vertex which points at itself, and

add it somewhere back into the circuit.

So subcircuit basically is a more complicated version of circuit,

because it adds new kinds of moves.

But everything else is the same.
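The two extra subcircuit moves can be sketched like this (illustrative code; a vertex with next[i] == i is outside the subcircuit):

```python
# Sketch of the extra subcircuit moves: besides the circuit-style
# remove-and-reinsert, we can drop a vertex out of the circuit entirely
# (it then points at itself), or pull a dropped vertex back in.

def drop_vertex(nxt, v):
    pred = next(i for i in nxt if nxt[i] == v and i != v)
    nxt[pred] = nxt[v]      # bypass v in the circuit
    nxt[v] = v              # v points at itself: not in the subcircuit

def add_vertex(nxt, v, after):
    assert nxt[v] == v      # only re-insert vertices currently dropped
    nxt[v] = nxt[after]
    nxt[after] = v

nxt = {1: 2, 2: 3, 3: 4, 4: 5, 5: 1}
drop_vertex(nxt, 3)         # circuit is now 1 -> 2 -> 4 -> 5 -> 1, with 3 -> 3
add_vertex(nxt, 3, 5)       # circuit is now 1 -> 2 -> 4 -> 5 -> 3 -> 1
```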

9:48

Of course, finally, some constraints will be treated as penalty functions.

And ideally, to do this,

we want to unflatten the constraints as much as possible.

Basically, we want to build an expression tree which evaluates the penalty.

So if you think about what happens for this constraint here, which was not

an easy one to think about building a neighborhood for to treat as implicit,

then it actually gets flattened into two reified linear constraints and an array_bool_or.

But really, only the last constraint is soft.

And so it's only the disjunction that we have to keep true.

So we can actually just keep these as left-hand side definitions.

So this one defines the variable I3 and this one the variable I6.

And we can soften the last constraint into this penalty which is basically saying,

well, at least one of them should be true.

So the penalty is (1 - I3) times (1 - I6).

So you can see, if both of them take the value 0, then we'll get a penalty of 1.

Otherwise, we'll get a penalty of zero.

In other words, the constraint is satisfied.

So that's not very good.

Because really,

we're only keeping track of whether this constraint is solved or not.

And remember, we've talked about penalties.

We said, when we're doing a penalty, what we want to do is make sure that

we're sort of describing the amount of violation of the constraint.

So a better way of mapping this to soft constraints is to build penalty functions

for each of the subconstraints and then join them up.

So here, we can build a penalty function for this constraint x + 6 <= y, which is this max(0,

x + 6 - y).

So obviously, if the constraint is satisfied, this inner term will be negative,

and so we'd just get zero. If the constraint is unsatisfied,

then this will take a positive value, and that would be the penalty.

And you can see, the more unsatisfied it is, the worse the penalty is.

We can do the same for this other constraint here, and then we can just take

the minimum of the two penalties, because this is an or constraint.

So we really want to say, okay,

if it's almost satisfied, we take the disjunct that's closest to being satisfied.

That's how much penalty we should pay and notice that's much better,

because the penalty now records the actual amount of violation of the constraint.
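A hedged sketch of this violation-based penalty in Python (the concrete linear constraints are invented for illustration): each disjunct lhs <= rhs gets the penalty max(0, lhs - rhs), and the disjunction takes the minimum, so the penalty measures how far the nearest disjunct is from holding.

```python
# Sketch of violation-based penalties: a linear inequality's penalty is how
# far it is from being satisfied, and an or-constraint's penalty is the
# minimum over its disjuncts (satisfying any one disjunct suffices).

def penalty_le(lhs, rhs):
    return max(0, lhs - rhs)        # violation of lhs <= rhs

def penalty_or(*penalties):
    return min(penalties)           # disjunction: take the closest disjunct

# e.g. the disjunction (x + 6 <= y) or (y + 6 <= x)
def disjunction_penalty(x, y):
    return penalty_or(penalty_le(x + 6, y), penalty_le(y + 6, x))

p_sat = disjunction_penalty(0, 10)   # first disjunct holds: penalty 0
p_near = disjunction_penalty(0, 5)   # nearest disjunct misses by 1: penalty 1
```

Compare this with the 0/1 version: here moving y from 5 towards 6 steadily reduces the penalty, giving the local search a gradient to follow.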

And this is what native constraint-based local search solvers do, but

it's not currently done by the MiniZinc local search solvers.

So it actually weakens them quite substantially.

For global constraints then, we really want these penalties times multipliers.

So for this global violation, we actually want to add the Lagrangian penalty term.

We've seen the Lagrangian approach to penalization is very robust.

Because basically, if we keep finding that a constraint is hard to satisfy,

it gets more and more important, and we just take this global violation.

So basically, these penalties times the multipliers give the total, which we

add to the objective, and that gives us the local search objective.
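The Lagrangian scheme can be sketched like this (all names and numbers are illustrative): each soft constraint's violation is weighted by a multiplier, and multipliers grow while their constraint stays violated, so persistently hard constraints become more and more important.

```python
# Sketch of a Lagrangian-style penalty scheme for local search: the search
# objective is the model objective plus multiplier-weighted violations, and
# the multipliers are periodically increased for still-violated constraints.

def local_search_objective(obj, violations, multipliers):
    return obj + sum(m * v for m, v in zip(multipliers, violations))

def update_multipliers(multipliers, violations, step=1.0):
    # grow each multiplier in proportion to its constraint's violation
    return [m + step * v for m, v in zip(multipliers, violations)]

mult = [1.0, 1.0]
viol = [2.0, 0.0]                 # constraint 0 violated by 2, constraint 1 satisfied
total = local_search_objective(10.0, viol, mult)   # 10 + 1*2 + 1*0 = 12
mult = update_multipliers(mult, viol)              # constraint 0's weight grows to 3
```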

So local search basically splits the variables into three categories.

So there are the left-hand sides of the one-way constraints.

Typically, most of these are the introduced local variables,

introduced to evaluate an expression, and

they will be handled by this one-way constraint solving.

There'll be other variables which appear in an implicit constraint.

So often,

these are the actual original search variables from the original problem.

And we hope that we have an implicit constraint which gives us a neighborhood

and a way of changing the values while still maintaining that constraint, and

there'll be another set.

So basically, variables which appear in constraints which we

don't treat as implicit and we just have to work around them.

Basically, they become just search variables.

Now those are just given random values,

because we have no implicit constraint to give them an initial value.

The implicit vars are given a value from the implicit constraint,

which controls them.

And then we can just calculate the initial values for these expression vars,

those ones defined by the one-way constraint.

And then search is just picking an implicit constraint and doing a move for

that implicit constraint, or picking an independent variable

and just doing a move on that independent variable.

So that's typically just giving it another value in its domain.

We do the appropriate move, then we have to evaluate all these

one-way constraints to find out the new penalty and decide what to do.
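One step of this loop can be sketched as follows (everything here is illustrative pseudostructure, not a real solver API): either an implicit constraint makes a constraint-preserving move, or an independent variable gets a new value from its domain, and then the one-way constraints are re-evaluated.

```python
# High-level sketch of one local-search step over the three variable classes:
# implicit-constraint variables move via constraint-preserving moves,
# independent variables just get another domain value, and the one-way pass
# recomputes the objective (and penalties) afterwards.

import random

def step(state, implicit_moves, independent_vars, domains, evaluate):
    if implicit_moves and random.random() < 0.5:
        random.choice(implicit_moves)(state)        # preserves its constraint
    elif independent_vars:
        v = random.choice(independent_vars)
        state[v] = random.choice(domains[v])        # unrestricted variable
    return evaluate(state)      # one-way evaluation: objective plus penalties

def swap_xy(state):             # a move for an implicit alldifferent on x, y
    state["x"], state["y"] = state["y"], state["x"]

state = {"x": 1, "y": 2, "z": 0}
obj = step(state, [swap_xy], ["z"], {"z": [0, 1]},
           lambda s: s["x"] + s["y"] + s["z"])
```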

14:01

So this generic local search is quite a difficult thing to

make robust across many models.

And if we look inside OSCAR a bit, you'll find that what they do is they do

an initial greedy improvement of a global penalty.

So they're basically looking around and trying to get better and better penalties,

then they ask each neighborhood to find an improving move.

So they basically look at every one of their implicit constraints and

see if they can find an improving move.

If they can, they choose randomly one of those, and

then the changed variables become tabu for some time.

So that means that we don't start to go back and try to do the same thing, and

they periodically update, in the case of OSCAR, a single Lagrange multiplier.

Although obviously, they might be able to do better with multiple

Lagrange multipliers keeping track of the difficulty of solving constraints

individually rather than as a single thing.

But finding a generic local search that's robust across many,

many models is a very tricky thing.

Of course, you can use OSCAR directly, where you control all of this, and you can tune

it for your particular model.

So in the monkey example, what happens?

The alldifferent is the implicit constraint and

all the other constraints are one-way.

In that case, all the constraints are maintained in the monkey example.

Just to summarize local search for MiniZinc.

So local search solvers for MiniZinc need to decide how to treat each constraint,

and they basically have these three different ways.

They need to decide how to search for good solutions.

And these local search solvers can be very effective when the model has some main

globals which can be handled as implicit constraints,

so that the moves will maintain the important constraints of the problem, or

if the globals have very specialized soft versions.

So if the globals have a specialized and quick way of evaluating the penalty,

that often means it works well. And one thing that

local search is very good at is that it scales to larger problems.

If we have problems which are mainly not so

difficult in terms of the constraints, but difficult in the objective,

so a very large search space where it's hard to find the global optimum,

then local search scales better than any of the complete search methods.

So local search solvers for MiniZinc are improving.

A lot of work remains to be done, but

I would expect in the next ten years that this would change drastically, so

that local search solvers will be much stronger than they currently are.