0:00

Okay. So welcome back.

MIP part two, and what we are going to do today is look at modeling techniques

for MIP. Okay.

MIP models and so in a sense, I want to go back to what is a good MIP model.

We already touched upon it. Okay but, I want to go into more details,

and then I'm going to tell you about some basic modeling techniques that I use in

MIP. And the reason I want to talk about that is that there are typically things we want to model, but they are not linear

constraints. So, how we are going to model them as

linear constraints is interesting, okay? So, so, I told you before that, you know,

essentially we are using branch and bound here for solving a MIP.

And a lot of the techniques for solving MIPs are going to be based on, you know

refinements of branch and bound. And we'll talk about that in the next

couple of lectures. But essentially, the key idea is branch

and bound, and we need to prune sub-optimal solutions.

And a necessary condition for an effective branch and bound is that the relaxation has to be precise. It has to be a really good approximation of the value of the MIP model at optimality.

Okay, so we want to have that. Okay?

So, a good MIP model is going to be a model with a good linear relaxation. And therefore, when we are building this model, in a sense we have to take into account that, behind the scenes, we are going to solve the linear relaxation. And we want this linear relaxation to be

Okay? So, in warehouse location, we had this

model, right? So and I told you, you know, this was one

of the models that we could have written, okay?

And there may be many, okay? So once again, you know, we have to decide which warehouses to open, and then which warehouse will serve which customer.

Okay, and so assume that we stick to these decision variables. We like them, but now we want to see, hmm, is there another way to express these constraints? Okay, so we are going to focus on the first one. So we have a warehouse, okay, and we know that that warehouse can only serve a customer when it is open; otherwise it cannot, okay?

And remember, what we did was basically having these very simple inequalities, but many of them, right? Stating that y_wc, which is basically saying warehouse w serves customer c, can be one only if warehouse w is open. Okay, so we have these linear inequalities making sure that if a warehouse is closed, no customer can be assigned to it. And if it's open, then everybody can be assigned to it, okay? And so, essentially, that's what we did.

And, of course, we have to have many of these constraints, right?

As many as there are pairs of warehouse customers, okay?

So that's what we used last time. Can we do, can we find a formulation,

let's say, which is more concise? Okay, can we do that?

Okay? So we are going to look at this and try to replace all these constraints with fewer of them. Okay? And what we want to do is, for a particular warehouse, okay, can we actually deal with all these constraints in one step?

Okay? And so this is what we want to do.

Okay? And this is how the constraint is going to be expressed. Okay?

So, what we are going to do is basically sum over all the customers which can be assigned to a warehouse. Okay. So, we take the warehouse, we look at all the customers that can be assigned, and we sum these variables y_wc. Okay.

And then we want to make sure that these guys become 0. They become 0 if the warehouse is not open. Okay, and that's easy to do: you just put x_w, okay, on the right-hand side of this inequality, and if it's 0 then obviously the sum will be 0, and then all the elements of the sum are going to be 0, okay?

So that's fine, we can do that. But then we multiply this x_w by the number of customers, to make sure that all these guys can be assigned. In the worst case, every one of them is going to be assigned to that particular warehouse. So if we multiply by the number of customers, we make sure that this constraint is doing exactly what we want.

Okay? So if the warehouse is open then

essentially all the customers can be assigned to it.

If the warehouse is closed then all of these YWC variables are going to be 0.

So this constraint is kind of magical, right? Okay, it can in one step, you know, take care of all the small linear inequalities that we had, okay. And this becomes my model: now, essentially, I have one of these constraints here, okay, for every one of the warehouses. We don't have one per warehouse-customer pair; that is essentially expressed by the sum over there. Okay, so I have many fewer constraints, in a sense, but they are bigger, okay.
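As a small sketch (the data and helper names here are mine, not the lecture's), the two ways of linking the warehouse-open variable x_w to the assignment variables y_wc can be checked on a fractional point:

```python
# Sketch (illustrative, not the lecture's data) of the two linkings.
# Disaggregated: y[c] <= x_w for every customer c (many small constraints).
# Aggregated:    sum_c y[c] <= |C| * x_w       (one constraint per warehouse).

def disaggregated_ok(x_w, y_w, eps=1e-9):
    """Check y[c] <= x_w for every customer c of one warehouse."""
    return all(y <= x_w + eps for y in y_w)

def aggregated_ok(x_w, y_w, eps=1e-9):
    """Check sum_c y[c] <= |C| * x_w for one warehouse."""
    return sum(y_w) <= len(y_w) * x_w + eps

# A fractional point a linear relaxation might produce: the warehouse is
# "half open" and one customer is fully assigned to it.
x_w, y_w = 0.5, [1.0, 0.0, 0.0, 0.0]

print(aggregated_ok(x_w, y_w))     # True:  1.0 <= 4 * 0.5
print(disaggregated_ok(x_w, y_w))  # False: 1.0 >  0.5
```

The fractional point is feasible for the aggregated relaxation but not for the disaggregated one, which is the sense in which the first model's relaxation is stronger: any point satisfying the per-customer inequalities also satisfies their sum, but not the other way around.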

Okay. So now ha!

I have two models, okay, and I have to decide which one is the best one, okay.

How do I do that? How do I decide that this model is better

than the other one, okay? Now we have one indication here, we are

saying oh yeah, but you know I have fewer constraints, so maybe this is good, okay.

But I told you last time, okay, a good model is a model which has a good linear

relaxation. Okay?

So what about the quality of the linear relaxation of this model compared to the

quality of the linear relaxation of the model that I've shown you last time?

Okay? So you kind of need to relate these two,

and so if one is a stronger relaxation, that's probably the one that you want to

use, unless it is a really huge number of constraints, right?

So, what I'm going to show you is that the model that I showed you in the first lecture is going to have a stronger relaxation than the model that I've just shown you. Why?

Because look at these guys okay. So if I have a solution okay?

If I have a solution to all these constraints okay.

So these guys are going to be 0 or 1. Okay, I can sum all these guys, right? I can sum all these linear inequalities for a particular warehouse w. Okay, for a particular w, I can take all these guys and sum them. And if I sum them, what do I get? I get exactly that constraint.

5:33

So, in a sense, whatever the solution to the linear relaxation of these constraints, when I sum them, they are going to satisfy this constraint. Which basically means that any solution to this is going to be a solution to that.

Okay. So, I know that this one cannot be

stronger than that one. It cannot be a stronger linear relaxation

than that one. And actually, it can be a weaker

relaxation than this one. Okay.

So, can you think of a case where it's going to be a weaker relaxation?

Okay? So what is the basic difference between

these two? Okay?

So what we know here is that every one of the y_wc is always going to be smaller than x_w, okay? And in the linear relaxation, and that's the big step here, you have to think about that: the linear relaxation is not always going to associate an integer value to x_w or y_wc, okay? If it did, obviously these variables would be 0 or 1.

Okay? But in the linear relaxation that value

might be 0.5, right? So what could happen in this particular

case? So when you look at these constraints.

You know, this guy is always going to be making sure that ywc is smaller than xw.

Okay. Is it the case here?

Think about it. Is it the case that I would have this

property that ywc is always going to be smaller than xw?

Okay. If you answer that question, you will

know if this is at least as strong as that.

We already know that that one is at least as strong as this one, okay.

And in practice you know, it's, it's easy to find a case where this is actually

weaker than that, okay? So, the initial model has a better linear

relaxation. It's going to be very likely a better MIP

model than the other one. So let me show you a very interesting

example of these two linear relaxation. So this our bench mark additional/ g

bench mark for warehouse location they are small.

But it is very interesting to look at this two linear relaxation, four- four is

going to be our bench mark. So, remember we are minimizing the cost,

right. So we want these lower boundaries, this

linear relaxation to be as high as possible.

What you see in this column, okay, is the first MIP model that I have shown you. Okay.

And its linear relaxation. And you see the numbers over there,

right? And this is the second model.

Okay, so when we aggregate essentially the constraint.

Okay, so what you can see here is that the value of the linear relaxation here

is much weaker. Okay, so and it's substantial, right?

So we have gaps which are between 9.5 and 20% difference between these two things,

okay. So this is the difference between these

two models and their linear relaxations. You compare the case where you basically have all these small linear inequalities, many of them, with the case where we have only one per warehouse. You see that the difference in the relaxation is substantial, okay?

So 20% difference is huge, right? So you are 20% closer to the optimal

value. You're going to prune much more, okay?

And so this is interesting, okay: when you have a MIP model, you can look at its relaxation. How good is this linear relaxation? It's going to tell you a lot about the pruning capability of your linear relaxation and your branch and bound algorithm.

Okay? So, once again, you know, typical things you can do: look at the relaxation, how good is it? Can you prove some properties about it? Can you look in practice at how it does? So, essentially, what you want is, for

pruning effectively you need a very strong linear relaxation and a good

model. A good model is going to be a MIP model

with a very good linear relaxation. Okay.

So, let's look at another example now. We are going to look at the map coloring

example that we have seen, you know, earlier in this class.

Okay. And the reason we are going to use this

is for showing some of the modeling techniques of MIP and various modelings

and how you know, they impact the linear relaxation, okay?

So this is the coloring problem as we stated it in constraint programming, okay? We are minimizing, minimizing the maximum color.

Okay? So, min max.

We want to minimize the number of colors. And then we have all these constraints,

which we're basically saying that two adjacent countries have to be given

different colors, okay? And when you look at this guy, okay, this not-equal constraint, well, that's not a linear constraint. Okay. So you can't express it directly in a MIP.

Okay. So you have to reformulate it.

And the question is how are we going to do that.

How can we actually represent this, as one or more linear constraints.

9:51

Okay? So, essentially, x is different from y

when? When, you know, x is smaller than y, or,

you know, x is greater than y, okay? And these are two linear

constraints. Okay, so you can express that as saying,

x is smaller or equal to y minus 1, or, x is greater or equal to y plus 1.

You know, both of these disjuncts, okay, they are linear constraints.

Of course we have a disjunction here, and that's annoying, right? Because a MIP model doesn't support disjunctions, right? So we have two linear constraints that are in a disjunction. Can we actually, you know, use these two linear constraints but remove the disjunction? Okay?

So I'm going to show you a transformation which is called the big-M transformation, which is used all over the place in MIP models, in general when you have things like disjunctions.

Okay? And so the key idea is to do two things.

You're going to introduce a zero one variable, which is called b, and we are

going to introduce a very large number. Okay?

A huge number. You know, take 40 billion,

okay? And so now we can realize, essentially, the disjunction as two linear inequalities, okay? And the first one is going to be what? It's going to be: x is smaller than or equal to y minus 1 plus b times this big M, this big number. Okay, b is a 0/1 variable; it takes the value 0 or 1. And the second constraint is: x is greater than or equal to y plus 1 minus (1 minus b) times the big M, okay. So essentially what I'm showing you here

is a technique to transform a disjunction of two linear constraints into a

conjunction of two linear constraints. And the trick is to introduce one more

binary variable. And then use this big M value, okay?

Now I'm going to give you the intuition of what's happening here, right?

So, essentially, when you look at these two constraints: when b is fixed to either value, one of them is going to implement one part of the disjunction, okay? So if you look at the case where b is equal to 1, what's going to happen? Look at that constraint here.

Okay? You know, b times M is going to be a huge

number. Okay?

So that basically means that the constraint here is that x is going to be

smaller than something related to y, which we really don't care about.

Because we have B times this big number. So, the first constraint is going to be

always satisfied, okay. So, the really important thing is the

second constraint, and now we have that b is equal to 1, right? So, 1 minus b is 0, and (1 minus b) times M is also 0.

So we have the constraint that x has to be greater or equal to y plus one.

Okay. So, in a sense, when b is equal to one

the right part of the disjunction has to be true.

We want to be in the case where x is greater than y.

Okay, and of course in that case it's different from y, okay?

And then the case where b is equal to 0 is just the opposite, right?

So when b is equal to 0, 1 minus b is 1 times this big number.

That's a big number, okay. We are subtracting that big, big number

from some value that y has, okay? So, that makes that, that right hand side

tiny, tiny, tiny. And of course x is going to be greater

than that always, okay? So what we are enforcing in that

particular case is the first constraint. B is equal to zero, so we have basically

x is smaller or equal to y minus one, so which basically means x is smaller than

y, okay? So in a sense this Big-M transformation

using this value, this Boolean value b and Big-M is basically going to make sure

that we can transform this disjunction into a conjunction. And essentially, x different from y can be represented as: x is smaller than y, or x is greater than y.
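A small sketch of this big-M encoding (the values and function name are illustrative, not from the lecture):

```python
# Sketch: the disjunction  x <= y - 1  OR  x >= y + 1  (i.e. x != y over
# integers) becomes a conjunction with a 0/1 variable b and a big constant M:
#     x <= y - 1 + b * M
#     x >= y + 1 - (1 - b) * M

M = 10**6  # "big M": larger than any |x - y| we expect

def big_m_ok(x, y, b):
    """Check both big-M inequalities for a given b in {0, 1}."""
    c1 = x <= y - 1 + b * M
    c2 = x >= y + 1 - (1 - b) * M
    return c1 and c2

# b = 0 enforces the left disjunct (x < y); b = 1 the right one (x > y).
print(big_m_ok(2, 5, b=0))   # True:  x <= y - 1 is the active constraint
print(big_m_ok(5, 2, b=1))   # True:  x >= y + 1 is the active constraint
print(big_m_ok(3, 3, b=0))   # False: x = y is excluded
print(big_m_ok(3, 3, b=1))   # False
```

For each integer value of b, one inequality becomes the real constraint and the other becomes trivially true, which is exactly the intuition described above.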

Okay? So now you have to think about what the

linear relaxation is going to do. And I told you, this linear relaxation is

a beast. It's evil.

Okay? It's always going to do what you don't want it to do. Okay? So if you have this linear relaxation, you have to put yourself in the mind of this linear relaxation.

This nasty beast, right? So if you want the objective of the linear relaxation to be as low as possible, okay, while satisfying the constraints, what are you going to do? I want to satisfy this constraint and that constraint, while putting as few restrictions on x and y as possible, so that I can have the best relaxation and keep these guys free. Okay, so the constraints on these guys are not really used.

What you are going to do is choose b equal to 0.5, okay?

So if you choose b is equal to 0.5, okay? Half of a big number is still a big

number. Half of that big number is a big number,

so essentially, these two constraints are always going to be satisfied, right?

Always! And so what happens is that it's like you

remove these constraints from the, you know, the linear relaxation.

So your linear relaxation is going to assume that it can minimize, you know,

more and more and more and more, okay? So you will have a big gap between the

MIP model and the linear relaxation. Okay?

So in a sense, this relaxation here, you know, is going to be really clever and try to use the values that minimize the value of the linear program. Okay? And therefore, you know, in this particular case it's basically going to ignore these two constraints, okay, which is the worst that can happen, right? So we have this beautiful constraint from which we wanted to get a very strong relaxation, and actually the relaxation is going to go down, down, down, down.
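This weakness can be checked concretely. A sketch (same illustrative big-M encoding of x different from y as above; values are mine):

```python
# Sketch: with a fractional b, both big-M inequalities become vacuous.
#     x <= y - 1 + b * M    and    x >= y + 1 - (1 - b) * M

M = 10**6

def both_hold(x, y, b):
    return (x <= y - 1 + b * M) and (x >= y + 1 - (1 - b) * M)

# At b = 0.5, half of a big number is still a big number, so the two
# constraints are satisfied for ANY x and y, even x == y:
print(all(both_hold(x, y, 0.5) for x in range(4) for y in range(4)))  # True
# Whereas with an integer b, x == y is correctly rejected:
print(both_hold(3, 3, 0))  # False
```

So by setting b to 0.5, the linear relaxation effectively deletes the disequality constraint, which is why the big-M model's relaxation is so weak.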

Okay? So, when you look at this particular model with the big M, this is the model that we have just described: we minimize the objective function.

Okay. That's the number of colors.

We make sure that the objective is greater than all the colors that we are

using. Okay.

So, this is the color of every variable. Okay.

So we make sure that the objective function is at least as large as the

largest color we assign. Okay.

And then we re-express all of the constraints stating that two adjacent countries have to have different values, by using this big-M transformation that I have shown you, okay?

For every two countries that are adjacent, okay?

And so this is essentially a model that we can use, using this Big-M

transformation, okay? Now, as I alluded to in the previous slide, okay, this relaxation is probably going to be poor. What does it do when we try to solve it?

Okay, this is explaining what I just told you, right?

So the objective is greater than the largest color, and, you know, no two adjacent countries receive the same color, okay?

So this is the value of the linear relaxation, okay, at the root node, okay?

And so it's a very, very powerful relaxation, right?

So you see that the objective is 0, okay? And you know, you see that, you know, the

value of b is 0.25 on some of the, on some of the, on many of the constraints.

Okay? So what this is basically telling you is that we need at least one color. Yeah, great.

Okay? So that's a very powerful relaxation.

Okay. I'm trying to color this map and what

this linear relaxation is telling you yes, yes, yes, you need at least one

color. Thank you.

Thank you. Okay.

So what basically this tells you is that this linear relaxation is terrible,

right? This big-M notation is in part responsible for that. So, you can use this transformation.

But that doesn't mean that you're going to get a good model okay.

So the MIP, when you try to solve it, is going to use 65 nodes, okay, and find the optimal solution at five nodes, but that really depends on how you branch, right?

So can we find another model, okay? So we had this model, but its linear relaxation is so bad that it tells you, oh, you need at least one color. Okay, can we find another way of modeling this in MIP which is actually better?

Okay. And so you have to remember what I told

you last time. Okay.

So MIP models like 0/1 variables. They are in love with 0/1 variables.

Okay. So can we find a model with 0/1

variables? So what we're going to do is take all the

variables in this model. All the colors.

Okay? And we're going to binarize that, okay?

What does that mean? That means that instead of assigning a

color to a country, we're going to use a set of binary variables.

Which are basically going to decide whether that color is assigned to that

country, and we're going to do that for all the colors.

Okay, is this country red? Is this country blue?

Is this country green? Is this country white?

Okay, so we basically, in this particular case, we know that it's a map, so we need at most four colors. So we have four binary variables: the first one is going to say, oh, is x equal to 0? Is x equal to 1, is x equal to 2, is x equal to 3? Okay, and so basically, when I use the variable B(x = i), that will mean that B(x = i) is 1 whenever x is equal to i. Okay, so basically, the value of these variables is expressing, oh, what is the value that I'm assigning to that variable, okay?

And so now essentially if you use these binary variables.

So there is one more thing that you need to do.

You need to make sure that the particular variable is actually being assigned a particular value. So you have to basically say that the sum of these binary variables is equal to 1. If you have that, what are you saying? You are saying, okay, you are making four decisions: is x equal to 0 or not? Is x equal to 1 or not? Is x equal to 2 or not, is x equal to 3 or not? And you also want to say that you have to make exactly one of these decisions. One of these binary variables has to be true, which basically means I have to assign a color, a value, to that particular variable.

So, summarizing: you take an integer variable, a variable that has to take an integer value in the range of that variable. And what you do is you replace that integer variable by plenty of binary variables, looking at every one of these values, okay, and having a decision variable which tells you, ooh, is that variable taking that value. Okay?

That's what we are doing here. Okay.

So in the coloring example it is, you know: is this country colored red? Is this country colored blue? And so on, and so forth.

Okay. And now when you have a constraint like x is different from y, you know, this country has to be colored differently: country x has to be colored differently from country y. Okay?

So, we can replace that by a set of beautiful linear inequalities.

Okay? So, what are we saying here?

Ooh, I don't want x to be 0 and y to be 0 at the same time.

Okay? That's a very simple constraint here.

Okay? So instead of this big-M notation, what you're saying is, ooh, I don't want x to be 0 and y to be 0 at the same time, okay: their sum has to be smaller than or equal to 1. I don't want this sum to be 2. And then you do the same thing for value

1, okay, I don't want x and y to be 1 at the same time, otherwise they are equal

and I want them to be different. Okay, same thing for 2, same thing for 3.

Okay, so now, essentially, as soon as I have the 0/1 variables it's really easy to express the constraints, and that's often the case in MIP models: as soon as you have 0/1 variables, things become much simpler, okay.
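A sketch of this binarized, one-hot encoding (hypothetical names and data, not the lecture's code):

```python
# Sketch: b[v] = 1 means the country takes color v. Two constraint families:
#   sum_v b[v] == 1                  (each country gets exactly one color)
#   bx[v] + by[v] <= 1  for all v    (adjacent countries x, y differ)

def one_hot_ok(b, eps=1e-9):
    """Exactly-one constraint on one country's binary color variables."""
    return abs(sum(b) - 1.0) <= eps

def differ_ok(bx, by, eps=1e-9):
    """Adjacent countries cannot share any color v."""
    return all(bx[v] + by[v] <= 1 + eps for v in range(len(bx)))

red  = [1, 0, 0, 0]   # color 0
blue = [0, 1, 0, 0]   # color 1

print(one_hot_ok(red), one_hot_ok(blue))  # True True
print(differ_ok(red, blue))               # True: different colors
print(differ_ok(red, red))                # False: same color on both sides
```

Each disequality becomes one tiny inequality per color value, with no big M anywhere, which is what gives this model its stronger relaxation.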

And then I get this beautiful model now. So it's long so I have to disappear from

the screen right. But I'm still here, I'm still here.

Okay, so what this is doing is basically looking at the objective function first.

Okay so this is the tricky part okay. So you have to make sure now.

That the objective is greater than the largest color.

But we don't assign color to the, to the countries.

So what we have, the only decision variables that we're going to have is for

every, every country, and every color, is that country color with that color?

Okay? And that's a 0, 1 variable.

So, to find out the minimum value for the objective, we are going to look at all these decision variables, right? But we are also going to multiply them by the value that corresponds to that particular, you know, binary variable. So, for instance, if

this is 3. We are basically saying the objective

function has to be greater than 3 times whether, you know, that particular

country c is assigned, you know, value 3, okay?

And we do that for every country and every color.

Okay? So we are basically making sure that the objective function is greater than the value that we assigned.

But we have to use this expression because we don't have the actual value of

the variables. We just have, you know, this kind of

binary decision yes or no, if that particular country is assigned to that

particular value. Okay?
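This objective bound can be sketched directly (the function name and data are mine, illustrative only):

```python
# Sketch: with one-hot variables b[v] for one country, the lecture's
# constraint family is  obj >= v * b[v]  for every color v, so whenever
# b[v] = 1 the objective is at least v.

def objective_lower_bound(b):
    """Smallest obj satisfying obj >= v * b[v] for all colors v."""
    return max(v * bv for v, bv in enumerate(b))

print(objective_lower_bound([0, 0, 0, 1]))  # 3: the country uses color 3
print(objective_lower_bound([0, 1, 0, 0]))  # 1: the country uses color 1
```

On a fractional relaxation point like b = [0.5, 0.5, 0, 0], the same bound gives only 0.5, which is the kind of value the relaxation reports here.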

Then we have these other constraints that we have seen previously, that basically

says that every country is to be colored with exactly one color.

Okay. So which basically means that essentially one of these binary variables for a particular country is going to be 1, okay? And then we have all these linear

inequalities, which are very simple, okay?

So we take, you know, every two countries that are adjacent, okay, and we take all the possible values. And then we say that, you know, the sum of the two binary variables corresponding to the assignment of that color to the two countries has to be smaller than or equal to 1.

Okay? They cannot be colored with the same value,

okay? And that's the MIP model as well, okay?

Very different from the other one. We have very different decision

variables, very different constraints. All of them are different actually.

Okay. So how does this program behave?

Okay. Beautiful.

This is what I told you before. Okay.

I let you see it because we did the slides.

So you have to appreciate it. Okay.

So so this is the linear relaxation. Okay.

So the linear relaxation is going to tell you 0.2777, okay, and seven, seven, seven, seven forever.

Right, so which basically means that this is really great, right?

So now, this linear relaxation instead of telling you that you need one color, is

going to tell you you need at least two colors, okay?

Wow, okay? And the you see the linear relaxation

vari, the, the linear relax- the, the values of the various decision variables

inside, inside the the linear relaxation. Okay so for one of the particular

countries, these are the various values. Of course they are fractional, right; we haven't assigned anything yet. This is what the linear relaxation does: it's always going to try to find values that minimize the objective.

Okay? And then you get these fractional values

all over the place, okay? But, in a sense, this model, right, is much better than the previous one.

Okay? And we have now these values and we could

branch and we're going to branch on these values and so on.

Okay, and we are going to use, in this particular case, only 12 nodes for finding the optimal solution and 22 nodes for, you know, proving optimality. Okay?

So, better relaxation in this particular case, and we have a smaller search tree to explore. So, what is interesting here, obviously, is that the linear relaxation is improved, okay? Which is what we wanted.

So, the 0/1 model is much better than the big-M model, and, typically, when you can avoid this big M, it's a good thing to do. Okay?

So, now, can I actually do better than this?

Okay, so, so, look at this thing, okay? So, look at these two things that, that

we have discussed before. So, essentially what we are saying in

these particular constraints is that the objective has to be greater than v times, you know, the color variable for value v of a particular country, okay? Which basically means that if the color of country c is v, then the objective has to be greater than v, because this is a 0/1 variable. Okay, and what the second constraint is telling you is that every country is assigned a single value.

Right? So, so can we actually do better than

this? Okay, so we know, for instance, that in a feasible solution, of all these colors, exactly one will be 1 and all the other ones are going to be 0. For a particular country, you look at all the colors: all these binary variables are going to be 0 except one. Okay.

So can we restate these particular constraints to make them stronger, okay, for the linear relaxation? Right? It doesn't change optimality, but for the linear relaxation, can it be better?

Okay? And so we can take these two constraints

and try to combine them to actually get a stronger constraint for, you know, bounding the objective value. And so what we can do is take all

the colors here. Right?

All the colors. Okay?

And multiply them by the value v; that's essentially the right-hand side of the constraints that we had before. But now we sum them up.

Okay? Why can we do this? Because, as I told you, only one of them is going to be one, and therefore this sum is going to be exactly that value, you know, when you have a feasible solution. And therefore, you know, it will really correspond to these constraints. So essentially, like the aggregation in the warehouse location, we are trying to aggregate; but of course here we are using this sum-to-one constraint to aggregate.

OK? And so now we have this constraint, and we believe that this objective bound, of course, is going to move up. Right? Okay?
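The two objective bounds can be compared on a fractional point. A sketch (names and data are mine, illustrative only):

```python
# Sketch for one country with one-hot color variables b[v], sum_v b[v] = 1.
# Disaggregated bound: obj >= v * b[v] for each v separately.
# Aggregated bound:    obj >= sum_v v * b[v].

def disaggregated_bound(b):
    return max(v * bv for v, bv in enumerate(b))

def aggregated_bound(b):
    return sum(v * bv for v, bv in enumerate(b))

# On a 0/1 point the two bounds coincide:
print(disaggregated_bound([0, 0, 1, 0]), aggregated_bound([0, 0, 1, 0]))  # 2 2
# On a fractional point the aggregated bound is stronger:
b = [0.25, 0.25, 0.25, 0.25]
print(disaggregated_bound(b))  # 0.75
print(aggregated_bound(b))     # 1.5
```

Since all terms are nonnegative, the sum is always at least the maximum, so the aggregated constraint never gives a weaker bound; on fractional points it can give a strictly higher one, which is exactly the "moving up" hoped for here.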

Now, the hope here is what? So, when we look at this relaxation here, okay, so when I show you the relaxation of this thing, okay, what you see here is that, ooh, you know, we had the objective greater or equal to 0.5 because that was the largest color, okay?

But if we sum all these guys for a particular country, right, they sum to one, obviously. So we could say, oh, but then, you know, if I multiply this one by 1, this one by 2, and this one by 3, okay, I get a value which is greater than 0.5, and that's why I want to aggregate this. Okay, so when I put these new constraints

here okay. So I believe that my linear relaxation is

going to be better, okay. But, you know, this linear relaxation is very unpredictable; you never know what it is going to do, okay. And so look at what it does, okay: so this is when we add this constraint instead of the other one. Now the linear relaxation is 0.5.

Yes, it's better, okay not much better but better, okay?

But look it completely changed the value of the assignment, right?

So now the first two values are at 0.5, right?

And I remove, the linear relaxation, not me, right, so the linear relaxation has

removed the value for these two other, for these two other colors, okay?

So this is interesting, right? So instead of a fractional value for these other ones, now the linear relaxation found that, you know, to get the minimum value, it has to assign these guys to 0 and those guys to 0.5, every one of them. Why? Because these guys are multiplied by two and by three, so you want to keep them as small as possible. Okay?

That's what the linear relaxation does to you.

It's always trying to find a way to get down, down, down. Okay, we are minimizing, right? So essentially, what I've shown you here

is that in this particular case we can improve the relaxation, but it also changes the values in the solution as well. Okay?

Now we also need at least two colors. What is interesting here is when you compare these two; these are the two solutions that I showed you.

Right? When you try to solve this, this is what

you get. Okay?

So, you have nine nodes, okay? So you get to the optimal solution faster with the first relaxation that I showed you for the binarized version of the model. But we need 41 nodes in total, so we need more nodes here, okay? So, once again, having a stronger relaxation doesn't necessarily mean that your search is going to be faster, because you can branch differently. We have a stronger relaxation in this particular case, but we actually used more nodes. Okay, so, what I've shown you today, you

know, to summarize, is that when you look at the models, okay, the different models, what you have to do is to try to find a model which has a good relaxation. I've shown you different techniques, okay, on how to actually represent constraints that are not, you know, directly linear, okay. So you can binarize the variables, you can use the big-M notation, and I've shown you the influence of these things. I've also shown you that how you express the linear constraints has an impact on the linear relaxation, and therefore on the efficiency of your algorithm, okay.

So next we're going to look at better ways of actually strengthening these

relaxations. Okay?

And it's going to be fascinating. Okay.

Thank you.