0:00

The next technique that we are going to consider is called local search.

This technique is also widely used in practice.

It is used both for search problems and for optimization problems.

Its main idea is roughly the following.

You start with some initial solution, then you iteratively do the following.

If your current solution is not good enough,

you try to change it a little bit somewhere to get a new solution which is,

in some sense, better, or which fixes something.

So pictorially it looks as follows, you go from one solution to a solution which

is close to it, which results from the current solution by some small change.

Then to some other solution, then to some other,

until you get a solution which is good enough for you.

So this is a very very vague explanation of this technique.

We will now give a more concrete explanation of the local search algorithm.

So this is how a local search algorithm might look for

the 3-satisfiability problem.

Consider a formula in 3-CNF and denote its variables by x1, x2, and so on up to xn.

So it has n Boolean variables.

So a candidate solution for such a formula is just a truth assignment for

all these variables.

So in other words, this is just a vector in {0, 1} to the n, right?

Since we are going to apply local search, that is, since we need to change each

candidate solution a little bit to go to another solution in its neighborhood,

we need to define a notion of distance between two candidate solutions.

So let alpha and beta be two such candidate solutions, that is, vectors in {0, 1} to the n.

We define the distance between them as the number of bits where they differ.

More formally, the distance between alpha and

beta is the number of indices i where alpha_i is not equal to beta_i.
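As a quick illustration (the lecture gives no code, so this small Python sketch and the function name are our own), the distance can be computed by comparing the two vectors position by position:

```python
def hamming_distance(alpha, beta):
    """Number of positions in which the two assignments differ."""
    return sum(1 for a, b in zip(alpha, beta) if a != b)
```

For example, the assignments (1, 0, 1, 1) and (0, 0, 1, 0) differ in the first and last bits, so their distance is 2.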

2:28

Given this notion of distance, the Hamming ball with center alpha and radius r is the set of all assignments at distance at most r from alpha.

Consider this Boolean assignment. The Hamming ball with center at this assignment and radius 0 contains just exactly the same Boolean assignment.

The Hamming ball with radius one and

center at the same Boolean assignment contains five different assignments.

So in this case, by the way, n is equal to 4.

So first of all, it contains the center of this ball.

It also contains four Boolean assignments that result from

our initial assignment by changing just one bit.

In the first one, we change the first bit here.

We change the second bit here.

We change the third bit here.

We change the fourth bit.

Finally, the Hamming ball with the same center and

with radius 2 contains the following assignments.

So this is the assignment at distance zero.

These are the four assignments at distance one.

3:32

And these are the six assignments at distance exactly two.
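A small Python sketch (our own illustration; `hamming_ball` is a hypothetical helper, not from the lecture) that enumerates such a ball and reproduces these counts for n = 4:

```python
from itertools import combinations

def hamming_ball(center, radius):
    """All assignments at Hamming distance at most `radius` from `center`."""
    n = len(center)
    ball = []
    for d in range(radius + 1):
        # choose which d positions to flip
        for positions in combinations(range(n), d):
            point = list(center)
            for i in positions:
                point[i] ^= 1  # flip bit i
            ball.append(tuple(point))
    return ball
```

For a center of length 4, radius 0 gives 1 assignment, radius 1 gives 1 + 4 = 5, and radius 2 gives 1 + 4 + 6 = 11.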

This is a crucial lemma for

our local search algorithm, which checks satisfiability of a given formula.

Assume that we are given a formula F.

And together with this formula F we are given a Hamming ball

with center alpha and radius r, for

which we know that it contains a satisfying assignment of the formula F.

Then we can find a satisfying assignment in this

ball in time 3 to the r times the length of the formula.

The satisfying assignment that we are going to find might coincide with beta or

might be a different one.

But in any case,

if we know that this Hamming ball contains a satisfying assignment,

then we will find some satisfying assignment in this running time.

This is how to do this.

We first check whether the assignment alpha,

namely the center of our current ball, satisfies the formula F.

5:09

If it does not, we take some clause of F that alpha does not satisfy; denote its variables by x i, x j, and x k, and do the following.

Let's consider three different assignments: alpha i, alpha j,

and alpha k.

Alpha i differs from alpha only in the i-th bit.

Alpha j differs from alpha only in the j-th bit.

Finally, alpha k results from alpha by flipping the k-th bit.

What we know about these three assignments is that at least one of

them is closer to beta, right?

So we don't know which one, but we know for

sure that at least one of them is closer to beta than alpha. Why is that?

Because beta satisfies the currently considered clause.

So in beta, either x i is equal to 1, or x j is equal to 0, or x k is equal to 1.

So in at least one of alpha i, alpha j, or alpha k, we flip the right bit:

one that gets the same value as in beta.

So at least one of them is closer to beta.

Possibly two of them, or even all three, but it doesn't matter at this point.

What is important is that at least one of them is closer.

So we start with alpha and then we consider three different assignments.

Alpha i, alpha j, alpha k.

For each of them we make the same step.

For each of them we also consider three different assignments.

And we always know that, at least in one of the branches we get closer to beta.

Since we know that the distance between alpha and

beta is at most r, just because beta lies in the Hamming ball

with center alpha, we conclude that the corresponding algorithm

makes at most 3 to the r recursive calls.

7:04

We know that somewhere in this ball, there is a satisfying assignment beta.

We don't know which one.

Of course, we are looking for any satisfying assignment, but

assume for the analysis that there is a satisfying assignment beta in this ball.

Then at each step we do the following.

We select an unsatisfied clause, and from alpha we try

all three assignments that differ from alpha in just one bit of this clause.

We don't know in which direction we need to go but

we know that in at least one of these directions we get closer to beta.

Then for all these three assignments we repeat this procedure and in turn for

all of them we again repeat this procedure.

So eventually, after at most r steps, we will find the assignment beta, or

we will stop before this if we find another satisfying assignment.

The crucial observation is that to eventually find beta,

we need at most three to the r recursive calls.

This is just because the distance between alpha and beta is at most r, and

at each iteration in at least one of these three branches we get closer to beta.

This is the pseudocode of the resulting algorithm.

It is given a formula F, an assignment alpha, and an integer r.

It then tries to find a satisfying assignment in

the ball with center alpha and radius r.

We first check whether alpha satisfies the formula F.

If it does, we return alpha immediately.

Then we check whether r is equal to zero.

If it is equal to zero, the ball does not contain a satisfying assignment,

so we return "not found".

Recall that if r is equal to zero, then the corresponding ball contains

only the assignment alpha itself, which we have already checked.

Then we take any unsatisfied clause and we denote by xi, xj, and xk,

the variables from this clause.

Then we construct three different assignments:

alpha i, alpha j, and alpha k.

9:19

More precisely, we first make a recursive call

with parameters F, alpha i, and r minus 1.

If it finds a satisfying assignment we immediately return.

If not, we make a second recursive call for F, alpha j and r minus one.

If it finds a satisfying assignment, we return immediately.

If not, we make the third recursive call with F, alpha k, and r minus 1.

So if none of them finds a satisfying assignment, we return "not found".
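The procedure just described can be sketched in Python as follows. This is our own illustration, not the lecture's code: clauses are represented as lists of signed variable indices (+i for x_i, -i for its negation), and an assignment is a dictionary from variable index to 0 or 1.

```python
def satisfies(formula, alpha):
    """True if assignment alpha satisfies every clause of the formula.
    A literal is a signed variable index: +i means x_i, -i means NOT x_i."""
    return all(
        any(alpha[abs(lit)] == (1 if lit > 0 else 0) for lit in clause)
        for clause in formula
    )

def check_ball(formula, alpha, r):
    """Look for a satisfying assignment in the Hamming ball with center
    alpha and radius r; return it, or None ("not found")."""
    if satisfies(formula, alpha):
        return alpha
    if r == 0:
        return None  # the radius-0 ball contains only alpha itself
    # Take any clause that alpha falsifies ...
    clause = next(c for c in formula
                  if not any(alpha[abs(l)] == (1 if l > 0 else 0) for l in c))
    # ... and recurse on the (at most three) assignments obtained by
    # flipping one bit of this clause; at least one branch gets closer
    # to a satisfying assignment inside the ball, if there is one.
    for lit in clause:
        flipped = dict(alpha)
        flipped[abs(lit)] ^= 1
        result = check_ball(formula, flipped, r - 1)
        if result is not None:
            return result
    return None
```

Since each call spawns at most three recursive calls and the depth is at most r, this makes at most 3 to the r calls, matching the lemma.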

Now we need to show how to use this procedure,

which checks whether a ball contains a satisfying assignment,

to check whether the given formula is satisfiable.

Well, assume that our initial formula, our formula F, is satisfiable,

and denote its satisfying assignment by beta. There is a simple observation.

Assume that there are more 1's than 0's in the assignment beta;

then I claim that its distance from the all-1's assignment is at most n over 2.

Well, this is clear: if there are more ones than zeros,

then the number of ones in this assignment is at least n over 2.

Which means that the distance to the all-1's assignment is at most n/2.

Otherwise, the distance of this assignment to the all zero assignment,

that is an assignment which assigns zeros to all the variables is also at most n/2.

This means that to check the satisfiability of the formula F,

it is enough to check whether any of two huge balls contain

a satisfying assignment.

Once again, if there is a satisfying assignment beta of the formula F,

then it is contained either in the ball of radius n/2 whose

center is the all-1's assignment, or

in the ball of radius n/2 whose center is the all-0's assignment.

So if none of these recursive calls finds a satisfying assignment,

then the initial formula is unsatisfiable for sure.
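The overall satisfiability check can be sketched as follows (again our own Python illustration; the ball-search procedure from above is repeated in compact form so the sketch is self-contained):

```python
def satisfies(formula, alpha):
    """A literal is a signed variable index: +i means x_i, -i means NOT x_i."""
    return all(any(alpha[abs(l)] == (1 if l > 0 else 0) for l in c)
               for c in formula)

def check_ball(formula, alpha, r):
    """The ball-search procedure described above, repeated here so that
    this sketch is self-contained."""
    if satisfies(formula, alpha):
        return alpha
    if r == 0:
        return None
    clause = next(c for c in formula
                  if not any(alpha[abs(l)] == (1 if l > 0 else 0) for l in c))
    for lit in clause:
        flipped = dict(alpha)
        flipped[abs(lit)] ^= 1
        result = check_ball(formula, flipped, r - 1)
        if result is not None:
            return result
    return None

def solve_3sat(formula, n):
    """Check satisfiability by searching the two balls of radius n // 2
    centered at the all-1's and the all-0's assignments: every assignment
    has at least n/2 ones or at least n/2 zeros, so any satisfying
    assignment lies in one of the two balls."""
    r = n // 2
    for bit in (1, 0):
        center = {i: bit for i in range(1, n + 1)}
        result = check_ball(formula, center, r)
        if result is not None:
            return result
    return None  # the formula is unsatisfiable for sure
```

Each of the two calls makes at most 3 to the n/2 recursive calls, which gives the running time discussed next.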

The running time of this algorithm is O of the size of the formula times 3

to the n/2.

This is just because we will make two calls to check for

two balls of radius n/2 and

each of these calls takes time roughly 3 to the n/2.

So this is roughly 1.733 to the n, which

is still exponential, still not acceptable in practice.

However, note at the same time that this is exponentially faster than a brute

force search algorithm.

This is because a brute force search algorithm runs in time 2 to the n while our

algorithm runs in time 1.733 to the n; the ratio of these running times is roughly 1.15 to the n.

So it is exponentially faster.

Already for n = 100, for example, our algorithm is roughly

1 million times faster than the brute force algorithm.

So this algorithm works well in theory.

And local search heuristics and local search ideas, in a

slightly different form, are also widely used in modern SAT solvers.