Hello and welcome to the next module of the Advanced Algorithms and

Complexity class.

In this module, we are going to meet problems that are computationally very hard.

In all previous modules, we considered many efficient algorithms for

various combinatorial problems.

By saying efficient, we usually mean a polynomial time algorithm, and this is why.

Consider the four algorithms shown here on the slide. The running time

of the first algorithm is just n, where as usual we denote by n the size of the input.

The running time of the second algorithm is n squared, so it is

a quadratic time algorithm; the third one is a cubic time algorithm; and

the last one has running time 2 to the n.

So here, we have polynomial time algorithms,

three polynomial time algorithms and the last one is exponential time.

The second row here in the table shows the maximum value of n for

which the total number of steps performed by the corresponding

algorithm stays below 10 to the 9.

Why 10 to the 9?

Well, just because this is roughly the estimate for

the number of operations performed by modern computers in one second.

So, we're interested in the maximum value of n for which the running time of

the corresponding algorithm stays below one second.

It is not difficult to compute these values.

For the first algorithm, this is 10 to the 9, of course.

For the second algorithm, this is roughly 10 to the 4.5, and for

the third one, it is 10 to the 3.
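These values from the table can be checked programmatically. Here is a small sketch (the budget constant and function names below are my own choices, not from the lecture) that finds, for each running time, the largest n whose step count stays within 10 to the 9:

```python
# Sketch: for each running time, find the largest n whose total step
# count stays within a budget of 10^9 (one second on a modern machine).

BUDGET = 10 ** 9

def max_input_size(steps):
    """Largest n with steps(n) <= BUDGET, found by doubling then bisecting."""
    n = 1
    while steps(2 * n) <= BUDGET:
        n *= 2
    lo, hi = n, 2 * n  # invariant: steps(lo) <= BUDGET < steps(hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if steps(mid) <= BUDGET:
            lo = mid
        else:
            hi = mid
    return lo

for name, f in [("n", lambda n: n),
                ("n^2", lambda n: n * n),
                ("n^3", lambda n: n ** 3),
                ("2^n", lambda n: 2 ** n)]:
    print(name, max_input_size(f))
```

For the linear algorithm this gives 10 to the 9 exactly, for the quadratic one about 31,600 (that is, 10 to the 4.5), for the cubic one 1,000, and for the exponential one just 29.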

So polynomial time algorithms are able to handle

instances of size roughly thousands or even millions.

While for the exponential time algorithm, the maximum value of n for

which it performs fewer than 10 to the 9 operations is roughly 30.

So, it allows us to process only very small instances.

Recall that any exponential function grows faster than any

polynomial function.

So for this reason,

exponential time algorithms are usually considered impractical.
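To make this concrete, here is a tiny sketch (the degree 10 and the function name are arbitrary choices of mine, not from the lecture) that finds the point after which 2 to the n permanently overtakes a given polynomial:

```python
# Illustration: 2^n eventually dominates any fixed polynomial n^d.
# This helper (an invented name) finds the smallest n after which
# 2^n stays above n^d, scanning candidates up to a fixed limit.

def final_crossover(degree, limit=200):
    """Smallest n such that 2^m > m^degree for all m >= n (checked for m < limit)."""
    last_win = max((n for n in range(1, limit) if n ** degree >= 2 ** n),
                   default=0)
    return last_win + 1

print(final_crossover(10))  # 2^n overtakes n^10 for good at n = 59
```

So even against a degree-10 polynomial, the exponential function wins already for n around 60.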

Note, however, the following thing.

Usually, for many computational problems,

the corresponding set of all candidate solutions is exponential.

Let me illustrate this with a few examples.

Assume that we are given n objects and our goal is to find an optimal

permutation, that is, a permutation of these objects that is optimal in some sense.

A naive way to do this would be to go through all

such permutations and to select an optimal one.

The running time of the corresponding algorithm,

however, is going to be at least n factorial,

because there are n factorial different permutations of n given objects.

And n factorial grows even faster than any exponential function, 2 to the n,

for example,

which means that the corresponding algorithm is going to be extremely slow.
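As a rough sketch of such brute force over permutations (the cost function below is invented purely for illustration):

```python
# Sketch: exhaustive search over all n! orderings of n objects.
# itertools.permutations enumerates every ordering, which is why
# the running time is at least n factorial.
from itertools import permutations
from math import factorial

def best_permutation(objects, cost):
    """Return an ordering of `objects` minimizing `cost`, by full enumeration."""
    return min(permutations(objects), key=cost)

# Toy cost (invented for this example): total difference between neighbors.
def total_adjacent_diff(order):
    return sum(abs(a - b) for a, b in zip(order, order[1:]))

print(best_permutation((3, 1, 4, 1, 5), total_adjacent_diff))
print(factorial(20))  # 20! is already about 2.4 * 10^18 steps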

Another example is the following.

Assume that we're given n objects and we need to split them into two sets.

For example, we need to partition the set of vertices of

a graph into two sets to find a cut.

Then again, a naive way to do this would be to go through all possible

partitions into two sets and to select an optimal one.

However, there are 2 to the n ways to split the n given objects into two sets.

And if we do this the running time of the algorithm is going to be at least 2 to

the n and we know that this is very slow.

This only allows us to handle instances of size roughly 30 in less than 1 second.
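Such a brute-force search over all bipartitions can be sketched as follows (the maximum cut objective and the tiny example graph are my own illustration, not from the lecture):

```python
# Sketch: brute-force maximum cut by trying all 2^n bipartitions of
# the vertex set, each encoded as an n-bit mask.

def max_cut(n, edges):
    """Largest number of edges crossing any cut, by trying all 2^n subsets."""
    best = 0
    for mask in range(2 ** n):  # each mask picks one side of the cut
        crossing = sum(1 for u, v in edges
                       if ((mask >> u) & 1) != ((mask >> v) & 1))
        best = max(best, crossing)
    return best

# A 4-cycle: separating opposite vertices cuts all four edges.
print(max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```

The outer loop runs 2 to the n times, so this is exactly the kind of exponential enumeration that caps us at instances of size about 30.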

Another example is assume we need to find a minimum spanning tree in a complete

graph, that is in a graph where we have an edge between every pair of vertices.

A naive way to do this would be to go through all possible

spanning trees and to select one with minimum weight.

However, the total number of spanning trees in a complete graph on n

vertices is n to the n-2.

Again, this grows even faster than 2 to the n and

this makes the corresponding algorithm completely impractical.
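The count n to the n-2 is Cayley's formula for the number of spanning trees of the complete graph. A quick sketch (the function name is mine, invented for illustration) comparing it with 2 to the n:

```python
# Sketch: Cayley's formula says the complete graph K_n has n^(n-2)
# spanning trees; compare this count with 2^n for a few values of n.

def num_spanning_trees_complete(n):
    """Number of spanning trees of the complete graph K_n (Cayley's formula)."""
    return n ** (n - 2)

for n in (5, 10, 20, 30):
    print(n, num_spanning_trees_complete(n), 2 ** n)
```

Already for n = 10 there are 10 to the 8 spanning trees, versus only about a thousand for 2 to the 10, and the gap keeps widening.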

So once again, in many cases, a polynomial time

algorithm is called efficient in particular

because it avoids going through the set of all possible candidate solutions,

which usually has exponential size.

In the rest of this module, we will learn that there are many computational

problems that arise frequently in practice for

which we don't know an efficient, that is, polynomial time, algorithm.

For such problems, roughly the best we can do is to go naively

through all possible candidate solutions and to select the best one.

It will also turn out, surprisingly, that all these seemingly different problems,

millions of problems, are related to each other.

Namely, if you design an efficient algorithm,

a polynomial time algorithm, for at least one of them, this will automatically

give you a polynomial time algorithm for all of these problems.