
Hello, my name is Alexander Kulikov, and

together we're going to study dynamic programming.

Dynamic programming is a powerful algorithmic paradigm with lots of

applications in areas like optimization, planning,

scheduling, bioinformatics, and many others.

At contests, it is one of the most popular types of problems.

A common feature of such algorithms is that the solution is

usually easy to implement.

At the same time, that does not mean the solution is also easy to find.

For this reason one needs a lot of practice to be able

to come up with dynamic programming solutions.

And this is exactly what we're going to do in this part of the class.

We start with a very soft introduction.

Fibonacci numbers are defined recursively.

F sub 0 = 0, F sub 1 = 1.

And each subsequent number is equal to the sum of two previous numbers.

Consider the following natural computational problem.

Given n, we would like to compute F sub n.

Since Fibonacci numbers are defined recursively,

it is natural to convert the recursive definition into a recursive algorithm.

And this is exactly what we're going to do.

Namely, given n, we first check whether n is at most 1.

If it is, we just return n.

Otherwise, we make two recursive calls to

compute F sub n- 1 and F sub n- 2.

And finally, we'll return the sum of the results of two recursive calls.

What is good about this algorithm is that it is clearly correct.

Indeed it directly implements the definition of Fibonacci numbers.
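As a sketch, the naive recursive algorithm just described might look as follows in Python (the function name is mine; the lecture's own code may differ in details):

```python
def fib_naive(n):
    # Base case: F_0 = 0 and F_1 = 1, so for n at most 1 we return n itself.
    if n <= 1:
        return n
    # Otherwise, two recursive calls, directly following the definition.
    return fib_naive(n - 1) + fib_naive(n - 2)
```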

What is bad about this algorithm is that it is catastrophically slow.

The slowness occurs because many computations

are repeated by this algorithm.

In the picture here, you see the tree of recursive calls.

For example, to compute F sub n,

the algorithm computes F sub n- 1 and F sub n- 2.

In turn, the recursive call to F sub n-

1 is unwound into two recursive calls to

F sub n- 2 and F sub n- 3, and so on.

As shown in the picture, F sub n- 3 is

computed at least three times in this case.

In fact, the algorithm effectively computes F sub n as the sum of F sub n ones.

And this is disaster because Fibonacci numbers

are known to grow exponentially fast.

In particular, F sub n is roughly equal to the golden ratio raised to the power n,

where the golden ratio is approximately 1.618.

In particular, there are more than 30 digits in F sub 150, and

this makes computing this Fibonacci number with our recursive

algorithm completely infeasible on modern computers.

As we have discussed already, the algorithm is slow

just because many computations are repeated.

At this point, it makes perfect sense to quote George Santayana.

Those who cannot remember the past are condemned to repeat it.

And this also suggests a fix to our previous algorithm.

Namely, we are going to remember the past.

We are going to save all our intermediate computations, okay?

The trick is known as memoization.

Here on the slide you see the new code.

Here we first create a table T.

In this Python code, this is just a dictionary.

Then to compute a Fibonacci number recursively, we do the following.

We first check whether F sub n is already stored in the table.

We only start computing it if it is not in the table yet.

To compute it as usual we first check whether n is at most 1,

and if it is, we just save the value n in the table.

Otherwise, we make two recursive calls.

And then compute the sum of the results and

store the resulting value in the table, okay?
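A minimal Python sketch of this memoized version, using a dictionary T as the table (the function name is mine):

```python
T = {}  # the table of Fibonacci numbers computed so far

def fib_memo(n):
    # Start computing F_n only if it is not stored in the table yet.
    if n not in T:
        if n <= 1:
            # Base case: save the value n itself in the table.
            T[n] = n
        else:
            # Two recursive calls; their results are cached in T,
            # so each F_i is computed at most once.
            T[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return T[n]
```

Thanks to the table, even fib_memo(100) now finishes instantly, whereas the naive recursion would need astronomically many calls.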

So at this point, you are probably surprised.

Why do we need all this fancy stuff, like memoization, dictionaries,

and recursion for solving such a simple computational task?

And you're absolutely right:

in fact, we can solve this problem in a straightforward way.

Indeed, this is how one would compute, for

example, the fifth Fibonacci number on a piece of paper.

We first write down the zeroth and the first values of the Fibonacci sequence.

We then compute F sub 2 as the sum of F sub 0 and F sub 1.

We then compute F sub 3, then F sub 4, and finally F sub 5.

And this strategy can be implemented in a straightforward way.

Namely, we first allocate an array of size n + 1.

We then initialize its first two values, and then in a simple loop,

we fill in this array with Fibonacci numbers.

Finally, we return F sub n.
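This iterative strategy might be sketched in Python like so (again, the function name is mine):

```python
def fib_list(n):
    if n <= 1:
        return n
    # Allocate an array of size n + 1 and initialize its first two values.
    f = [0] * (n + 1)
    f[1] = 1
    # Fill in the array with Fibonacci numbers in a simple loop.
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    # Finally, return F_n.
    return f[n]
```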

And even this simple algorithm can be improved, and

you have probably already guessed how.

In fact, we do not need to waste so much memory.

Namely we do not need to remember the whole past.

It is enough to store just the two current Fibonacci numbers.

And this can be implemented as follows.

So we're going to store the previous Fibonacci number and

the current Fibonacci number.

Initially, the current Fibonacci number is equal to F sub 1.

Then in a simple loop, we just move the current Fibonacci

number one position to the right n- 1 times.

So in the end, current is equal to F sub n.
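The constant-memory version might be sketched as follows (a minimal illustration; variable names are mine):

```python
def fib_two(n):
    if n == 0:
        return 0
    # Store only the previous and the current Fibonacci numbers.
    previous, current = 0, 1  # F_0 and F_1
    # Move the current number one position to the right n - 1 times.
    for _ in range(n - 1):
        previous, current = current, previous + current
    return current  # in the end, current is equal to F_n
```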

The resulting algorithm performs about n additions.

At the same time these are not additions of small numbers.

Recall that Fibonacci numbers grow exponentially fast.

From a theoretical point of view, this means that we should

not treat additions of such numbers as basic operations.

From a practical point of view, this means that, for example,

already F sub 100 does not fit into 64 bits.

So we need bignum arithmetic to store the results.

To summarize, the main idea of dynamic programming is

to avoid recomputing the same thing again.

This is usually done by saving all the intermediate results in a table.

We've used Fibonacci numbers to illustrate this idea.

It should be noted, however, that Fibonacci numbers are probably a slightly

artificial example of using dynamic programming.

Just because it was clear from the very beginning what we need

to compute in order to solve the original problem.

We will see more interesting examples in the next parts.