Okay, and what about timestep 3?

It's going to be trickier, because we don't know where the frog was before.

But using the law of total probability we can check all

the possibilities for the previous state and

use them to compute the next probabilities.

So the law of total probability tells us that the probability of the state at timestep 3

equals the probability of that state given that at timestep 2 the frog was on the left,

times the probability it was on the left at timestep 2,

plus the same thing, considering that

it was on the right at timestep 2.

In symbols: P(x3) = P(x3 | x2 = L) P(x2 = L) + P(x3 | x2 = R) P(x2 = R).

And we know all these values, because the conditional

probability is just the transition probability,

for example the probability that the frog moves from left to right,

and the probability of x2 was computed on the previous line of the table,

so it is what we have just computed.

And we can write it down, for the probability of being on the left

at timestep 3.

The first term is the probability of transitioning from left to left,

which is 0.3, times the probability of being on the left at timestep 2,

which is 0.3 from the previous line of the table.

Plus the probability of being

on the right at the previous timestep,

which is 0.7, times the transition probability

of moving from right to left, which is 0.5.

And we can write the same thing for

being in the right state at timestep 3, and

we'll get numbers around 0.44 and 0.56.
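This one-step update can be sketched in Python, using the transition probabilities from the example (0.3 for left-to-left, 0.7 for left-to-right, 0.5 for both moves out of the right state):

```python
# Transition probabilities from the example:
# P(L->L) = 0.3, P(L->R) = 0.7, P(R->L) = 0.5, P(R->R) = 0.5.
T = {("L", "L"): 0.3, ("L", "R"): 0.7,
     ("R", "L"): 0.5, ("R", "R"): 0.5}

# Distribution at timestep 2: the previous line of the table.
p2 = {"L": 0.3, "R": 0.7}

# Law of total probability:
# P(x3 = s) = sum over previous states s' of P(s' -> s) * P(x2 = s')
p3 = {s: sum(T[(prev, s)] * p2[prev] for prev in ("L", "R"))
      for s in ("L", "R")}
# p3["L"] = 0.3*0.3 + 0.5*0.7 = 0.44, and p3["R"] = 0.56
```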

And we can continue this table for as long as we want.

We'll find that it converges to numbers around 0.42 and 0.58.

So after 1 million steps,

the probability that the frog is on the left is almost exactly 0.42,

and the probability that it is on the right is almost exactly 0.58.
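Iterating the same update shows this convergence; for these transition probabilities the exact limit is the stationary distribution (5/12, 7/12) ≈ (0.4167, 0.5833), which rounds to the 0.42 and 0.58 quoted above. A minimal sketch:

```python
# Same transition probabilities as before.
T = {("L", "L"): 0.3, ("L", "R"): 0.7,
     ("R", "L"): 0.5, ("R", "R"): 0.5}

p = {"L": 1.0, "R": 0.0}  # the frog starts on the left
for _ in range(100):      # 100 updates is already far more than needed here
    p = {s: sum(T[(prev, s)] * p[prev] for prev in ("L", "R"))
         for s in ("L", "R")}

# p["L"] is now 5/12 = 0.4166..., p["R"] is 7/12 = 0.5833...
```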

Which basically means that if we simulate our frog for long enough and

then write down where it ended up on the one-millionth step,

we will get a sample from this discrete distribution over the two values.

So with probability 0.42 we'll get left, and with probability 0.58 we'll get right.

So let's look at how it works.

Say we simulate our frog for 1 million steps,

and the last state was, for example, left.

We write down this left and then simulate another run of the chain from the start.

At the end of this new simulation the last state was, for example, right.

We write it down too, and after repeating this several times,

we have a sample from the distribution of the millionth step of

our dynamic system, given that the frog started on the left.

And the sample frequencies will be close to the probabilities 0.42 and 0.58.

So even here we can see that we have two out of five lefts,

and three out of five rights, which is close to 0.42 and 0.58.
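The simulate-and-record procedure just described can be sketched as follows (the chain length of 100 steps and the number of runs are arbitrary illustrative choices):

```python
import random

# P(next state = L | current state): 0.3 from the left lily, 0.5 from the right.
P_TO_LEFT = {"L": 0.3, "R": 0.5}

def final_state(start, n_steps, rng):
    """Simulate the frog for n_steps jumps and return where it ends up."""
    state = start
    for _ in range(n_steps):
        state = "L" if rng.random() < P_TO_LEFT[state] else "R"
    return state

rng = random.Random(0)
# Each run contributes one (approximate) sample from the limit distribution.
samples = [final_state("L", 100, rng) for _ in range(5000)]
frac_left = samples.count("L") / len(samples)  # roughly 0.42
```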

So this way we can generate samples, although it's a really inefficient way to

generate samples from a distribution with just two values.

So there's a much simpler way which we have just discussed in the previous video.

But note that this approach, building a dynamic system, simulating it, and

seeing where it ends up, works even for more complicated cases.

So what if you have for example ten lilies?

Or 1 billion lilies?

If you want to sample from your distribution in the naive way,

and your variable has 1 billion possible values, you'll have to spend

an amount of time proportional to 1 billion to generate one sample from it.

If instead you use this idea of simulating some dynamic system and

then writing down the last position,

and if you do not that many steps, say 1,000, and hope that everything has

converged to the true distribution, then you can get a random sample from this

complicated distribution without the excessive cost of the naive approach.

Or maybe the frog's position can be continuous, and

it can jump from any point on your 2D plane to any other point.

In this case, you will be able to generate your samples from

a continuous distribution by simulating your dynamic system.

The point is that you can still sample efficiently,

even if you have this continuous dynamic system,

by just simulating the frog's path on the 2D plane and writing down the last position.
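As a concrete, hypothetical continuous example (not from the lecture): a frog on the 2D plane whose jumps follow the rule x_{t+1} = a·x_t + Gaussian noise in each coordinate. Such a chain is known to converge to a Gaussian stationary distribution with variance s²/(1 − a²) per coordinate, so recording the final positions of many runs yields samples from that Gaussian:

```python
import random

a, s = 0.5, 1.0  # pull toward the origin, and jump noise scale (assumed values)
rng = random.Random(0)

def final_position(n_steps):
    """Simulate the continuous 'frog' and return its last position."""
    x, y = 0.0, 0.0  # start at the origin
    for _ in range(n_steps):
        x = a * x + rng.gauss(0.0, s)
        y = a * y + rng.gauss(0.0, s)
    return x, y

finals = [final_position(200) for _ in range(2000)]
# Empirical variance of the x-coordinate: close to s**2 / (1 - a**2) = 4/3.
var_x = sum(x * x for x, _ in finals) / len(finals)
```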

So the idea of Markov chain simulation applied

to generating samples of random values is as follows.

We have a distribution we want to sample from.

It may be continuous, it may be discrete.

We build a Markov chain that converges to this distribution, okay?

Then we start from some initialization, x0, and then we simulate

our Markov chain for long enough, for example for 1,000 iterations.

So we start with x0, then we generate x1

by using the transition probabilities, and so on.

And then we hope that after 1,000 steps,

the chain will have converged to the true distribution p(x),

because we built our Markov chain this way.

Then, after these 1,000 steps,

you can throw away the first 1,000 states.

And you can start writing down the states that come after,

because they will all be from the distribution p(x),

or at least close to it, if your Markov chain has converged closely enough.
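A sketch of this recipe on the two-lily chain (the burn-in length and the number of recorded states are arbitrary illustrative choices):

```python
import random

P_TO_LEFT = {"L": 0.3, "R": 0.5}  # transition probabilities from the example
rng = random.Random(1)

state = "L"                  # x0: an arbitrary initialization
burn_in, n_keep = 1000, 20000
samples = []
for t in range(burn_in + n_keep):
    state = "L" if rng.random() < P_TO_LEFT[state] else "R"
    if t >= burn_in:         # discard the first 1,000 states
        samples.append(state)

frac_left = samples.count("L") / len(samples)  # close to 5/12, i.e. about 0.42
```

Note that consecutive recorded states are correlated; that is still fine for estimating expectations under p(x), but the recorded states are not independent draws.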