What we have is p1, which is the old price of 1, plus y1, which is one third plus one half, minus c1, which is one. This is already a positive number, so the projection to zero does not kick in. To three decimal places, this is 0.833. p2 follows the same pattern, so we look at basically one plus, well, half minus one, that is 0.5.

Similarly you can write out p3, which is 0.333, and p4, which is 1 plus the load, which is 1.333, minus the capacity, one. Again, this one is the capacity of one megabit per second, whereas that one was the last iteration's price; even though they both are one, they have totally different physical meanings and units, okay?

And now, with this step size β, however, the units basically get reconciled. The numerical value of β here is one, okay? So this means p4 is 1.333, just as we expect intuitively.

Now, the first three links' prices drop, because you are not fully using their capacities, but the fourth link's price actually increases from one in the last iteration, because its load exceeds the capacity. And now, we can write down the q's.

qa is now the sum of the three link prices on session a's path: 0.833 plus 0.333 plus 1.333, which adds up to 2.5. qb is 1.333, and qc is 1.333. This implies the rates the sources pick next: xa is one over 2.5, which is 0.4, xb is one over 1.333, which is 0.75, and xc is one over 1.333, which is 0.75.
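These q and x values can be checked mechanically. The link-to-session mapping below is an assumption reconstructed from the numbers in the example (session a on links 1, 3, and 4; session b on links 1 and 2; session c on link 4):

```python
import numpy as np

p = np.array([0.833, 0.5, 0.333, 1.333])  # link prices at this iteration
R = np.array([[1., 1., 0.],   # assumed routing matrix: rows = links 1..4,
              [0., 1., 0.],   # columns = sessions a, b, c
              [1., 0., 0.],
              [1., 0., 1.]])

q = R.T @ p   # path price per session: approx (2.5, 1.333, 1.333)
x = 1.0 / q   # log-utility best response: approx (0.4, 0.75, 0.75)
print(np.round(q, 2), np.round(x, 2))
```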

And now you already see that sessions b and c are being treated the same way, driven basically by the topology: they both use one bottleneck link. At iteration two, that fact is already implicitly captured, because by this time we see both facing the same path price, okay? And both rates are bigger than the 0.4 of session a, which uses two bottleneck links.

And now from x you can update y, from y you can update p, from p you can update

q, back to update x. And this iteration continues.
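This update loop can be sketched in a few lines of Python. The routing matrix here is an assumption reconstructed from the numbers in this example (session a on links 1, 3, and 4; session b on links 1 and 2; session c on link 4), and the sources use logarithmic utilities, so each source's best response to a path price q is x = 1/q:

```python
import numpy as np

# Assumed routing matrix: rows = links 1..4, columns = sessions a, b, c.
R = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [1., 0., 0.],
              [1., 0., 1.]])
c = np.ones(4)   # each link has capacity 1 (Mbps)
p = np.ones(4)   # initial link prices
beta = 1.0       # step size

for _ in range(50):
    q = R.T @ p                              # path price seen by each session
    x = 1.0 / q                              # log utility: solve U'(x) = 1/x = q
    y = R @ x                                # resulting link loads
    p = np.maximum(p + beta * (y - c), 0.0)  # projected price update

print(np.round(x, 3))  # converges to approximately [0.333, 0.667, 0.667]
```

With β = 1, the rates settle to x* = (1/3, 2/3, 2/3) within a couple dozen steps, consistent with the fourteen-or-so iterations shown on the lecture's graphs.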

And this is the iteration that you see converge over about fourteen iterations. One graph shows the source rates; the other graph shows the link prices. So you've got three curves, xa, xb, xc, here.

And you can see them converging to what? To the optimal solution: xa* is one third, and xb* equals xc* equals two thirds. Exactly the intuitive answer we mentioned as the proportionally fair, efficient, and of course feasible, answer to the NUM problem.

The corresponding link prices at convergence are p1* = p4* = 1.5, whereas p2* = p3* = 0. So we can see, they actually get to zero within four iterations.

And just to have a sanity check, does this make sense?

Indeed, you can see the routing matrix multiplying this answer of (1/3, 2/3, 2/3) equals (1, 2/3, 1/3, 1), which is indeed less than or equal to (1, 1, 1, 1), the capacity of one megabit per second for every link.
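A quick sketch of this feasibility check, again with the routing matrix reconstructed from the example's numbers as an assumption (session a on links 1, 3, and 4; session b on links 1 and 2; session c on link 4):

```python
import numpy as np

# Assumed routing matrix: rows = links 1..4, columns = sessions a, b, c.
R = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [1., 0., 0.],
              [1., 0., 1.]])
x_star = np.array([1/3, 2/3, 2/3])  # converged rates
c = np.ones(4)                      # link capacities, 1 Mbps each

y_star = R @ x_star                 # link loads: (1, 2/3, 1/3, 1)
print(np.all(y_star <= c + 1e-9))   # True: the allocation is feasible
```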

So it is feasible, and you can check efficiency, and you can check proportional fairness, but you can also check that, hey, links one and four are kind of different. Right?

Because they are completely used up in capacity.

They form the bottleneck links. And we can detect bottleneck links

because the corresponding prices are positive.

By the so-called complementary slackness property in the Lagrange duality theory of optimization, which we will briefly mention in the advanced material part, we can say that when the optimal price is actually non-zero, strictly positive, then the corresponding links must be bottleneck links.

Conversely, if some links are not bottleneck links, for example links two and three, then they are slack: you are not even fully utilizing the given capacities, okay? They are not the reason you stop adding rate at the sources, so the corresponding optimal prices must be zero. This is called the complementary slackness property. Alright, so that was a numerical example.
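A minimal numerical check of complementary slackness on this example, using the converged prices and link loads read off earlier:

```python
import numpy as np

p_star = np.array([1.5, 0.0, 0.0, 1.5])  # optimal link prices
y_star = np.array([1.0, 2/3, 1/3, 1.0])  # optimal link loads
c = np.ones(4)                           # link capacities

# Complementary slackness: p_l * (c_l - y_l) = 0 on every link --
# either the price is zero (slack link) or capacity is fully used (bottleneck).
slack = p_star * (c - y_star)
print(np.allclose(slack, 0.0))  # True
```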

You may feel a little disconnect between the first two modules of the course and the last two modules, okay? Between the engineering artifacts of TCP congestion control protocols on the one hand, and the mathematical models of capacity allocation through NUM and its distributed solution on the other. But actually, there is a tight coupling. About ten years after the invention of TCP Tahoe, applied mathematicians and engineers collectively figured out a good way to understand the engineering artifacts of TCP protocols. In particular, they have reverse engineered TCP Reno as implicitly maximizing a specific utility function, an arctangent function, with packet loss used as the congestion price, or link price, or mathematically the Lagrange dual variables p.

Whereas TCP Vegas has been reverse engineered as implicitly maximizing a logarithmic utility, with delay used as the link congestion price.
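This reverse engineering can be made concrete: at equilibrium, each source picks the rate where its marginal utility equals its path price, x = (U')^(-1)(q). The sketch below uses a plain logarithm for the Vegas-like case and a simplified arctan(x) utility for the Reno-like case; the actual reverse-engineered Reno utility involves RTT-dependent constants, so this is illustrative only:

```python
import math

def rate_from_utility(q, utility="log"):
    """Equilibrium rate x solving U'(x) = q (marginal utility = path price)."""
    if utility == "log":       # Vegas-like: U(x) = log x, so U'(x) = 1/x
        return 1.0 / q
    if utility == "arctan":    # simplified Reno-like: U(x) = arctan x, U'(x) = 1/(1 + x^2)
        return math.sqrt(max(1.0 / q - 1.0, 0.0))
    raise ValueError(utility)

print(rate_from_utility(2.5))            # 0.4  (log utility, as in the example)
print(rate_from_utility(0.5, "arctan"))  # 1.0
```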

In other words, if you give me an engineering protocol, I can give you, in return, the underlying optimization or game problem implicitly being solved by that protocol. Now this is a weird angle.

It's like, you give me the solution, I'll tell you the problem.

You may say, I've already got the solution, why do I care about the problem? Well, knowing the problem is a good way to do forward engineering: to understand why sometimes it works, sometimes it doesn't, and how you can improve it. And indeed, reverse engineering has led to several interesting implications.

For example, one implication is that for TCP Reno, if you increase the buffer size of the routers, it does not help with equilibrium packet loss. Mathematically, this is easy to see, because in TCP Reno, the NUM problem is maximizing arctangent utility, okay? Subject to link capacity constraints. The buffer size does not even come into the problem definition. So if you are the solution to a problem that is not even defined in terms of buffer size, the solution cannot possibly depend on

buffer size. And packet loss is the solution to the

dual problem. So packet loss cannot be dependent on the

buffer size at equilibrium. Physically, what happens is that you can

make the buffer bigger and bigger. It will just delay the onset of

congestion. We just have to wait a bit longer until

all the sources pump so much that for any finite sized buffer, you will overflow

it. So you make congestion start later, but

you not avoid congestion at equilibrium. Another implication this time for labels

is that because it's been reverse engineered and shown to be maximizing

logarithmic utility, it actually achieves proportional fairness in the allocation.

If it can converge, back to that question, can it converge?

Well mathematically we'll say we need small enough step size beta which is not

always the case for Vegas. But you can change the gain parameters at the sources, as inspired by this reverse engineering, and carry out a forward engineering. This is actually what has led to the development and deployment of FAST TCP, which has been commercialized in certain long-distance, high delay-bandwidth-product kinds of networks. Okay,

so that's an example of going from reverse engineering to forward

engineering. You know delay based, congestion like

TCP, contra porto, is going to give you some good result like proportional fair,

provided you can make it converge. And now you can work hard to make it

converge. Now we're going to go through a little bit of reverse engineering in the advanced material part of the video, and then a little bit further in one of the homework problems, if you're interested in going through this long advanced material section. As you can tell, it is the longest advanced material, I guess, across all the lectures in this course. Now, this spirit of reverse engineering is not new to us. We were trying to reverse engineer topological properties, such as small world and scale-free. And we saw the potential pitfalls of reverse engineering without predictive power.

Now we are reverse engineering functionality, especially protocols such as congestion control. And we'll see that this needs to be validated with empirical evidence, and indeed it has been, and it has been used for forward engineering. Okay.

So in summary, we have touched on quite a few different corners today.

We talked about the principles, five principles of distributed congestion control. It is a network version of the economic principle of demand-supply regulation. We've also looked at the mechanics of some of the popular congestion control protocols in TCP. Then we looked at the mathematics of capacity allocation, formulated as an optimization problem and solved distributively.

But equally important are the messages in the big picture.

One is that feedback signals can be generated and then used for distributed coordination in a network, in this case the Internet. So we can view the Internet running TCP as the largest man-made machine, solving,