Let's start by writing out the routing matrix for this problem. It is a four-by-three matrix, because there are four links and three sessions, and you can fill it in column-wise or row-wise. The first column corresponds to session a, which uses links 1, 3, and 4, so it reads 1011: the first, third, and fourth links are used, and the second link is not used by session a. The second column corresponds to session b, which uses links 1 and 2, so the first two positions are ones, followed by zeros. The third column corresponds to session c, which only uses link 4, so it reads 0001. You can also see that link 1, the first row of the routing matrix, is used by two sessions, a and b. So now we can write down the constraint: this routing matrix, with columns 1011, 1100, and 0001, multiplying the rates of the three sessions, which we label xa, xb, xc, must be less than or equal to the four-by-one vector of link capacities. As mentioned before, we assume unit capacity on each link: one megabit per second. That's the constraint, and of course we need xa, xb, and xc to be non-negative to be physically meaningful. Now the objective function is to maximize the sum of utilities. Let's say they all have the same utility shape, logarithmic functions, to enforce proportional fairness. So: maximize log(xa) + log(xb) + log(xc). This is an optimization problem, maybe the fourth or fifth one we've encountered throughout this course, with a couple more coming up: maximize a decomposable sum of logarithmic, and therefore concave, utility functions, subject to linear capacity constraints and non-negativity. The routing matrix induces coupling, but, as in the advanced material, we can decouple it and run the following algorithm.
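To make the setup concrete, here is a minimal sketch (not part of the lecture) of the routing matrix and the feasibility check R x <= c; the function name `feasible` is just an illustrative choice.

```python
# Routing matrix of the lecture's example: rows = links 1..4,
# columns = sessions a, b, c; all capacities are 1 Mbps.
R = [
    [1, 1, 0],  # link 1: used by sessions a and b
    [0, 1, 0],  # link 2: used by session b
    [1, 0, 0],  # link 3: used by session a
    [1, 0, 1],  # link 4: used by sessions a and c
]
c = [1.0, 1.0, 1.0, 1.0]  # link capacities in Mbps

def feasible(x):
    """Check R x <= c componentwise for non-negative rates x = (xa, xb, xc)."""
    if any(xi < 0 for xi in x):
        return False
    loads = [sum(R[l][i] * x[i] for i in range(3)) for l in range(4)]
    return all(y <= cl + 1e-9 for y, cl in zip(loads, c))
```

For example, `feasible((1/3, 2/3, 2/3))` is True, while `feasible((1, 1, 1))` is False because link 1 would carry 2 Mbps.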
The algorithm says that xi equals U' inverse of qi, and the new pl equals the current pl plus a step size beta times (yl minus cl), with the whole thing projected to be non-negative. Again, remember the shorthand notation: qi is the sum of the link prices along the links used by source i, and yl is the sum of the source rates coming from the sources i using link l. With this shorthand notation, I'm just summarizing what we derived in the last module. All right, so let's write this out. Say we initialize the prices, p1 = p2 = p3 = p4 = 1 unit, at time zero, and pick the step size beta = 1 as well. In a homework problem you will experiment with different step sizes, and you'll see that when the step size is too big it leads to divergence, while when it is too small convergence is guaranteed but very slow. So there is a trade-off between the guarantee of convergence and the speed of convergence, if you actually get it. For now we pick beta = 1 to illustrate the algorithm. Now the iteration starts. At time t = 1, first look at the q's. For session a, qA = p1 + p3 + p4 from the last iteration, that is 1 + 1 + 1 = 3 units. qB, the path price for session b, is 2, and qC, the path price for session c, is 1 unit. This is easy to see: it basically says that session a uses three links, session b uses two, and session c uses one. Isn't this the flawed intuition we mentioned early on? Shouldn't we be counting the bottleneck links, not just the number of links? Well, we'll come to that right around the corner, in the second iteration. At this point, that fact has not been captured yet.
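One iteration of the update just described can be sketched in a few lines, assuming log utilities so that U'^{-1}(q) = 1/q; the helper name `step` is an illustrative choice, not anything from the lecture.

```python
# One iteration of the dual algorithm for the lecture's example:
# sources pick rates from path prices, then links reprice from loads.
R = [[1, 1, 0], [0, 1, 0], [1, 0, 0], [1, 0, 1]]  # rows: links, cols: sessions
c = [1.0] * 4
beta = 1.0  # step size; too large diverges, too small converges slowly

def step(p):
    """Map current link prices p to (source rates, new link prices)."""
    q = [sum(R[l][i] * p[l] for l in range(4)) for i in range(3)]  # path prices
    x = [1.0 / qi for qi in q]                                     # x_i = 1/q_i
    y = [sum(R[l][i] * x[i] for i in range(3)) for l in range(4)]  # link loads
    p_new = [max(0.0, p[l] + beta * (y[l] - c[l])) for l in range(4)]
    return x, p_new
```

Starting from p = (1, 1, 1, 1), this reproduces the first iteration: rates (1/3, 1/2, 1) and new prices (0.833, 0.5, 0.333, 1.333).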
As soon as the sources start responding, we will see it. The source rates work like this: x is U' inverse of q, and for the log utility function the demand function U' inverse is just the reciprocal. So xA = 1/qA = 1/3, xB = 1/2, and xC = 1, measured in megabits per second. At this point, look at what happens to the link loads. Clearly the load on link 4 exceeds its capacity: we are adding 1/3 and 1, which is bigger than the one unit it has. The other three links are under-loaded. So at the next iteration you expect the prices on the first three links to become smaller and the price on the fourth link to become higher, and that's indeed the case. The new p1 is the p1 from the last iteration, plus beta, which is just 1, times the current link load minus c1. Last time p1 was 1, and now we add y1, which is 1/3 + 1/2, minus c1, which is 1. The result is still non-negative, so the projection does not kick in. To three decimal places, p1 = 0.833. p2 follows the same pattern: 1 plus (1/2 minus 1), which is 0.5. Similarly, p3 = 0.333, and p4 = 1 plus the load, 1.333, minus the capacity, 1. Again, the capacity is one megabit per second, and the 1 being added is the last iteration's price; even though both numbers are 1, they have totally different physical meanings and units, and the numerical value beta = 1 is what makes the units work out. So p4 = 1.333. Just as we expect intuitively, the first three links' prices drop, because their capacities are not fully used, while the fourth link's price increases from 1 in the last iteration. And now we can write down the q's. qA is now p1 plus p3 plus p4.
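As a quick arithmetic sketch of the price updates above, with the link loads written out by hand:

```python
# Verifying the iteration-1 price updates with beta = 1 and unit
# capacities: p_l <- max(0, p_l + beta * (y_l - c_l)).
beta, cap = 1.0, 1.0
p_old = [1.0, 1.0, 1.0, 1.0]
# Link loads implied by the rates x_A = 1/3, x_B = 1/2, x_C = 1:
y = [1/3 + 1/2,  # link 1 carries sessions a and b
     1/2,        # link 2 carries session b
     1/3,        # link 3 carries session a
     1/3 + 1]    # link 4 carries sessions a and c
p_new = [max(0.0, p + beta * (yl - cap)) for p, yl in zip(p_old, y)]
# p_new is approximately [0.833, 0.5, 0.333, 1.333], matching the
# numbers computed in the lecture.
```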
Those add up to 2.5. qB is 1.333, and qC is 1.333. This implies that at the next time, xA = 1/2.5 = 0.4, xB = 1/1.333 = 0.75, and xC = 1/1.333 = 0.75. And now you already see that sessions b and c are being treated the same way, as driven by the topology: each uses one bottleneck link, and by iteration two that fact is implicitly captured, because both see the same path price. Both are bigger than the 0.4 of session a, which uses two bottleneck links. From x you can update y, from y you can update p, from p you can update q, and from q you are back to updating x, and this iteration continues. This is what you see over about fourteen iterations: one graph shows the source rates, with the three curves xA, xB, xC, and the other shows the link prices. You can see them converging to what? To the optimal solution: xA* = 1/3, and xB* = xC* = 2/3. Exactly the intuitive answer we mentioned as the proportionally fair, efficient, and of course feasible answer to the NUM problem. The corresponding link prices at convergence are p1* = p4* = 1.5, whereas p2* = p3* = 0; in fact, those two get to zero within four iterations. Just as a sanity check, does this make sense? Indeed, the routing matrix multiplying this answer (1/3, 2/3, 2/3) gives (1, 2/3, 1/3, 1), which is indeed less than or equal to (1, 1, 1, 1), the capacities of one megabit per second everywhere. So it is feasible, and you can check efficiency and proportional fairness. But you can also check that links 1 and 4 are different: their capacities are completely used up, so they form the bottleneck links. And we can detect bottleneck links because the corresponding prices are positive.
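The whole loop, iterated to convergence, can be sketched as follows; the iteration count of 500 is an arbitrary choice, comfortably past the roughly fourteen iterations shown in the lecture's graph.

```python
# Iterating the dual algorithm with the routing, capacities, and log
# utilities of the lecture's example.
R = [[1, 1, 0], [0, 1, 0], [1, 0, 0], [1, 0, 1]]  # rows: links, cols: sessions
c = [1.0] * 4
beta = 1.0

p = [1.0] * 4  # initial link prices
for _ in range(500):
    q = [sum(R[l][i] * p[l] for l in range(4)) for i in range(3)]  # path prices
    x = [1.0 / qi for qi in q]                                     # source rates
    y = [sum(R[l][i] * x[i] for i in range(3)) for l in range(4)]  # link loads
    p = [max(0.0, p[l] + beta * (y[l] - c[l])) for l in range(4)]  # reprice

# x approaches (1/3, 2/3, 2/3) and p approaches (1.5, 0, 0, 1.5).
# Complementary slackness: each product p_l * (c_l - y_l) goes to zero,
# since links 1 and 4 become tight while links 2 and 3 get zero price.
slackness = [p[l] * (c[l] - y[l]) for l in range(4)]
```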
By the so-called complementary slackness property in the Lagrange duality theory of optimization, which we will briefly mention in the advanced material part, when an optimal price is non-zero, strictly positive, the corresponding link must be a bottleneck link. Conversely, if some links are not bottleneck links, for example links 2 and 3, then they are slack: you are not even fully utilizing their given capacities, they are not the reason you stop adding rate at the sources, and the corresponding optimal prices must be zero. This is called the complementary slackness property. All right, so that was a numerical example. You may feel a little disconnect between the first two modules and the last two: between the engineering artifact of TCP congestion control protocols on the one hand, and the mathematical models of capacity allocation, formulated as NUM and solved distributively, on the other. But actually there is a tight coupling. About ten years after the invention of TCP Tahoe, applied mathematicians and engineers collectively figured out a good way to understand the engineering artifact of TCP protocols. In particular, they reverse engineered TCP Reno as implicitly maximizing a specific utility function, an arctangent, with packet loss used as the congestion price, or link price, or mathematically the Lagrange dual variables, the p's. TCP Vegas, in turn, has been reverse engineered as implicitly maximizing a logarithmic utility, with delay used as the link congestion price. In other words, if you give me an engineering protocol, I can give you, in return, the underlying optimization (or game) problem implicitly being solved by that protocol. Now this is a weird angle. It's like: you give me the solution, I'll tell you the problem. You may say, I've already got the solution, why do I care about the problem?
Well, knowing the problem is a good way to do forward engineering: to understand why it sometimes works and sometimes doesn't, and how to improve it. And indeed, reverse engineering has led to several interesting implications. For example, one implication is that for TCP Reno, increasing the buffer sizes of the routers does not help with the equilibrium packet loss. Mathematically, this is easy to see: in TCP Reno, the NUM problem is maximizing arctangent utility subject to link capacity constraints, and the buffer size does not even come into the problem definition. The solution to a problem that is not even defined in terms of buffer size cannot possibly depend on buffer size. And packet loss is the solution to the dual problem, so packet loss at equilibrium cannot depend on the buffer size either. Physically, what happens is that you can make the buffer bigger and bigger, and it will just delay the onset of congestion: we just have to wait a bit longer until the sources pump so much that any finite-sized buffer overflows. You make congestion start later, but you do not avoid congestion at equilibrium. Another implication, this time for Vegas, is that because it has been reverse engineered and shown to be maximizing logarithmic utility, it actually achieves proportional fairness in the allocation, if it can converge. Back to that question: can it converge? Mathematically, we need a small enough step size beta, which is not always the case for Vegas. But you can change the protocol parameters at the sources, as inspired by this reverse engineering, and carry out a forward engineering. This is actually what led to the development and deployment of FAST TCP, which has been commercialized in certain long-distance, high delay-bandwidth-product kinds of networks.
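To illustrate the difference the reverse-engineered utilities make, here is a sketch of the two demand functions x = U'^{-1}(q), using a simplified arctangent utility U(x) = arctan(x) for the Reno-style source and U(x) = log(x) for the Vegas-style source; the actual reverse-engineered Reno utility carries RTT-dependent constants that are omitted here.

```python
import math

def demand_log(q):
    """Vegas-style demand: U(x) = log(x) gives U'(x) = 1/x, so x = 1/q."""
    return 1.0 / q

def demand_arctan(q):
    """Simplified Reno-style demand: U(x) = arctan(x) gives
    U'(x) = 1/(1 + x^2), so x = sqrt(1/q - 1) for prices q < 1,
    and the rate drops all the way to zero once q >= 1."""
    return math.sqrt(1.0 / q - 1.0) if q < 1.0 else 0.0
```

Note the qualitative difference: the log demand stays positive at any finite price, while the arctangent demand hits zero at a finite price, one way to see that the two protocols respond differently to congestion signals.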
Okay, so that's an example of going from reverse engineering to forward engineering. You know that a delay-based congestion control protocol like TCP Vegas is going to give you a good result like proportional fairness, provided you can make it converge, and now you can work hard to make it converge. We're going to go through a little bit of reverse engineering in the advanced material part of this video, and a little bit further in one of the homework problems, if you're interested in going through this long advanced material section; as you can tell, it is the longest advanced material across all the lectures in this course. Now, this spirit of reverse engineering is not new to us. We were trying to reverse engineer topological properties such as small world and scale-free, and we saw the potential pitfalls of reverse engineering without predictive power. Now we are reverse engineering functionality, especially protocols such as congestion control, and we'll see that it can be validated with empirical evidence; indeed it has been, and it has been used for forward engineering. Okay, so in summary, we have touched on quite a few different corners today. We talked about the five principles of distributed congestion control, a network version of the economics principle of demand-supply regulation. We also looked at the mechanics of some of the popular congestion control protocols in TCP. Then we looked at the mathematics of capacity allocation, formulated as an optimization problem and solved distributively. But equally important are the messages in the big picture. One is that feedback signals can be generated and then used for distributed coordination in a network, in this case the Internet. So we can view TCP on the Internet as the largest man-made machine solving an optimization in the form of NUM.
Just like we can view Google PageRank as the largest man-made machine for eigenvector computation, you can view TCP on the Internet as the largest man-made machine implicitly solving for network utility maximization. And that insight came from this reverse engineering angle, which says you can view network protocols, like those controlling the Internet, analyze them, and even design them as control laws. It's easy for me to say today, but fifteen or twenty years ago it was quite an innovative angle. With that, we're going to stop our discussion of TCP/IP, the foundation of the Internet, and move on to P2P networks in the next lecture.