So we just got done illustrating Qualcomm's famous algorithm called Distributed Power Control, which solves the near-far problem. It has a number of characteristics that are very unique and very nice, and that make it a sophisticated algorithm that's easy to run. The first one is that it is completely distributed. As we said, the first word is "distributed." So what exactly do we mean by distributed? Well, the idea is that the tower really doesn't have to do much work at all. The computation is all happening in each of these phones right here. All the tower has to do is measure all the SIRs and send them back to each of the cellphones, and then each of the devices, on the device side, updates its own power level. That's the first point of distributed computation: there's no centralized mechanism. There's no big supercomputer in here doing all the computation. None of that. All the computation happens on this end, at the device side.

Now, the second part of distributed computation is that none of these devices has to have any knowledge of what the other devices are doing. All a device needs to know is its own measured SIR, because the effect that all the other devices are imposing on it is summarized in that measured SIR. It doesn't need to know anybody else's measured SIR. It doesn't need to know anybody else's channel conditions or anybody else's current transmit powers. Nothing. All it needs to know is its own parameters. So when we say distributed computation, we mean two things. First, there's no centralized computation; it's up to the device side to do all of the math. And second, no device needs any knowledge of what the other devices are doing.
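A minimal sketch of the device side of this idea, with made-up function and variable names: each device runs one small update using only its own current power, its own measured SIR (reported back by the tower), and its own target SIR. Nothing about the other devices appears anywhere in the function.

```python
# Hypothetical sketch of the per-device DPC update. The only inputs are the
# device's own quantities -- no other device's power or channel state is used.

def dpc_update(current_power, measured_sir, target_sir):
    """One DPC iteration: scale power by the ratio of target SIR to measured SIR."""
    return current_power * (target_sir / measured_sir)

# A device measuring an SIR of 1.0 while targeting 2.0 doubles its transmit power.
print(dpc_update(current_power=0.5, measured_sir=1.0, target_sir=2.0))  # 1.0
```

If the measured SIR is below the target, the ratio is greater than one and the device transmits more aggressively; if it is above the target, the device backs off.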
So, DPC is the solution to the near-far problem, and it has many other desirable characteristics as well. First, it's highly scalable, and that's a result of it being distributed. The way to see that is to note that each of these devices is doing its own computation. So every time we put another device in the cell, it's not going to add any strain on some centralized mechanism, because we don't have one. As we will see later on in the course, achieving scalability in networks is a very important thing, and making things distributed is a way of achieving that scalability, because every device that comes into the network does its own work and doesn't need some server doing its work for it.

Another thing is that it's very inexpensive in terms of computation. It's just a simple one-step algorithm: it takes the measured SIR and the desired SIR, takes the ratio, and multiplies it by the current transmit power. That's all it needs to do, over and over again. And it really needs to be inexpensive, because it's done something like 1,500 times each second. Something happening that quickly really needs to be inexpensive, and it needs to be able to be done very fast, so that the power levels can reconverge quickly.