Let me go back, and we're going to run this through Mathematica just to show you the steps. The setup, the truth, the measurements, the q parameters I'm going to evaluate, all of that is the same, because we still need the K matrix, so I'm just going to evaluate all those cells. Now we're saying our initial guess is just the sum of the weights, and you saw the true answer was very close to 2. But this isn't quite the true optimal answer yet. If I just used that first guess and didn't iterate, you can see that before we had an error of about 1.67 degrees, and now I get 1.70. I'm only three hundredths of a degree off just by using the sum of the weights as the optimal answer. So that shows you quickly that okay, I can do better, but man, even with the first guess I'm pretty darn close. That's exciting, so let's iterate on this.

This is my determinant expression, and I've made a function out of it so I can call it, along with the derivative of that function with respect to this s value. The roots of that determinant are the eigenvalues of K, so this is where your classic root-finding comes in. Let's step through it. That's the function, and my initial guess is just the sum of the weights, which equals 2, and we saw that got us to within a few hundredths of a degree of the answer. So I evaluate this f function: if 2 were the root of the determinant, this would have been 0. I'm close to 0, it's in the thousandths, around 10 to the minus 3, but I'm not quite there. Now I do an adjustment with the Newton-Raphson method, and you can see after one adjustment (I'm only showing you a finite number of digits here) this starts to look pretty darn close to what we saw with the eigenvalue/eigenvector problem. And sure enough, my difference from 0 drops from 10 to the minus 3 to 10 to the minus 6. One correction gave me three orders of magnitude improvement. Most applications are already done after one iteration; this is why it's so much faster than an eigenvalue/eigenvector solution, it only takes a very small number of iterations.

But we're not happy with 10 to the minus 6, are we? So we're going to do better. I'm not showing enough digits here, but you can see that with a second iteration I go from 10 to the minus 6 to 10 to the minus 13. How many problems in life do you have that converge this nicely this quickly? That is pretty darn good. And I'll do it one more time, because why not, and now I'm down to machine precision. That's as good as it gets. So with that good initial guess, this is a really, really fast way to iterate. And if I plug that value back in, I get exactly the same answer as Davenport's q-method, to machine precision, but it's a much faster way to get there.

Good, so we have two ways to solve Wahba's problem: Davenport's q-method, which is an eigenvalue/eigenvector solution, and the QUEST method, which uses a good initial guess for the optimal eigenvalue, an iterative way to refine it, and then a closed-form answer to get the CRPs. There are some rotations and things you have to consider to implement it, but it's still way, way faster.
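For anyone who wants to reproduce this walkthrough outside of Mathematica, here is a minimal Python/NumPy sketch of the same sequence of steps: build the Davenport K matrix, start the optimal eigenvalue at the sum of the weights, polish it with a few Newton-Raphson steps on the determinant, and then recover the CRPs and quaternion in closed form. The function name quest_attitude, the sign and ordering conventions (scalar-first quaternion, B built from body-frame times reference-frame vectors), the finite-difference derivative (the lecture notebook uses the analytic derivative of the determinant), and the synthetic 30 degree example are my own illustrative choices, not taken from the lecture.

```python
import numpy as np

def quest_attitude(vb, vn, w, tol=1e-13, max_iter=10):
    """Sketch of the QUEST-style solution of Wahba's problem described above.

    vb : (N, 3) observed unit vectors in the body frame
    vn : (N, 3) the same unit vectors in the reference (inertial) frame
    w  : (N,)   observation weights
    Returns the converged optimal eigenvalue and a scalar-first quaternion.
    """
    # Attitude profile matrix B and the Davenport K matrix
    B = sum(wi * np.outer(bi, ni) for wi, bi, ni in zip(w, vb, vn))
    S = B + B.T
    sigma = np.trace(B)
    Z = np.array([B[1, 2] - B[2, 1],
                  B[2, 0] - B[0, 2],
                  B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[0, 0] = sigma
    K[0, 1:] = Z
    K[1:, 0] = Z
    K[1:, 1:] = S - sigma * np.eye(3)

    # f(s) = det(K - s*I): its largest root is the optimal eigenvalue.
    def f(s):
        return np.linalg.det(K - s * np.eye(4))

    lam = float(np.sum(w))    # QUEST initial guess: the sum of the weights
    h = 1.0e-6                # step for a numerical derivative of f
    for _ in range(max_iter):
        fprime = (f(lam + h) - f(lam - h)) / (2.0 * h)
        lam_new = lam - f(lam) / fprime      # Newton-Raphson update
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new

    # Closed-form CRP from the converged eigenvalue, then the quaternion.
    # (The CRP is singular for 180 deg rotations; that is what the extra
    # rotation tricks mentioned above have to handle.)
    p = np.linalg.solve((lam + sigma) * np.eye(3) - S, Z)
    beta = np.append(1.0, p) / np.sqrt(1.0 + p @ p)
    return lam, beta


if __name__ == "__main__":
    # Synthetic check: two reference vectors rotated 30 deg about the z-axis
    th = np.radians(30.0)
    C = np.array([[ np.cos(th), np.sin(th), 0.0],
                  [-np.sin(th), np.cos(th), 0.0],
                  [ 0.0,        0.0,        1.0]])
    vn = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
    vb = vn @ C.T                         # noise-free body-frame measurements
    lam, beta = quest_attitude(vb, vn, w=np.array([1.0, 1.0]))
    print(lam)    # ~2.0, i.e. the sum of the weights (no measurement noise)
    print(beta)   # quaternion of the 30 deg principal rotation
```

Because the initial guess already sits within measurement-noise distance of the true eigenvalue, a couple of Newton steps are typically enough to reach machine precision, which is the same convergence behavior shown in the notebook.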