>> The sum of them [INAUDIBLE] >> You're thinking of a different class.

That's estimation theory, where the probabilities of all the possible cases have to add up to one, yeah.

Not the case here.

Andre, help him out, weights, what comes to mind?

>> [INAUDIBLE] the ratios.

>> The ratios, right.

So if the sensors are equally good, you can make them 10 and 10.

We typically use 1 and 1 just because 1's such a simple number to have, right?

It could be 0.1 and 0.1.

As long as they're equal, the math works out.

You get the same answer, and you can try this quickly when you do these little

tasks and try to solve some of these problems.

You can put in weights of 10, weights of 1.

You should get back exactly the same answer with Davenport's Q-method or

anything that solves Wahba's problem.

Yes, Matt?

>> So that's because it's [INAUDIBLE] of all this,

so you can divide by the biggest one and just do it all on one side?

>> Essentially, yeah, but also the weighting balances out so that it doesn't shift the answer.

You're looking for the extreme point of this cost function, typically the minimum in this case, right, and this weight is just going to scale it all.

I could take this cost function and multiply it times 50, and

it's just going to scale things up.

The extremums will happen at exactly the same place.

That's one way to think of this, right?

That's why whatever weight you come up with, I could take this cost function and

multiply it times any positive scalar at least.

And I'm not changing where minimums will occur.

I'm just stretching it out for some reason, that's all.
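A quick numerical sanity check of that scaling argument (a sketch added here, not from the lecture itself; the sample cost function is made up for illustration): scale any cost by a positive constant and the minimizer stays put.

```python
import numpy as np

# Evaluate a stand-in cost function J on a grid of candidate values.
x = np.linspace(-3.0, 3.0, 1001)
J = (x - 0.7) ** 2 + 2.0           # minimum sits near x = 0.7

# Multiply the cost by any positive scalar -- say 50, as in the lecture.
J_scaled = 50.0 * J

# The minimizer lands at exactly the same grid point either way.
assert np.argmin(J) == np.argmin(J_scaled)
```

The scalar just stretches the cost surface vertically; it cannot move where the extremum sits.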

So good, this was Wahba's problem.

And yes, Matt, you're right.

Davenport's Q-method solves this.

Now let's see, Bryan.

How does Davenport's Q-method solve this?

Just give me a quick highlight.

>> Changes it into an eigenvalue problem.

>> Right, but do we solve it in terms of the DCM?

This cost function is written in terms of the DCM right now.

>> No.

>> What do we use?

What attitude coordinates?

>> Euler parameters.

>> Euler parameters, right.

So q here, that's quaternion notation, at least that's what it is.

q within our class is also sometimes used for CRPs, in fact.

In QUEST, you will see CRPs appearing, so just be careful with the notation there.

So yeah, so Davenport maps it over, changes this cost function, rewrites it into a nice quadratic form in terms of the quaternion.

And there was this 4 by 4 K matrix, as Bryan already said, okay.

So with this K matrix, the betas end up being eigenvectors of that.

That's where the extremums happen, so we did a constrained optimization.

Instead of minimizing this, we were able to rewrite it.

There was a separate function, g, that we had to maximize.

I'll just refer to your notes on that, right?
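Since the speaker defers to the notes here, a compact sketch of that step (the standard Davenport development, not verbatim from this lecture): adjoin the unit-norm constraint on the Euler parameters with a Lagrange multiplier, and the stationarity condition becomes the eigenvalue problem.

```latex
g(\bar\beta) = \bar\beta^{T}[K]\bar\beta
  \quad \text{subject to} \quad \bar\beta^{T}\bar\beta = 1
\\
g'(\bar\beta) = \bar\beta^{T}[K]\bar\beta
  - \lambda\left(\bar\beta^{T}\bar\beta - 1\right)
\\
\frac{\partial g'}{\partial \bar\beta} = 0
  \;\Rightarrow\; [K]\bar\beta = \lambda\bar\beta
\\
g = \bar\beta^{T}[K]\bar\beta = \lambda\,\bar\beta^{T}\bar\beta = \lambda
```

So at each eigenvector, g evaluates to the corresponding eigenvalue, which is why maximizing g means picking the largest one.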

Which of these, if you have a 4 by 4, we have 4 eigenvectors, 4 eigenvalues.

Which one of these four is the optimal answer, Marion?

>> The maximum one.

>> The maximum one because we had to maximize this g.

This g in the end, you plug it in and it just ended up being lambda.

There's a few steps that we had to do there, right?

So that's really nice, out of an infinity of possible attitudes,

we narrow down to four.

And then, by looking at the fact that we have to maximize g, we come up with, no, it's just the one that's the biggest.

That's the key.

And now we can do that.

Good, that's Davenport's method.
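The whole method discussed above can be sketched in a few lines (a minimal NumPy illustration of the standard Davenport q-method; the function name, test rotation, and observation vectors are my own choices, not from the lecture):

```python
import numpy as np

def davenport_q(b_vecs, r_vecs, weights):
    """Davenport's q-method: return the optimal quaternion (Euler
    parameters, scalar first) given body-frame observations b_vecs,
    reference-frame directions r_vecs, and sensor weights."""
    # Attitude profile matrix B = sum_i w_i * b_i * r_i^T
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
    S = B + B.T
    sigma = np.trace(B)
    Z = np.array([B[1, 2] - B[2, 1],
                  B[2, 0] - B[0, 2],
                  B[0, 1] - B[1, 0]])
    # Assemble the 4 by 4 K matrix from the lecture.
    K = np.zeros((4, 4))
    K[0, 0] = sigma
    K[0, 1:] = Z
    K[1:, 0] = Z
    K[1:, 1:] = S - sigma * np.eye(3)
    # Optimal quaternion = eigenvector of K belonging to the largest
    # eigenvalue -- "the maximum one" from the discussion above.
    eigvals, eigvecs = np.linalg.eigh(K)
    return eigvecs[:, np.argmax(eigvals)]

# Two noise-free observations of a known 90-degree rotation about z.
BN = np.array([[0.0, 1.0, 0.0],
               [-1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])   # true DCM, reference frame to body
r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
b1, b2 = BN @ r1, BN @ r2

q_a = davenport_q([b1, b2], [r1, r2], [1.0, 1.0])
q_b = davenport_q([b1, b2], [r1, r2], [10.0, 10.0])
# Equal weights of 1 or 10 give the same attitude (up to quaternion sign).
assert np.allclose(abs(q_a @ q_b), 1.0)
```

Scaling both weights by the same factor just scales K, which rescales the eigenvalues but leaves the eigenvectors, and hence the recovered attitude, untouched.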

It's a very elegant method, but what was the big challenge with this one?

Why don't we typically fly this one, Nathan?

>> Because you don't want to solve an eigenvalue problem.

>> Exactly, that's at the heart of this one.

So we don't want to solve eigenvalues, so therefore QUEST, right?