0:15

Let's see if we can get something on Davenport's Q-method.

What can you tell me about that procedure?

>> [INAUDIBLE] >> All the harassment paid off, there we go.

>> [INAUDIBLE] >> Are you asking that as a question?

Are you asking me?

>> Yeah, I'm asking. >> What is it?

>> I know you're minimizing; you're trying to minimize the cost function.

>> Yes.

>> I can't remember if it's exactly the same as Wahba's problem.

>> Nick, since you were kind of sleepy last time, too.

>> Yeah, so it's some crazy formulation based on the combination of K's.

It's like, I forget exactly how the equation-

>> But you're definitely on the right track.

It was Wahba's problem.

Right, remember estimation theory: you've got some observation.

You may have multiple ones.

An observation is always a unit vector, right?

There's a lot of information just in those little scribbles.

Now what you want to find is the estimated body attitude that takes the same observation in a known frame, which means you know your environment.

You know where you are.

So this is what the magnetic field should be doing at this location, right?

That, times this matrix, should be equal to this.

And to turn it into a cost function that we're trying to minimize, you take one minus the other.

Now this is still a vector.

I want a scalar cost function.

So we basically do the norm squared of that 3 by 1 matrix, pretty easy.

So that's this transposed with itself.
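As a concrete sketch of that scalar cost, here is one common form of Wahba's cost function in NumPy; the 1/2 factor and the function name `wahba_cost` are just conventions chosen for this illustration:

```python
import numpy as np

def wahba_cost(C, b_obs, n_ref, weights):
    """Wahba's cost: J = 1/2 * sum_i w_i * || b_i - C @ n_i ||^2.

    C      : 3x3 estimated DCM mapping reference-frame vectors into the body frame
    b_obs  : measured unit vectors in the body frame
    n_ref  : the same directions as known unit vectors in the reference frame
    """
    return 0.5 * sum(w * np.linalg.norm(b - C @ n) ** 2
                     for w, b, n in zip(weights, b_obs, n_ref))
```

Note that scaling every weight by the same positive constant just scales J; the minimizing attitude is unchanged, which is the point made below about 1-and-1 versus 10-and-10 weights.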

Â 2:39

>> The sum of them [INAUDIBLE] >> You're thinking of a different class.

That's estimation theory, where the probabilities of all the possible cases have to add up to one, yeah.

Not the case here.

Andre, help him out: weights, what comes to mind?

>> [INAUDIBLE] the ratios.

>> The ratios, right.

So if the sensors are equally good, you can make them 10 and 10.

We typically use 1 and 1 just because 1 is such a simple number to have, right?

It could be 0.1 and 0.1.

As long as they're equal, the math works out.

You get the same answer, and you can try this quickly when you do these little tasks and try to solve some of these problems.

You can put in weights of 10, weights of 1.

You should get back exactly the same answer with Davenport's Q-method or anything that solves Wahba's problem.

Yes, Matt?

>> So that's because it's [INAUDIBLE] of all this, so you can divide by the biggest one and just do it all on one side?

>> Essentially, yeah, but also the weighting just balances out so that it doesn't shift the answer.

You're looking for the extreme point of this cost function, typically the minimum in this case, right, and this weight is just going to scale it all.

I could take this cost function and multiply it times 50, and it's just going to scale things up.

The extremums will happen at exactly the same place.

That's the way to think of this, right?

That's why, whatever weight you come up with, I could take this cost function and multiply it times any positive scalar, at least, and I'm not changing where the minimums will occur.

I'm just stretching it out for some reason, that's all.

So good, this was Wahba's problem.

And yes, Matt, you're right.

Davenport's Q-method solves this.

Now let's see, Bryan.

How does Davenport's Q-method solve this?

Just give me a quick highlight.

>> Changes it into an eigenvalue problem.

>> True. Do we solve it in terms of the DCM?

This cost function is written in terms of the DCM right now.

>> No.

>> What do we use?

What attitude coordinates?

>> Euler parameters.

>> Euler parameters, right.

So the q, that comes from the quaternion notation, at least; that's where it is.

q within our class is also sometimes used for CRPs, in fact.

In QUEST, you will see CRPs appearing, so just be careful with the notation there.

So yeah, Davenport maps it over, changes this cost function, and rewrites it as a nice quadratic form in terms of the quaternion.

And there was this 4 by 4 K matrix, as Bryan already said, okay.

So with this K matrix, the betas end up being eigenvectors of it.

That's where the extremums happen, so we did a constrained optimization.

Instead of minimizing this, we were able to rewrite it.

There was a separate function, g, that we had to maximize.

I'll just refer you to your notes on that, right?

Which of these? If you have a 4 by 4, we have 4 eigenvectors and 4 eigenvalues.

Which one of these four is the optimal answer, Marion?

>> The maximum one.

>> The maximum one, because we had to maximize this g.

This g, in the end, you plug it in and it just ended up being lambda.

There were a few steps that we had to do there, right?

So that's really nice: out of an infinity of possible attitudes, we narrow it down to four.

And then, by looking at the fact that we have to maximize g, we conclude no, it's just the one that's the biggest.

That's the key.

And now we can do that.

Good, that's Davenport's method.
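The procedure just walked through can be sketched compactly. This assumes the scalar-first quaternion convention and the usual K matrix built from B = sum of w_i * outer(b_i, n_i); treat it as an illustration rather than the course's reference code:

```python
import numpy as np

def davenport_q(b_obs, n_ref, weights):
    """Davenport's q-method: the optimal quaternion (scalar first) is the
    eigenvector of the 4x4 K matrix belonging to its LARGEST eigenvalue."""
    B = sum(w * np.outer(b, n) for w, b, n in zip(weights, b_obs, n_ref))
    S = B + B.T
    sigma = np.trace(B)
    Z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[0, 0] = sigma
    K[0, 1:] = Z
    K[1:, 0] = Z
    K[1:, 1:] = S - sigma * np.eye(3)
    vals, vecs = np.linalg.eigh(K)       # K is symmetric, eigh is safe here
    beta = vecs[:, np.argmax(vals)]      # of the 4 candidates, take the biggest lambda
    return beta / np.linalg.norm(beta)

def quat_to_dcm(q):
    """DCM [BN] from a scalar-first quaternion, so that b = [BN] @ n."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 + q0*q3), 2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3), q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 + q0*q1)],
        [2*(q1*q3 + q0*q2), 2*(q2*q3 - q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3]])
```

Running it with equal weights of 1 and equal weights of 10 returns the same attitude, consistent with the weight discussion above.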

It's a very elegant method, but what was the big challenge with this one?

Why don't we typically fly this one, Nathan?

>> Because you don't want to solve an eigenvalue problem.

>> Exactly, that's at the heart of this one.

So we don't want to solve for eigenvalues, and therefore QUEST, right?

Â 6:32

>> Less computing power.

>> True, that's definitely the direction of this, right?

That's why we went after QUEST.

So less computing means somehow we have to avoid the eigenvalue and eigenvector evaluation, that kind of thing.

Anybody remember what was the key insight with the QUEST algorithm?

>> The root solving method.

>> We made it into a root solving method, true.

How did we get there?

Matt, what was the insight that gets us to the root solving method?

>> That the sum of the weights is close to that largest eigenvalue.

>> Yes, so if you look at the cost function J, we can rewrite it as the sum of the weights, I think, minus this g.

And the g, we know, is going to be lambda optimal.

So you can write that lambda optimal is equal to the sum of the weights minus J.

And this J is typically almost zero, hopefully.

It's small, right, because hopefully you don't have sensors that are 60 degrees off, just a fraction.

So you should get reasonably close, but we want to get as good as we can, right?

So with that insight, and we saw numerical examples, you could kind of, to first order, say, well, this is just it.

Now, this would not give us the true answer.

The sum of the weights is not equal to the optimal eigenvalue, but it's close.

So now we really have to solve an iterative problem.

To find the eigenvalues of a 4 by 4 matrix, you take the matrix minus s times the 4 by 4 identity.

And then you take a determinant.
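That characteristic determinant is what QUEST iterates on. A minimal sketch follows, with a Newton iteration seeded at the sum of the weights; the finite-difference slope is used only for brevity (QUEST itself manipulates an explicit quartic), and the function name is illustrative:

```python
import numpy as np

def quest_lambda(K, weights, tol=1e-12, max_iter=50):
    """Find the largest eigenvalue of the 4x4 K matrix by Newton iteration on
    f(s) = det(K - s*I), seeded with s0 = sum of the weights: for small
    measurement errors the optimal lambda sits just below that sum."""
    s = float(np.sum(weights))
    f = lambda x: np.linalg.det(K - x * np.eye(4))
    for _ in range(max_iter):
        h = 1e-6
        df = (f(s + h) - f(s - h)) / (2 * h)   # central-difference slope of f
        step = f(s) / df
        s -= step
        if abs(step) < tol:
            break
    return s
```

Because the seed is already so close to the root, only a couple of Newton steps are typically needed, which is exactly why this beats a full eigendecomposition on flight hardware.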

Â 8:35

>> I'll guess CRPs, but I don't remember.

>> Yes, it was CRPs.

We ended up dividing those betas by beta naught.

The last three, divided like that, give you basically the 3D set.

You can do a unique inverse, and you come up with CRPs, which is good.

But fundamentally, CRPs can go singular if the attitude is 180 degrees.

And the way you avoid that is you don't just have one body frame.

You have a second body frame, typically just twisted 90 degrees about some axis.

So if one of them is singular at 180, the other one is fine.

And then you just use 90-degree additions and subtractions to always reconstruct, in a non-singular way, what the attitude is.

And then you can map to the quaternions again.

So there are ways around that, but man, it's very, very fast.
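That two-frame trick can be sketched as follows, assuming scalar-first quaternions and a Hamilton-style product; picking the 90-degree shift axis from the largest vector component is one reasonable choice among several, and the name `safe_crp` is illustrative:

```python
import numpy as np

def quat_mult(a, b):
    """Hamilton product of scalar-first quaternions (composition of rotations)."""
    a0, av = a[0], np.asarray(a[1:], dtype=float)
    b0, bv = b[0], np.asarray(b[1:], dtype=float)
    return np.concatenate(([a0 * b0 - av @ bv],
                           a0 * bv + b0 * av + np.cross(av, bv)))

def safe_crp(beta, eps=0.1):
    """CRPs are q_i = beta_i / beta_0, which blows up near 180 deg (beta_0 ~ 0).
    Near that singularity, first compose with a 90-deg rotation about the body
    axis of the largest vector component, then divide.
    Returns (crp_vector, shifted_flag)."""
    beta = np.asarray(beta, dtype=float)
    if abs(beta[0]) > eps:
        return beta[1:] / beta[0], False
    axis = int(np.argmax(np.abs(beta[1:])))   # guarantees a safe scalar part
    shift = np.zeros(4)
    shift[0] = np.cos(np.pi / 4)
    shift[axis + 1] = np.sin(np.pi / 4)       # 90 deg about that body axis
    beta_shifted = quat_mult(shift, beta)
    return beta_shifted[1:] / beta_shifted[0], True
```

The flag tells the caller which of the two body frames the returned CRPs describe, so the 90-degree twist can be added back when reconstructing the attitude.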

Good, this also solves Wahba's problem.

What about OLAE, the optimal linear attitude estimator?

Does this one solve Wahba's problem?

Â 9:29

No, it's a different formulation.

So in fact, this one used the Cayley transform to rewrite it.

And the key thing here is you can rewrite the estimation problem as a perfectly linear estimation problem.

You still need at least two observations, but I can do n of them, I can add weights; I have all the other features.

But it's rewritten as a different optimization, basically.

And also, estimating here with the Cayley transform, we're getting q tildes, which are the tilde versions of the CRPs.

So we again get CRPs.

But using the same tricks as with the QUEST methods, you could use sequential rotations to have two alternate body frames.

And one of them is always going to be non-singular.

And then reconstruct a non-singular measure in the end, like a DCM, MRPs, quaternions, like that.

But that was, roughly, the OLAE that we have.
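The linear structure mentioned above can be sketched like this, using the Cayley-transform identity (b - n) = (b + n) x q for the CRP vector q; this is an illustration of the idea, not the full published OLAE algorithm, and it assumes the attitude stays away from the 180-degree CRP singularity:

```python
import numpy as np

def skew(v):
    """Cross-product (tilde) matrix: skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def olae(b_obs, n_ref, weights):
    """Linear attitude estimate: the Cayley transform turns each vector pair
    into the LINEAR relation (b - n) = skew(b + n) @ q, with q the CRP vector.
    Stack all pairs and solve one weighted least-squares problem."""
    S = np.vstack([skew(b + n) for b, n in zip(b_obs, n_ref)])
    d = np.concatenate([b - n for b, n in zip(b_obs, n_ref)])
    sqw = np.sqrt(np.repeat(weights, 3))          # one weight per 3-row block
    q, *_ = np.linalg.lstsq(S * sqw[:, None], d * sqw, rcond=None)
    return q
```

Because the unknown q appears linearly, no eigenvalue problem and no iteration are needed; that is the sense in which OLAE is "optimal linear," at the price of estimating CRPs rather than quaternions directly.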
