0:14

So, if you think about measurement systems, what you're concerned about is variation that might be coming from different places.

So when you think about a measurement system,

you should be thinking about, are people trained to measure it the same way?

Are the devices going to be calibrated such that they stay consistent over time?

Are the measurement instruments going to be such that they don't get affected

by who is using them, or by dropping them, or things like that?

And again, if you're thinking about perceptual measurements, you're thinking about surveys: are they going to mean the same thing to the same people?

Is the wording going to stay current when you're using the same battery of questions over time?

So you should be thinking about that as well.

And then the procedures to actually do the measurement: how are these actually going to be done?

Are they going to be done by taking five samples and taking the average? How exactly are you going to get that measurement done?

Because you're trying to get away from getting variation from the measurement

system itself, or errors from the measurement system itself.

What kind of standards do you have for the measurement system?

And then what kind of standards do you establish for saying something is beyond a particular threshold or not?

And then training people for using those measurement systems.

Training people to do this correctly, getting a demonstration.

So when you think about a criterion and a test, you should be able to demonstrate it, and somebody should be able to understand it and then be able to replicate it for you.

They should have the same meaning of that critical-to-quality characteristic, and of the criterion and the test that you do, so that when anybody does this, they should be able to do it the same way and get the same result.

If it's the same packet, they should be able to get the same result.

There shouldn't be variation in results from measuring the same thing over time.

So what are we concerned about when we are talking about measurements?

And some of this can be looked at based on data.

When we collect data, we can take a look at the variation in the data and we can parse that variation out: does it look like there's variation that's coming in because of measurement?

So what are some of the things that we're concerned about with measurements?

With measurements we're concerned about sampling bias.

So, did we choose particular samples at a particular time all the time?

And one way to get around it is have some sense of random sampling and

even including some kind of random sampling with some rules, like saying,

we take a random sample from the 8 o'clock batch or

we take a random sample for every hour's batch and so on and so forth.

So it depends on what it is that you're trying to look at.

If it's going to be a hypothesis test, then random samples are better.

And if it's going to be looking at a process' performance,

then you want it to be timed and make sure that you're getting a sample to represent

each of those times, or each of those days of the week and so on and so forth.

But you have to be aware of the idea that your

sampling can bring in some bias into your measurement.
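One way to sketch such a rule-based sampling plan, a random sample from every hour's batch, is the short Python example below; the batch labels, weights, and function name are hypothetical, made up for illustration:

```python
import random

def hourly_random_samples(batches, n_per_batch=1, seed=42):
    """Draw a fixed number of random units from each hourly batch.

    `batches` maps a batch label (e.g. "08:00") to the list of measured
    units produced in that hour. Sampling randomly *within* every batch
    keeps each time period represented without favouring any one unit.
    """
    rng = random.Random(seed)  # fixed seed so the sampling plan can be audited
    return {hour: rng.sample(units, n_per_batch) for hour, units in batches.items()}

# Hypothetical data: part weights (grams) from three hourly batches
batches = {
    "08:00": [10.1, 10.3, 9.9, 10.0],
    "09:00": [10.2, 10.4, 10.1, 9.8],
    "10:00": [9.7, 10.0, 10.2, 10.3],
}
plan = hourly_random_samples(batches, n_per_batch=2)
```

Every batch appears in the plan, but which units get measured within a batch is left to chance, which is the point of combining a time rule with random selection.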

And again, you could be thinking of perceptual measurements and how you sampled the people that you talked to, the customers or the employees, in order to get some sense of whatever you are measuring.

3:41

What we are concerned about is repeatability by the same appraiser.

So, if you give the same person the same object to weigh over time,

the weight should be the same, right?

If you give it to them today and you give it to them tomorrow and day after,

the weight should be the same.

That's what we mean by repeatability.

Reproducibility is, if you give me that object and if you give my kids that

object, if they're trained the same way, they should get the same measurement.

So it's multiple appraisers should get the same measurement.

That's what we call reproducibility.

We're concerned about linearity over range.

If you think about looking at any weighing instrument,

it usually says this is accurate up to a certain weight.

If you go beyond that weight, don't expect it to be accurate, right,

that's what they're trying to say.

So when you have a measuring scale that is meant for measuring your spices, it's going to be a small measuring scale, and it may not be calibrated to go beyond half a pound of weight.

So if you're trying to weigh something like a pound of flour on it, don't expect to get accuracy.

And that's what we mean by linearity over range: beyond its calibrated range, it's not going to be linear.

Within its range, it should be strictly linear if you're talking about a continuous measurement, and if it's not, then there's a problem with it.

Stability over time means that if you do the measurements over time, they should give you the same kind of result if you're talking about the same thing.

So, in order to work with measurements, in order to get a sense of the measurements and how good they are, you can do something called a Gage R&R, a repeatability and reproducibility analysis.

And this can be based on getting data from multiple appraisers for multiple objects, and getting them to do repeated measurements of those same objects.

We collect that data and start to look at whether there was any variation in the data, and then you look at whether that could be coming from the training of people, or from the instrument, or from the instructions that you're giving people.

So Gage R&R is something that you can use based on data to look at these kinds of

questions.
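A full Gage R&R splits the variance formally with an ANOVA; purely as a rough sketch of the idea, the hypothetical study below estimates repeatability as the average within-cell variance (same part, same appraiser) and reproducibility as the variance across appraiser means:

```python
from statistics import mean, pvariance

def simple_gage_rr(measurements):
    """Crude variance split for a crossed gage study.

    `measurements[part][appraiser]` is a list of repeated readings.
    Returns (repeatability, reproducibility):
      - repeatability: average variance within a (part, appraiser) cell
      - reproducibility: variance of the per-appraiser overall means
    This is a simplification, not the formal ANOVA-based method.
    """
    cells = [reads for ops in measurements.values() for reads in ops.values()]
    repeatability = mean(pvariance(reads) for reads in cells)

    appraisers = sorted({a for ops in measurements.values() for a in ops})
    appraiser_means = [
        mean(r for ops in measurements.values() for r in ops[a])
        for a in appraisers
    ]
    reproducibility = pvariance(appraiser_means)
    return repeatability, reproducibility

# Hypothetical study: 2 parts, 2 appraisers, 3 repeated readings each
data = {
    "part1": {"A": [10.0, 10.1, 9.9], "B": [10.2, 10.3, 10.1]},
    "part2": {"A": [12.0, 12.1, 11.9], "B": [12.2, 12.1, 12.3]},
}
repeat_var, reproduce_var = simple_gage_rr(data)
```

In this made-up data, appraiser B reads consistently higher than appraiser A, which shows up in the reproducibility term, the kind of signal that would point you back at training or instructions.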

Now when we turn to perceptual scales, we can do a similar kind of analysis,

and here I have an example of an employee satisfaction item.

So this is a single item that says my supervisor

encourages innovation by tolerating failure.

Respond to this on a scale of 1 to 5, from Strongly Disagree to Strongly Agree, with a 9 for Not Applicable.

So, here are the options that you have, how would you test this?

You would test this based on test-retest reliability: you give multiple administrations of the same scale to respondents at different times.

Or you could be testing people who are working under exactly the same conditions, where you are trying to get inter-rater reliability.

You're measuring whether two people would be giving the same response about

the same thing when you expect it to be the same.

So that's how you could be testing this.

And again you could do data analysis for this.

You can collect the data and

do some assessment of the variation that you're seeing in the data

to see where that variation is coming from or might be coming from.
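As a minimal sketch of how such a reliability check might look (the responses and the function name below are made up), this computes the fraction of items that receive an identical score on two administrations:

```python
def agreement_rate(first, second):
    """Fraction of items that received identical scores on two
    administrations of the same item (or from two raters).

    A crude consistency check; more formal options include the
    test-retest correlation or Cohen's kappa.
    """
    if len(first) != len(second):
        raise ValueError("response lists must be the same length")
    matches = sum(a == b for a, b in zip(first, second))
    return matches / len(first)

# Hypothetical 1-5 responses to the same survey item at two points in time
time1 = [4, 5, 3, 4, 2, 5, 4, 3]
time2 = [4, 5, 3, 3, 2, 5, 4, 4]
agreement = agreement_rate(time1, time2)  # 6 of 8 items match
```

The same function works for inter-rater reliability: pass one rater's scores as `first` and the other rater's scores for the same objects as `second`.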

So, in sum, what can we say about the quality of process data?

We can be looking at the validity of the data: it's measuring what it's supposed to measure, it's measuring something that we expect it to be measuring, and it's also giving us a comprehensive measurement of something.

So it's something that's useful for

us to make an assessment about the product or the process.

So that's what we mean by validity.

It's reliable, it's consistent and accurate, it's sensitive to changes.

The measurement should be sensitive to changes; it should be calibrated to a level of granularity at which it can pick up those changes.

Otherwise, you're not going to be able

to make out the difference between two different levels when you are supposed to.

And it should be accessible.

It should be accessible to the people who are going to do the measurements, and

who are going to use it in a timely fashion.

So it should be accessible and be able to be used in a timely fashion.

So the simpler it is, the more people can use it.

And the simpler it is, the more quickly you can get that measurement and

the results from that measurement, for it to be used in a timely fashion.

Finally, in terms of using data for six sigma, we've talked about this earlier with the idea of process control charts.

So what are process control charts? They look at the inherent potential of the process.

So we can use statistical process control charts to figure out,

what is the common cause variation in the process and

what is beyond that, and what is the special cause variation?

So we can look at the performance of the process based on process control charts.

And then once we've established that a process is in statistical control

using SPC, using statistical process control charts,

we can do process capability analysis or Cp and Cpk ratio calculations.

And these are meant to take the voice of the process and

compare it with the voice of the customer.

How does all this relate back to the idea of 6-sigma?

So, once you learn about calculating these process capability ratios, the Cp and the Cpk values, how can you relate this back to the idea of 6-sigma?

A Cp and Cpk value of exactly 1 indicates that it's a 3-sigma process.

What that's saying is that, if you were to look at the distribution of the process and go plus or minus three standard deviations from the mean, three standard deviations on either side, you reach the upper and the lower limits of the specifications that are given to you by the customer.

So plus or minus three standard deviations of the process matches the upper and the lower specification limits that are given to you by the customer.

So a 3-sigma level process means a Cp or Cpk value of 1, and a Cp and Cpk value of 2 indicates a process that's working at a 6-sigma level.

So when we say 6-sigma level performance, we're in fact saying that it should be at a Cp and Cpk level of 2.

And remember, we're adjusting for that 1.5-sigma shift, the addition of that 1.5 sigma that we've talked about earlier in different sessions in this particular course.
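The arithmetic behind those ratios can be sketched in a few lines of Python; the spec limits and process parameters below are made up for illustration:

```python
def cp_cpk(mu, sigma, lsl, usl):
    """Process capability ratios.

    Cp compares the width of the customer's spec window to the process's
    natural 6-sigma spread; Cpk additionally penalises a process whose
    mean is off-center relative to the specs.
    """
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Centered process whose +/-3 sigma spread exactly fills the spec window:
three_sigma = cp_cpk(mu=100, sigma=2, lsl=94, usl=106)  # Cp = Cpk = 1
# Same specs with half the variation, a 6-sigma level process:
six_sigma = cp_cpk(mu=100, sigma=1, lsl=94, usl=106)    # Cp = Cpk = 2
```

Note that these formulas capture the ratio itself; the 1.5-sigma shift mentioned above is a separate adjustment applied when translating sigma levels into long-term defect rates.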