Measurement system analysis involves validating the measurement system. Lord Kelvin said that the grandest discoveries of science have been but the rewards of accurate measurement and patient, long-continued labor in the sifting of numerical results. At the highest level of the measurement system there are a handful of key questions we must address. These questions seem rather intuitive, even obvious, but the answers can be harder to pin down than we might suspect once we dig into the details of the process we are assessing.

Ultimately our aim is process effectiveness, or freedom from deficiencies. Are we capturing the correct data? Does the data reflect what is happening? Can we detect changes in the process? Can we identify the sources of measurement error? We are also concerned with system effectiveness, which refers to the validity and relevance of the measure to customer needs; these are measures external to the process that help us assess the outcome. Is the system stable over time? Is it capable of generating and measuring data to support your decisions? And can the measurement system be improved?

As for the possible sources of variation within the measurement system, there is variation from the operator, the person actually conducting the measurements; from the gage, the equipment used to measure; and from other, environmental sources.

Accuracy of an instrument is sustained through calibration. An instrument's performance can drift over time due to temperature changes or barometric pressure changes. It is necessary to compare the measuring device to a known standard through an operational definition. How often calibration is needed depends on how much drift can be tolerated over time. Calibration seeks to understand the mean, or average output, of the instrument, not the variation. Calibration helps us uncover measurement system bias.
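The calibration idea above can be sketched in a few lines: measure a known standard repeatedly and compare the mean reading against the reference value. The reference value and readings below are hypothetical numbers made up for illustration, not data from the lecture.

```python
# Hypothetical calibration check: estimate gage bias against a known standard.
reference = 10.000  # known standard value (e.g., a traceable gage block, in mm)

# Five repeated readings of the same standard with the same gage (illustrative).
readings = [10.012, 10.008, 10.011, 10.009, 10.010]

mean_reading = sum(readings) / len(readings)
bias = mean_reading - reference  # positive bias: the gage reads high

print(f"mean reading = {mean_reading:.4f}")
print(f"bias         = {bias:+.4f}")
```

Note that the calculation uses only the mean of the readings, not their spread, which is exactly the point made above: calibration addresses the average output of the instrument, not its variation.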
When a shift in the mean is detected, adjustments are made to re-center the instrument. The underlying relationship is that the observed value equals the true value plus the measurement error, so by minimizing the measurement error, the observed value becomes closer to the true value.

Accuracy is simply how close the agreement is between the mean output of the measuring device and a reference or standard value. Sometimes you will see accuracy denoted as a range or tolerance, such as plus or minus one-thousandth of an inch. Accuracy seeks to understand the mean of the instrument, not the variation.

Precision is different from accuracy: precision is the closeness of repeated readings to each other, not to the standard. Precision helps process owners understand random error, which is part of the measurement system variation, and it is used to calculate the P/T (precision-to-tolerance) ratio.

When measurement system variation is a prime contributor to total variation, detecting actual process variation is very difficult. You can see from this example that the blue curve represents the observed variation in the collected data. The red curve represents the measurement system variation from repeated readings with the measuring device. The green curve represents the actual process variation. The goal is for the observed variation to approach the green curve as closely as possible, so that we identify the true variation about the mean.

Here is a quick illustration of the difference between precision and accuracy, using the target analogy. On the first target the arrows were precise but not accurate, missing the bull's eye. The graphical view to the right shows the observed mean skewed to the right of the true value at the bull's eye. On the second target you see more arrows around the bull's eye, but they are not very precise. The graphical view to the right shows the mean aligned with the true mean; however, the variation is spread out, indicated by the flattened curve.
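The blue, red, and green curves are related through their variances: because independent sources of variation add in variance, the observed variance is the sum of the process variance and the measurement system variance. A minimal sketch, using made-up standard deviations and a hypothetical tolerance band; note also that the 6-sigma width in the P/T ratio is one common convention (some texts use 5.15 sigma instead):

```python
import math

# Illustrative standard deviations (e.g., in thousandths of an inch), not real data.
sigma_observed = 5.0     # spread of the collected data (blue curve)
sigma_measurement = 3.0  # spread from repeated readings of the gage (red curve)

# Variances add, so the actual process variation (green curve) is recovered as:
sigma_process = math.sqrt(sigma_observed**2 - sigma_measurement**2)
print(f"sigma_process = {sigma_process:.1f}")

# P/T ratio: measurement system spread as a fraction of the tolerance band (USL - LSL).
tolerance = 40.0  # hypothetical tolerance width, same units as sigma
pt_ratio = 6 * sigma_measurement / tolerance
print(f"P/T = {pt_ratio:.0%}")
```

The smaller the measurement variance, the closer the blue curve sits to the green curve, which is exactly the goal stated above.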
On the third target, the arrows are both precise and accurate, hitting the bull's eye. The graphical view shows the observed mean aligned with the true mean. It also shows minimal variability, thus a higher peak with less deviation or spread.

Stability is the drift in the absolute value of a gage over time. When we examine the difference in the averages of at least two sets of measurements obtained with the same gage on the same product at different times, we refer to this as gage stability. The purpose of a calibration system is to eliminate stability errors by identifying, adjusting for, and reducing these differences. Statistical process control techniques are used to monitor the stability of the process. Note that proper calibration can help eliminate stability error.

Gage bias is defined as the difference between the observed mean of the measurements and a known standard. NIST, the National Institute of Standards and Technology, maintains these standards. Bias is often used interchangeably with accuracy. Bias has a direction, such as plus 25 thousandths of an inch. Periodic calibration against the standard can correct bias.

Gage linearity measures how accurate the instrument, device, or gage is across the span, or measuring range, of the equipment. Periodic calibration can also eliminate linearity issues over a wide range of measurements. In essence, you can see from the illustration that an instrument can produce two different biases over its operating range, which we call linearity inconsistencies.
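A linearity study puts the two ideas above together: measure the bias at several reference values across the gage's range, then fit a line through bias versus reference. A slope near zero means the bias is constant (a single calibration offset fixes it); a nonzero slope means the bias changes across the range, which is a linearity problem. The reference values and biases below are hypothetical, chosen only to show the calculation:

```python
# Hypothetical linearity study: bias measured at five reference values (illustrative).
references = [2.0, 4.0, 6.0, 8.0, 10.0]
biases     = [0.01, 0.02, 0.03, 0.04, 0.05]  # bias grows with the reference value

# Ordinary least-squares fit of bias as a function of reference value.
n = len(references)
mean_x = sum(references) / n
mean_y = sum(biases) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(references, biases))
         / sum((x - mean_x) ** 2 for x in references))
intercept = mean_y - slope * mean_x

# Nonzero slope here: the gage exhibits a linearity error across its range.
print(f"slope = {slope:.4f}, intercept = {intercept:+.4f}")
```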