Types of Errors. Ideally, when we do a hypothesis test, the conclusion from that test, whether to reject the null hypothesis or fail to reject it, would always correctly reflect the population. However, that is not possible, since we are using sample data to make inferences about the population. Two different types of errors can be made in hypothesis testing. A Type I error is made when the null hypothesis is rejected even though it is actually true. The probability of this error is called alpha, and it is also known as the producer's risk. A Type II error is made when the null hypothesis is not rejected even though it should have been. The probability of this error is called beta, and it is also known as the consumer's risk.

Let's dig a little deeper into these errors. This table represents the four things that can happen when doing a hypothesis test. Ideally, if we knew the null hypothesis was false, of course we would reject it, and if we knew the null hypothesis was true, we would fail to reject it. But here we're basing our decisions on a sample, and it's very possible that the sample can lead us astray.

If the data from the sample is statistically far enough away from the value in the null hypothesis, then we reject the null hypothesis. If we reject the null hypothesis, only one of two things could happen: it could have been the correct decision, or we could have made a Type I error. Note that in practice we may never know which outcome we have, because we don't have the population data to check against. Also note that if we choose to reject the null hypothesis, it is not possible to make a Type II error, because that type of error can only be made when the decision is to fail to reject the null hypothesis. Further, the probability of making a Type I error is alpha, and that is something you get to choose before beginning your hypothesis test. We'll talk about that more later.
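The claim that alpha is the Type I error probability can be checked with a minimal simulation sketch (not from the lecture): repeatedly test a true null hypothesis and count how often we reject it. Every rejection here is a Type I error, so the rejection rate should land near the chosen alpha. This sketch assumes a one-sample, two-sided z-test with known sigma = 1; all variable names are illustrative.

```python
import math
import random

random.seed(42)

alpha = 0.05      # chosen Type I error probability
z_crit = 1.96     # two-sided critical value corresponding to alpha = 0.05
n = 30            # sample size per test
n_tests = 10_000  # number of simulated hypothesis tests

# H0: mu = 0 is TRUE here, so every rejection is a Type I error.
rejections = 0
for _ in range(n_tests):
    sample_mean = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
    z = sample_mean * math.sqrt(n)  # test statistic: (xbar - 0) / (sigma / sqrt(n))
    if abs(z) > z_crit:
        rejections += 1

type_i_rate = rejections / n_tests
print(f"Observed Type I error rate: {type_i_rate:.3f} (alpha = {alpha})")
```

Because the null is true in every simulated test, the observed rejection rate hovers around the alpha we chose, which is exactly what it means to say alpha is something you pick before the test.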
If the data from the sample is not far enough away from the value in the null hypothesis to cause us to statistically reject it, then we fail to reject the null hypothesis. If we fail to reject the null hypothesis, one of two things could happen: it could have been the correct decision, or we could have made a Type II error. Just like with the decision to reject the null hypothesis, in reality we may never know which outcome we have, because we don't have the population data to check against. Also note that whenever we fail to reject the null hypothesis, it is not possible to make a Type I error, because that type of error can only be made when the decision is to reject the null hypothesis. Further, the probability of making a Type II error is beta, and this can be calculated from other parameters in the test, though that calculation is beyond the scope of this class. Its complement, one minus beta, is called the power of the test: the probability of correctly rejecting the null hypothesis when it is false. Typically, if we decrease the probability of a Type I error, this increases the probability of a Type II error, and vice versa. Two ways to reduce both types of errors are increasing our sample size or decreasing the variability of our process. We'll look at examples in the next video.
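The effect of sample size on power can also be illustrated with a small simulation sketch (not from the lecture). Here the null hypothesis H0: mu = 0 is false (the true mean is 0.5), so failing to reject is a Type II error, and the fraction of correct rejections estimates the power, one minus beta. The two-sided z-test with known sigma = 1, the true mean of 0.5, and the sample sizes 10 and 50 are all illustrative assumptions.

```python
import math
import random

random.seed(7)

z_crit = 1.96    # two-sided critical value for alpha = 0.05
true_mean = 0.5  # the null value of 0 is wrong by this amount
n_tests = 5_000  # number of simulated hypothesis tests per sample size

def estimated_power(n):
    """Fraction of simulated tests that correctly reject the false H0."""
    rejections = 0
    for _ in range(n_tests):
        sample_mean = sum(random.gauss(true_mean, 1.0) for _ in range(n)) / n
        z = sample_mean * math.sqrt(n)  # (xbar - 0) / (sigma / sqrt(n))
        if abs(z) > z_crit:
            rejections += 1
    return rejections / n_tests  # estimates power = 1 - beta

power_small = estimated_power(10)
power_large = estimated_power(50)
print(f"estimated power with n = 10: {power_small:.2f}")
print(f"estimated power with n = 50: {power_large:.2f}")
```

The larger sample produces a much higher power (and therefore a much smaller beta) at the same alpha, which is the "increase the sample size" remedy described above.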