[MUSIC] In this lecture, we'll continue talking about effect sizes, but now we'll focus on the r family of effect size measures. These include correlations, but also effect size measures for ANOVA such as eta squared. Now, if we look into the scientific literature, there's an example of an effect that can be described by a correlation. Namely, chocolate consumption in a country is related to the number of Nobel Prizes that are won. Let's take a look at the data. We can plot the number of Nobel Prize laureates on one axis and the amount of chocolate that's eaten in a country on the other axis, and there we see a strong positive correlation. In this case, I don't want to discuss how correlation does not imply causation; that's obvious. Here I just want to point at the size of the effect. As I mentioned before, it's very important to always interpret effect sizes, and in this case a correlation of 0.8 is huge. Such a big effect that it alone should make you wonder whether this is real data or not. Now, for correlations we can again specify benchmarks for small, medium and large effect sizes. A small correlation is 0.1, as visualized here. Again, we're using the visualizations from R Psychologist, a great website where you can play around with different visualizations for all sorts of basic statistical concepts, such as correlations and other effect sizes. So this is what a correlation of 0.1 looks like. A medium effect size is a correlation of 0.3, visualized here as well. For a large correlation, the benchmark is set at 0.5. Correlations range from zero, no effect, meaning there's no relation between the two variables, to either minus one, a perfect negative correlation, or plus one, a perfect positive correlation. If you want to get a feel for what correlations look like when you visualize them, I can really recommend the website guessthecorrelation.com, which turns recognizing correlations from scatterplots into a game.
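To get a concrete feel for those benchmarks yourself, you can simulate data with a known population correlation and look at it. The sketch below is a minimal illustration in Python, not from the lecture; the function name `simulate_correlated` is my own. It mixes one variable with independent noise so the pair has the chosen correlation:

```python
import numpy as np

def simulate_correlated(r, n, rng):
    # Build y as a mix of x and independent noise so that the
    # population correlation between x and y equals r.
    x = rng.standard_normal(n)
    noise = rng.standard_normal(n)
    y = r * x + np.sqrt(1 - r ** 2) * noise
    return x, y

rng = np.random.default_rng(42)
for r in (0.1, 0.3, 0.5):  # small, medium, large benchmarks
    x, y = simulate_correlated(r, 100_000, rng)
    print(f"target r = {r}, sample r = {np.corrcoef(x, y)[0, 1]:.3f}")
```

Scatterplotting x against y for each r reproduces the kind of clouds you see on the R Psychologist site and on guessthecorrelation.com.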
And if you play around with this website, you'll get a better understanding of what different effect sizes look like. Now, previously we talked about the Cohen's d measure of effect size. Both Cohen's d and correlations can be used in meta-analyses. Therefore, it might not be very surprising that you can convert a Cohen's d into a correlation r, and vice versa. This is the formula that you might use for this. Again, I'm not going to go into too much detail about this specific formula, but I just want to point out that it's possible to convert one effect size into the other, and there are different spreadsheets online if you want to accomplish this. Now, in ANOVAs or regressions we don't really use the simple correlation; we use R squared or, for ANOVA, eta squared, omega squared and epsilon squared. These are effect size measures that express the proportion of the total variance that's explained by an effect. R squared and eta squared are slightly biased effect size estimators, and omega squared and epsilon squared are less biased. They are not perfectly unbiased, but they are pretty close. It turns out that epsilon squared is the least biased effect size measure, although omega squared is more often used and more widely recommended. Okada (2013) suggests that this difference might be due to a faulty random number generator in one of the original papers that compared these different effect sizes. I thought it was funny to point out that even in this kind of simulation work, you can have reproducibility problems based on faulty random number generators. It doesn't matter too much; you can use both omega squared and epsilon squared, and the difference between the two is actually quite small. Conceptually, these effect size measures answer a simple question: how much does the relationship between x and y reduce the error in the data? Let's try to visualize this.
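As a concrete illustration of that conversion, here is a minimal sketch (the function names are my own, not from the lecture) using the standard two-group conversion r = d / sqrt(d² + a), where a = (n1 + n2)² / (n1 · n2) corrects for group sizes and reduces to a = 4 when the groups are equal:

```python
import math

def d_to_r(d, n1, n2):
    # Convert Cohen's d to a correlation r; the factor a
    # corrects for (possibly unequal) group sizes.
    a = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d ** 2 + a)

def r_to_d(r):
    # Inverse conversion, assuming equal group sizes (a = 4).
    return 2 * r / math.sqrt(1 - r ** 2)

# A medium Cohen's d of 0.5 with two groups of 50 each:
print(round(d_to_r(0.5, 50, 50), 3))
```

With equal groups the two functions are exact inverses of each other, which is a quick sanity check on any spreadsheet you find online.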
On the left, you can see a model where there is no relationship between the two variables: a horizontal line, so if there's an increase in one variable, there is zero increase in the other variable. We can calculate the sum of squares in this model where there's no relationship between the two variables; the red squares indicate this sum of squares. So this is the total sum of squares when there's no relationship between the two variables. On the right, we allow the model to have a relationship between the two variables. Now if there is an increase in one variable, we see there is also an increase in the other variable. We again calculate the sum of squares, now indicated by the blue squares. This is the sum of squares of the residuals. And R squared is simply one minus the sum of squares of the residuals divided by the total sum of squares: the proportion of the total error that the model removes. Now, if we use effect sizes for power analysis, you'll see that most statistical software asks for Cohen's f. Cohen's f can simply be calculated from eta squared using the formula that's on the screen here. You can also use statistical software to calculate Cohen's f from eta squared; a screenshot from G*Power, free statistical software, is on the right. It's important to recognize that if you want to calculate Cohen's f for a within-subject design and you're using G*Power, there's something you need to keep in mind. For within-subject designs, G*Power expects a different version of partial eta squared than statistical software such as SPSS or R would provide. Now, this is great free software, so I'm not complaining, but for some reason these researchers expect a different default than most other statistical software. This only matters if you perform power calculations for within-subject designs. In these situations, if you want to directly input eta squared, you need to click on the options box that's visible on the screen.
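The steps described above can be sketched in a few lines of code. This is a minimal illustration with made-up data (the function and variable names are my own): fit an ordinary least-squares line, compare its residual sum of squares (the blue squares) against the total sum of squares around the mean (the red squares), and report the proportion of error removed:

```python
import numpy as np

def r_squared(x, y):
    # Fit a simple linear regression y = a + b*x via least squares.
    b, a = np.polyfit(x, y, 1)
    residuals = y - (a + b * x)
    # Blue squares: error left over after allowing a relationship.
    ss_residual = np.sum(residuals ** 2)
    # Red squares: error of the flat, no-relationship model.
    ss_total = np.sum((y - np.mean(y)) ** 2)
    # Proportion of the total error the model removes.
    return 1 - ss_residual / ss_total

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
print(round(r_squared(x, y), 3))  # → 0.989
```

For this strongly linear toy data almost all of the red-square error disappears, so R squared is close to one.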
And if you click on this options box, you get a specific menu where you can say: I want to input partial eta squared as in SPSS. If you use different software such as R, this is also the option that you want to choose. If you forget to select this option, the power calculations might be off quite considerably. Now, very often in experimental designs you're interested in the partial variance explained by only one factor, for example a factor that you manipulated. In these situations, you calculate partial eta squared or, again, the less biased partial omega squared or partial epsilon squared. In a one-way ANOVA, eta squared is identical to partial eta squared. So if you're a little bit lazy, you don't have to report the small p subscript; it's easier to just say eta squared in these situations. Cohen has provided benchmarks to define small, medium and large Cohen's f values: a Cohen's f of 0.1 is small, 0.25 is medium and 0.4 is considered a large effect. We can use the formula that we saw earlier to calculate Cohen's f from eta squared or vice versa, and then we can also create benchmarks for eta squared: small, 0.0099, medium, 0.0588, and large, 0.1379. Now, this is a lot of digits, and this is precision that's not really warranted, but it's a consequence of directly converting the effect sizes that Cohen used into eta squared. Cohen talks about eta squared when he defines these benchmarks, but he actually means partial eta squared. So you should use these benchmarks for partial eta squared, not for eta squared itself. That's another important thing to keep in mind. Nowadays we also see more novel versions of eta squared. One very useful one, which removes this difference between within- and between-subject effect sizes and generalizes across within- and between-subject designs, is generalized eta squared, developed by Olejnik and Algina. I just want to point out that this exists.
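The conversion between (partial) eta squared and Cohen's f can be sketched as follows (a minimal illustration; the function names are my own). Note that Cohen's large-f benchmark of 0.4 is exactly what produces the many-digit eta squared value of 0.1379 quoted above, just as 0.1 and 0.25 produce 0.0099 and 0.0588:

```python
import math

def f_from_eta2(eta2):
    # Cohen's f from (partial) eta squared: f = sqrt(eta2 / (1 - eta2))
    return math.sqrt(eta2 / (1 - eta2))

def eta2_from_f(f):
    # Inverse: eta squared = f^2 / (1 + f^2)
    return f ** 2 / (1 + f ** 2)

for f in (0.1, 0.25, 0.4):  # small, medium, large
    print(f"f = {f}  ->  eta squared = {eta2_from_f(f):.4f}")
```

This prints eta squared values of 0.0099, 0.0588 and 0.1379, which is why those oddly precise benchmarks appear in the literature.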
If you're interested in reporting effect sizes, or if you use statistical software that automatically provides these generalized eta squared values, they're very useful and might be the best to report. In this lecture, we talked about the r family of effect sizes, which allow you to quantify the degree of association between two variables. I pointed out some peculiarities when you use eta squared in within-subject designs and perform power calculations in G*Power, and I provided some benchmarks that you can use to interpret these effect sizes when you report them in your results section. [MUSIC]