The course presents an overview of the theory behind the evolution and dynamics of biological diversity, and of methods for calculating and estimating diversity. We will become familiar with the major alpha, beta, and gamma diversity estimation techniques.
Understanding how biodiversity evolved and is evolving on Earth and how to correctly use and interpret biodiversity data is important for all students interested in conservation biology and ecology, whether they pursue careers in academia or as policy makers and other professionals (students graduating from our programs do both). Academics need to be able to use the theories and indices correctly, whereas policy makers must be able to understand and interpret the conclusions offered by the academics.
The course has the following expectations and results:
- covering the theoretical and practical issues involved in biodiversity theory,
- conducting surveys and inventories of biodiversity,
- analyzing the information gathered,
- and applying their analysis to ecological and conservation problems.
Needed Learner Background:
- basics of Ecology and Calculus
- good understanding of English

Statistics applied to the analysis of biodiversity

The last module (No. 6) of this course will be dedicated to statistics applied to the analysis of biodiversity. We will see how to use the information gathered in the previous modules to assess statistical significance. We will explore parametric and non-parametric tests, the useful chi-square test, the correct application of correlation and regression analysis, and some hints about further techniques such as the analysis of variance (ANOVA).

Ph.D., Associate Professor in Ecology and Biodiversity, Biological Diversity and Ecology Laboratory, Bio-Clim-Land Centre of Excellence, Biological Institute

[MUSIC]

Hi guys, welcome to the 28th lecture of the course Biological Diversity Theories,

Measures and Data Sampling Techniques.

Today I will talk to you about the fifth part of statistics applied to

biological diversity.

Last time we saw parametric tests and how to use them.

This time we will see how to use non-parametric tests and how to compare medians.

Non-parametric, or distribution-free, tests are tests that do not require

special conditions in order to be applied: the data do not need to follow

a normal distribution, to be measured on an interval scale, or to be related in any particular way.

However, when the distribution is close to the normal distribution,

parametric tests are more efficient estimators.

These tests are particularly well suited to comparing two or more samples.

One of the most used non-parametric tests is the Mann-Whitney U test.

This test allows the analysis of ordinal data

to compare the medians of two independent samples.

To calculate the test, you need to proceed as follows.

First, establish the null hypothesis,

that the two samples come from the same population.

Then list all the observed values of both samples together in ascending order,

assigning each a rank from 1 to n, where n is the total number of observations.

Then highlight, in the list, only the values and

the ranks of one of the two samples.

Then add up separately the ranks of the highlighted values and

those of the non-highlighted values,

obtaining the rank sum R1 for the first sample and R2 for the second.

And then you need to calculate the U statistic for

both samples, using the formula shown in this picture.

You then need to verify that U1 plus U2 is equal to n1 multiplied by n2.

Then you need to choose the smaller of the two values of U and

compare it with the critical value in the table for the corresponding values of n1 and n2.

If the U value calculated is less than the critical value in the table,

you can reject the null hypothesis and

conclude that there is a significant difference between the medians.

Please pay attention to the fact that this is one of the few tests where,

to reject the null hypothesis, the calculated

value must be lower than the critical one in the table, and not higher.

When the samples have more than 20 sampling units,

it is necessary to convert the smaller value of U into the standard statistic z,

bringing the test back to the normal curve.

To do this, we apply the following formula: z = (U − n1n2/2) / √(n1n2(n1 + n2 + 1)/12).

If the calculated value of z after the conversion of U exceeds

the critical value of 1.96, the null hypothesis H0 can be

rejected at P = 0.05, and there is a significant difference.

If it also exceeds the value of 2.58, the null hypothesis may be rejected at P = 0.01,

and the difference between the medians of the two samples is highly significant.
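The Mann-Whitney procedure described above can be sketched in a few lines of Python with SciPy. This is a minimal illustration, not part of the course materials: the two samples below are invented, and note that SciPy uses a complementary convention for U, so its statistic may differ from the smaller U computed by hand while giving the same p-value.

```python
import numpy as np
from scipy.stats import mannwhitneyu, rankdata

# Hypothetical data: two independent samples (e.g. species counts per plot).
a = [12, 15, 9, 22, 18, 7, 14]
b = [8, 11, 5, 13, 6, 10]
n1, n2 = len(a), len(b)

# Rank all observations of both samples together (ties get average ranks),
# then sum the ranks belonging to each sample.
ranks = rankdata(np.concatenate([a, b]))
R1, R2 = ranks[:n1].sum(), ranks[n1:].sum()

# U statistics for both samples, with the consistency check from the lecture.
U1 = n1 * n2 + n1 * (n1 + 1) / 2 - R1
U2 = n1 * n2 + n2 * (n2 + 1) / 2 - R2
assert U1 + U2 == n1 * n2

# The smaller U is compared with a critical-value table for small samples;
# for n > 20 it is converted to z with the formula from the lecture
# (shown here on small samples only to illustrate the computation).
U = min(U1, U2)
z = (U - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)

# SciPy computes a p-value directly instead of using tables.
stat, p = mannwhitneyu(a, b, alternative='two-sided')
```

If |z| exceeds 1.96 the difference is significant at P = 0.05; here the small p-value from SciPy plays the same role as the table lookup.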

Another useful non-parametric test when you want to compare two samples

is the Kolmogorov-Smirnov test.

The Kolmogorov-Smirnov test allows the comparison of quantitative data, and

to calculate it you need to proceed as follows.

First of all,

you divide the data of each sample into frequency classes of equal width.

In the example that I show you in this table,

you can divide the heights into classes of width five:

the first class is 1–5, the second one is 6–10,

the third one is 11–15, and the fourth one is 16–20.

For each class you attribute the cumulative frequencies of the first and

second sample.

So you add up how many values of each sample belong to each class.

So in the example, you will see that for class 1–5 of sample A

there are five values, while in sample B there are three.

Looking at the table, you can see how these values are distributed.

Then you need to calculate the difference between the frequencies class by class,

that is, subtract the number of values of B from that of A for each class.

Then take the biggest difference in absolute value as D; in our example it is three.

You then find the appropriate table for

the probability distribution of the Kolmogorov-Smirnov test.

And if D, the value you calculated, exceeds the critical value

in the table at the chosen level of significance, P = 0.05 or

P = 0.01, the difference between the samples is significant and

the null hypothesis can be rejected.

Please note that there are two tables for the Kolmogorov-Smirnov test:

one for samples with n less than 40 and an equal number of sampling units,

and one for samples with more than 40 sampling units,

which may also have different sizes, so you need to choose the appropriate one.
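The class-based comparison just described can be sketched as follows. This is an illustrative example only: the height values and class limits below are invented (they do not reproduce the lecture's table), and SciPy's `ks_2samp` is shown as a complement, since it works on the raw data rather than on binned frequencies.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical data: heights (in metres) of plants in two samples.
a = [2, 3, 4, 5, 5, 7, 9, 12, 14, 17]
b = [1, 4, 5, 8, 10, 11, 13, 15, 18, 19]

# Frequency classes of equal width: 1-5, 6-10, 11-15, 16-20.
bins = [0.5, 5.5, 10.5, 15.5, 20.5]
fa, _ = np.histogram(a, bins=bins)
fb, _ = np.histogram(b, bins=bins)

# Cumulative frequencies class by class, and the largest
# absolute difference D between the two samples.
D = np.max(np.abs(np.cumsum(fa) - np.cumsum(fb)))

# D would then be compared with the critical value in the appropriate table.
# SciPy's two-sample K-S test uses the raw (unbinned) data instead:
stat, p = ks_2samp(a, b)
```

With these invented data, sample A has five values in class 1–5 and sample B has three, and D is the largest cumulative difference across the four classes.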

The last non-parametric test I'll show you today is the Wilcoxon Test.

We can use this test when data are paired.

In this case, the two samples certainly belong to the same population, but

show different characteristics in the observed values.

For example, the diameter of the same tree measured one year apart, or

the weight of the same bird, marked and recaptured after migration.

So, to calculate this index, you need to proceed as follows.

First, for

each pair of data, the value observed in one sample is subtracted from the corresponding value in the other.

In this way we obtain a difference that is called d.

We assign ranks to the d values considering their absolute values, ignoring the cases

where d is equal to zero and averaging the ranks of equal values.

Then we assign to each rank a plus or minus sign corresponding to the sign of d,

this time, of course, not in absolute value.

Then we add up separately the positive and the negative ranks,

and we get R plus and R minus.

The lesser of the two values is the statistic T of the Wilcoxon test.

We then consult the probability distribution table of the test

at the value N, which corresponds to the total number of pairs,

excluding those with d equal to 0.

And then, if our T value is less than or equal to the critical value, the null

hypothesis is rejected, and the difference between the samples is significant.

This test can be used only if the total number of pairs, excluding those with d

equal to zero, is greater than or equal to six.
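The Wilcoxon signed-rank steps above can be sketched in the same way. The paired measurements below (tree diameters in two consecutive years) are invented for illustration; `scipy.stats.wilcoxon` with its default settings drops zero differences, matching the rule described in the lecture.

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

# Hypothetical paired data: diameters of the same trees, one year apart.
year1 = np.array([10.2, 11.5, 9.8, 14.0, 12.3, 8.7, 13.1])
year2 = np.array([10.9, 11.3, 10.6, 14.8, 12.3, 9.5, 13.9])

# Differences d, ignoring pairs where d is zero.
d = year1 - year2
d = d[d != 0]

# Rank the absolute differences (ties get average ranks), then sum the
# ranks separately by the sign of d to obtain R plus and R minus.
ranks = rankdata(np.abs(d))
R_plus = ranks[d > 0].sum()
R_minus = ranks[d < 0].sum()

# The smaller of the two rank sums is the Wilcoxon statistic T; it is
# compared with the critical value at N = number of non-zero pairs (N >= 6).
T = min(R_plus, R_minus)

# SciPy computes the statistic and a p-value directly.
stat, p = wilcoxon(year1, year2)
```

A small T (relative to the tabled critical value at N) leads to rejecting the null hypothesis, just as a small p-value does here.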

So today, I presented to you non-parametric tests that

can be used when you are not sure that the data are normally distributed.

These tests are very useful in many situations, especially when you have small samples.