0:05

Welcome back to Intuitive Introduction to Probability.

In the last lecture,

I gave you some intuitive definitions of how we can define probabilities.

There was the classical definition, the empirical probability definition, and

the subjective probability definition.

In this lecture, I now want to be a little more precise, and

give correct definitions of what a probability exactly is.

And we will return to those three definitions.

0:37

So, we need a little language to describe the probability setting.

There's first, a random experiment.

Sounds kind of strange, sounds like we're in a lab.

But in probability theory,

anything that has an uncertain outcome is called a random experiment.

That could be rolling a die, that could be pulling a card, or it could be a spin at a roulette table.

But it's also in the real world.

The weather next Monday, or if you think about the stock price of

your favorite company, the stock price next Tuesday.

We don't know those.

Those are uncertain outcomes, and therefore,

in our language, those are random experiments.

1:19

Now, what can happen in a random experiment?

Those are called basic outcomes.

For a die it's very easy.

One, two, three, four, five, six are the basic outcomes.

If you think about the exact weather or

the exact stock price, this is already a little bit more tricky.

And we will see examples as we go along.

So the collection of all basic outcomes is called the sample space.

And I apologize, we need a little notation.

Here's our first notation in this class.

S for the sample space.

Now, when we want to define probabilities, we need subsets,

some elements, perhaps not all of them, of the sample space.

And those are called events, and again we need a little notation:

they are typically denoted by capital letters A, B, C, and so on.

2:10

Now, having said this, we can define probability.

That's the chance or the likelihood that an uncertain event will occur.

That's a number that's between 0 and 1.

Some people prefer to use percentages, 0% to 100%.

Now here be careful.

A probability cannot be larger than 100%.

It cannot be 120%.

It also cannot be negative, say minus 10%.

Sometimes people confuse growth rates, which can be negative or

larger than 100%, with probabilities.

So please be careful.

Probabilities are numbers between 0 and 1, 0% and 100%.

And now, we can give precise definitions for the three probability concepts.

The classical probability concept rests on an important assumption:

all basic outcomes are equally likely.

So, for example, for a fair die that assumption is given.

Every number, one, two, three, four, five, six is equally likely.

And then the probability of an event is the number of elements in your event

divided by the total number of basic outcomes in the sample space.

We'll see examples soon.
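The counting in the classical definition can be sketched in a few lines of code. This is a purely illustrative Python snippet (the function name `classical_probability` is my own, not from the lecture):

```python
from fractions import Fraction

def classical_probability(event, sample_space):
    # Classical definition: number of elements in the event divided by
    # the total number of basic outcomes. Valid ONLY when all basic
    # outcomes are equally likely (fair die, lottery, roulette, ...).
    return Fraction(len(set(event) & set(sample_space)), len(sample_space))

die = {1, 2, 3, 4, 5, 6}                   # sample space S of a fair die
evens = {2, 4, 6}                          # event A: roll an even number
print(classical_probability(evens, die))   # 1/2
```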

Now, as I said, in addition to a fair die,

that assumption holds in a lottery, at a roulette table, and so on.

Now, this key assumption that all the basic outcomes have to be equally

likely is often not satisfied when we talk about real world applications.

And therefore this beautiful definition,

while helpful when you play a card game with your buddies or

a dice game with some kids, is essentially useless for real-world applications.

That's why we need more definitions.

4:07

If you have data and you can derive proportions from historical data,

that's when we get to the empirical probability definition,

also called the relative frequency probability.

Here the idea is now that we have repeated trials of an experiment.

For example, I may look at the weather on a particular day for

the last 20 years and say, can I use some average there to make a prediction.

What's the probability it will rain, what's the probability it will be sunny?

Or if I think of stock prices in the stock market,

people in finance love to look at the last 250 trading days, and

say how many days did the stock go up, how many days did it go down?

And based on that, get empirical definitions.

So the definition says: the probability of an event is how often that event occurred in

a series of trials, divided by the number of trials.

And that's now the empirical probability definition also used in medicine and

the pharmaceutical industry, whenever we do drug testing.
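The empirical definition can likewise be sketched as a simulation. A hedged Python illustration follows, with a simulated die standing in for any repeated trial, such as daily weather or stock movements:

```python
import random

def empirical_probability(trials, event, experiment):
    # Relative frequency: how often the event occurred in a series of
    # trials, divided by the number of trials.
    hits = sum(1 for _ in range(trials) if experiment() in event)
    return hits / trials

random.seed(42)                            # make the demo reproducible
roll = lambda: random.randint(1, 6)        # one trial: roll a fair die
estimate = empirical_probability(10_000, {2, 4, 6}, roll)
print(round(estimate, 3))                  # close to the classical 0.5
```

With more trials the relative frequency settles near the classical value, which is why historical proportions are used as probability estimates.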

5:16

Sometimes, things get even worse.

You don't have data, you have no clue, and

at that point we go to the subjective probability definition.

Very important, for example, if you have a new product development,

you bring out a brand new product, a really disruptive innovation.

You have no idea whether your customers like it or not.

Maybe you have some, what managers like to call gut feeling or experience.

And based on that experience,

you say, I think there's a 75% chance this will be a successful product.

5:51

And so in that case we talk about subjective probabilities.

Very important in managerial decision making and everyday decision making.

Now, probabilities have to satisfy some basic rules.

I have to give you three rules.

They look a little mathematical, but there is nothing to derive here; we simply take them as given.

These are also called axioms.

Or, after the Russian mathematician Andrey Kolmogorov, who was the first to write these down in 1933,

they are also called Kolmogorov's axioms.

6:34

Rule number one,

the probability that some outcome in the sample space occurs, P(S), must be 1.

Something must happen when we do our random experiment.

P(S) = 1.

Second rule, very intuitive, any probability is a number between 0 and 1.

Can't be larger than 1, cannot be negative.

Please keep that in mind.

And finally, the third, a little more complicated rule,

if you have two events that have no elements in common,

also called disjoint by mathematicians, then the probability that A or

B happens, or the probability that A union B happens,

equals the sum of the individual probabilities.

P(A) + P(B).

If that looks a little tricky already to you, let's look at an example and

let's go back to the fair die.

7:25

What's the random experiment?

I'm rolling a fair die.

I don't know what's going to happen.

That's now my experiment.

What are the possible outcomes?

One, two, three, four, five, six and they together build the sample space.

7:38

Now I told you about events.

Events are subsets of S, and here I pick an event A,

the even numbers {2, 4, 6}.

And an event B, the numbers {1, 5}.

And now let's look at A and B and their probabilities and see how this works.

8:10

Since we believe it's a fair die, we can use a classical definition.

All six numbers are equally likely, and so I'm allowed to just divide.

So, clearly P(S), the probability of rolling any number one through six, is equal to 1.

Seven cannot happen.

Zero cannot happen.

Pi cannot happen.

Now, let's look at the probabilities of the two events.

A has three elements, 2, 4, 6.

3 out of 6 is 1/2 = 0.5 = 50%.

B only has two elements.

So probability of B?

2/6 = 1/3.

8:53

Now, A union B.

Hopefully you remember from your middle school math classes,

if I have two sets and I build the union, I take all the elements together.

So if I take A, {2, 4, 6}, with the union of {1, 5}, I get {1, 2, 4, 5, 6}.

Probability now of A union B, of either A happening or

B, is 5 divided by 6, 5 elements divided by 6.

Notice that those two events have no elements in common.

There's no number that's in A and in B.

So, they are disjoint, the intersection is the empty set.

And therefore now I can use my probability rule,

the probability of A union B is the sum of P(A) and P(B).

3/6 plus 2/6 is 5/6 and

guess what, that's exactly the right answer that we saw before.
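The whole die example, including the disjoint-union check, can be verified mechanically. Here is a small Python sketch under the classical (equally likely) assumption:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}                     # sample space of a fair die
A = {2, 4, 6}                              # event A: even numbers
B = {1, 5}                                 # event B

def P(event):
    # Classical probability on the fair die
    return Fraction(len(event), len(S))

assert A & B == set()                      # A and B are disjoint
assert P(A | B) == P(A) + P(B)             # third axiom: additivity
print(P(A | B))                            # 5/6
```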

So, I have now shown you the three axioms, the fundamental rules.

From those, we can derive further rules,

some additional rules that are very, very helpful.

First, the complement rule.

10:06

What's a complement of a set?

The complement of an event A, or a set A, is the set of all elements in S that are not in A.

And not surprisingly, the complement rule says the probability that

the opposite of A happens is just one minus the probability that A happens.

10:26

And then we have the addition rule, the general addition rule that always holds

even when A and B are not disjoint, when there is something in the intersection.

And then the rule gets a little more complicated.

Then the probability of A union B is P(A) + P(B), but

then now I need to subtract the probability of the intersection.

Because otherwise there would be some double counting.

Let's look again at our little example.

10:55

What's the opposite of an even number, 2, 4, 6?

You see, odd numbers.

The complement are the odd numbers, 1, 3, and 5.

The probability of an odd number is 1 minus the probability of the even numbers.

1 - 1/2, bingo, is 1/2 again.
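That complement calculation can be checked in the same style; again an illustrative Python sketch:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}                     # sample space of a fair die
A = {2, 4, 6}                              # event A: even numbers

def P(event):
    # Classical probability on the fair die
    return Fraction(len(event), len(S))

A_complement = S - A                       # the odd numbers {1, 3, 5}
assert P(A_complement) == 1 - P(A)         # complement rule
print(P(A_complement))                     # 1/2
```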

Now let's look at an event C, that has the elements 1, 2, 3, 4.

A union C now is 1, 2, 3, 4, 6.

Five numbers, five out of six.

So the probability should get 5/6.

Now if I naively use the rule, probability of A union C = P(A) + P(C),

and I add: A has a probability of 3/6, C has a probability of 4/6.

I get 7/6.

That's not the correct probability because I'm double counting the numbers 2 and 4,

which are both in A and in C.

And therefore I need to subtract them out.

That's why we now have this general rule.

And bingo, I get again the right answer of 5/6.
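The double-counting argument can also be verified directly; an illustrative Python sketch of the general addition rule:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}                     # sample space of a fair die
A = {2, 4, 6}                              # event A: even numbers
C = {1, 2, 3, 4}                           # event C

def P(event):
    # Classical probability on the fair die
    return Fraction(len(event), len(S))

# Naively adding double-counts the shared elements 2 and 4:
assert P(A) + P(C) == Fraction(7, 6)       # 7/6 is not a valid probability
# The general addition rule subtracts the intersection:
assert P(A | C) == P(A) + P(C) - P(A & C)  # 3/6 + 4/6 - 2/6 = 5/6
print(P(A | C))                            # 5/6
```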

To summarize this lecture, I gave you formal definitions of

the three probability concepts that we have.

Very important, familiarize yourself with them.

It's not always the classical probability that we learn as kids

as soon as we play a dice game or we play a card game.

There are more important definitions for real-world decision making.

The empirical probability definition and the subjective probability definition.

I showed you the fundamental rules, also called the axioms of probability.

And finally two derived rules, which are very helpful in applications,

and we will see them in action in the next couple lectures.

Thanks for your attention.

I look forward to seeing you back in the next lecture.