0:50

The first condition involves the characteristic function of a vector X;

here I'm speaking about multivariate characteristic functions.

It is the mathematical expectation of the exponent in the power i

multiplied by the scalar product of

the deterministic vector u and the random vector X.

So this mathematical expectation

is of the following form.

It is the exponent in the power i

multiplied by the scalar product of u and mu,

2:54

mu is exactly the same vector

as in the previous item.

And as for X0, X0 is a standard normal vector in

the sense that all of its components are independent and

all of them have a standard normal distribution.

Normal distribution with parameters 0 and 1.

Before I prove the theorem, let me just remark

on the objects which you see in the formulation of the theorem.

I mean the vector mu and the matrices C and A.

In fact, from the proof we'll see that mu is

the vector of mathematical expectations.

That is, it is equal to mathematical expectation of x1 and

so on, mathematical expectation of xn.

As for the matrix C which appears also in the first item,

this is a covariance matrix.

If I denote the elements of this matrix by Cjk,

where the indices j and k run from 1 to n,

then these Cjk are the covariances between Xj and Xk.

4:25

Well, this matrix is of course symmetric because covariance between Xj and

Xk is equal to the covariance between Xk and Xj.

And it is also positive semi-definite;

just recall that this property

means that the sum of uk Ckj uj,

summed over k and j from 1 to n, shall be

non-negative for any u1 and so

on, un, that is, for any u from Rn. In other words,

we can write this in a more compact way.

So if you multiply the matrix C by the vector u

transposed to the left, and by u to the right,

this object should be non-negative for any u.

There is some confusion in the notations: the two terms positive definite and

positive semi-definite are sometimes mixed.

For instance, you can see in the literature that

positive definite means exactly a matrix with this property.

Or sometimes there is a distinction between positive definite and

positive semi-definite.

Definite then means that the inequality here holds with a strict sign, that is,

this quantity is strictly larger than zero, while positive semi-definite is exactly as in our definition.

So to avoid any confusion during this lecture,

I will mean by positive semi-definite exactly this property.
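As a side note not from the lecture itself, this property is easy to check numerically: for a symmetric matrix, u transposed C u being non-negative for all u is equivalent to all eigenvalues being non-negative. A minimal Python sketch (the matrix C here is my own illustrative example):

```python
import numpy as np

# An example symmetric matrix (illustrative choice, not from the lecture).
C = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Direct check of the definition: u^T C u >= 0 for many random u.
rng = np.random.default_rng(0)
quad_forms = [u @ C @ u for u in rng.standard_normal((1000, 2))]
assert min(quad_forms) >= 0

# Equivalent check for symmetric C: all eigenvalues are non-negative.
eigenvalues = np.linalg.eigvalsh(C)
assert (eigenvalues >= -1e-12).all()
```

Here `np.linalg.eigvalsh` is used because it is specialized for symmetric matrices and returns real eigenvalues in ascending order.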

It's easy to prove that the matrix C,

the covariance matrix, satisfies this assumption.

In fact, what is written in the left-hand side is the sum over k, j from 1 to n of

uk times the covariance between Xj and

Xk, times uj, and this is equal to

the covariance between the sum

over j from 1 to n of uj Xj and the

sum over k from 1 to n of uk Xk.

And you see that actually these two random variables are completely the same.

Only the index of summation differs, but they are the same.

So this is nothing more than the variance of the sum over j from 1 to n of uj Xj, and

you know that the variance is a non-negative function.

Therefore, this matrix C is positive semi-definite,

and basically this matrix is used in item 1.
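This variance identity can be illustrated numerically (my own sketch, with arbitrary sample data): u transposed C u computed from a sample covariance matrix coincides with the sample variance of the linear combination of the uj Xj, which is why C is positive semi-definite.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 3)) @ rng.standard_normal((3, 3))  # correlated sample

C = np.cov(X, rowvar=False)        # sample covariance matrix, shape (3, 3)
u = np.array([0.5, -1.0, 2.0])     # an arbitrary vector u

# u^T C u equals the variance of the linear combination sum_j u_j X_j ...
lhs = u @ C @ u
rhs = np.var(X @ u, ddof=1)        # ddof=1 matches np.cov's normalization
assert np.isclose(lhs, rhs)

# ... and a variance is non-negative, so the quadratic form is non-negative.
assert lhs >= 0
```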

Let me now continue.

Now what about matrix A?

This matrix appears as a second item of the theorem.

So A is actually the matrix C in the power ½. What does this notation mean?

It means that A is a matrix

such that if you multiply A by itself, you will get C.

7:56

Well, you know that C is a positive semi-definite matrix.

Therefore, there exists an orthogonal matrix U,

the matrix of a change of basis.

So this matrix has the property that the inverse of U is equal to U transposed,

such that the matrix C is equal to U transposed

multiplied by the diagonal matrix with elements d1 and

so on, dn, and multiplied by U.

8:59

Take A equal to U transposed, multiplied by the diagonal matrix with elements

square root of d1 and so on, square root of dn, multiplied by U.

This matrix has exactly this property:

if you multiply A by A, you will get exactly the matrix C.

And this matrix is actually also symmetric.

Therefore C, in this case, is also equal to A A transposed.
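The construction of A = C^½ can be written out in code; note that NumPy's `eigh` returns the decomposition in the form C = U diag(d) U transposed, with the orthogonal matrix on the left (a minimal sketch with an illustrative C):

```python
import numpy as np

# An example positive semi-definite matrix (illustrative choice).
C = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Spectral decomposition C = U diag(d) U^T with orthogonal U.
d, U = np.linalg.eigh(C)

# Matrix square root: A = U diag(sqrt(d)) U^T.
A = U @ np.diag(np.sqrt(d)) @ U.T

assert np.allclose(A @ A, C)       # A multiplied by itself gives C
assert np.allclose(A, A.T)         # A is symmetric, so C = A A^T as well
```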

So now we know what this object means, so

we have the exact expression of mu of the matrix C and of the matrix A.

Let me now prove these facts, I mean the fact

that our definition is equivalent to the first and

to the second items of the theorem.

So the scheme of the proof is the following.

I will firstly show that definition is equivalent to the item 1,

the first item of the theorem,

and then I will show that items 1 and 2 are equivalent.

Let me start with the first part.

So let me first show that, from the definition,

the characteristic function is exactly of this form.

10:17

Well, this statement is in fact not so difficult,

and to show this, let me first mention.

That since we assumed that x is Gaussian by definition,

this scalar product of u and X has a normal distribution.

In fact, this scalar product is nothing more than a linear combination of

the components of the vector X.

And therefore, according to the definition,

it should have a normal distribution.

Therefore, what we have here,

the characteristic function of the vector X at u,

this object can be considered as the characteristic

function of a random variable xi,

which is the scalar product of u and X, at the point 1.

So this is nothing more than the characteristic function of xi at the point 1.

11:24

As I mentioned in the beginning of this lecture,

the characteristic function of a normal random variable is of the form exponent in

the power i, then the parameter mu of this random variable xi,

multiplied by the argument, which equals 1 in this situation,

minus ½ sigma xi squared

multiplied by the argument squared, which is again 1.

So to prove this item, it is sufficient to find the parameters mu and

sigma for the random variable xi, and let us do this now.

So what do we know about mu xi?

12:22

This is exactly the mathematical expectation of the sum of uk

Xk, where k runs from 1 to n.

You know that the mathematical expectation is a linear function,

therefore it is equal to the sum over k from 1 to n of uk, and

here I shall write the mathematical expectation of Xk.

As you know, mu is exactly equal to this,

to the vector of mathematical expectations.

Therefore, I can simply write mu k here, and

conclude that this is a scalar product of u and mu.

13:07

Now what about sigma xi squared?

This is actually the variance of the random variable xi, and

let me write this variance as the covariance between xi and xi.

That is, a covariance between

the sum Uk Xk and the sum Uj Xj.

So here, k runs from 1 to n, here j from 1 to n.

14:04

According to our notation, this is nothing more than

the covariance matrix C, and therefore

this sum is equal to the product u transposed C u.

If you now substitute these expressions for sigma and

mu into this formula, namely this expression instead of mu.

And this expression instead of sigma,

you will get exactly the statement of item 1.

So you will get the characteristic function of X

is equal exactly to this formula.
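Item 1 can also be verified empirically: a Monte Carlo average of exp(i times the scalar product of u and X) over simulated Gaussian samples should approach exp(i times the scalar product of u and mu, minus ½ u transposed C u). A rough sketch (my own example values for mu, C, and u):

```python
import numpy as np

rng = np.random.default_rng(2)

mu = np.array([1.0, -0.5])
C = np.array([[1.0, 0.3],
              [0.3, 0.5]])

# Monte Carlo estimate of the characteristic function E exp(i <u, X>).
X = rng.multivariate_normal(mu, C, size=200_000)
u = np.array([0.7, -0.4])
empirical = np.mean(np.exp(1j * X @ u))

# Theoretical value exp(i <u, mu> - 1/2 u^T C u) from item 1.
theoretical = np.exp(1j * u @ mu - 0.5 * u @ C @ u)

assert abs(empirical - theoretical) < 0.02
```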

14:46

As for the converse, why does the definition follow from item 1?

Actually, there is nothing to prove because there is one to one correspondence

between the distributions and their characteristic functions.

Therefore, if we know that for Gaussian vectors

the characteristic function is of this form,

Then with no doubt, if you know that the characteristic function is of this form,

then vector is Gaussian, so there is nothing to prove.

15:33

Actually, X0 is a standard normal vector and therefore

this vector is Gaussian, because any linear combination of independent,

normally distributed random variables also has a normal distribution.

So the vector X0 is Gaussian, and

therefore we can use what we have already proven.

So its characteristic function is equal to this expression.

Here mu stands for

the vector of mathematical expectations of the components of X0.

All components are standard normal, and therefore this vector is equal to 0.

As for the matrix C, it's the covariance matrix;

all components are independent and

therefore, outside the main diagonal, all elements are equal to 0.

And on the diagonal there are units, because

the variances of all components are equal to 1.

16:39

So what we have here, so the characteristic

function is actually equal to the exponent

in the power minus ½ U transpose U.

Now let me mention that if we now consider the characteristic function of the vector X,

it is closely related to the characteristic function

of X0, because the characteristic function of X is

the mathematical expectation of the exponent in the power i times the scalar product of u and X.

And now instead of X,

I will write this formula: A X0 + mu.

18:08

And now we will substitute our expression for

the characteristic function of x0 into this formula.

What we'll get is the exponent

in the power i u mu, multiplied by

the exponent in the power minus ½

u transposed A A transposed u.

And if we now denote AA transposed by matrix C we'll

definitely get the characteristic function is of this form.

C is symmetric because C transposed equals C, and

it is also positive semi-definite, because any matrix

of the form A A transposed is positive semi-definite.

Finally, let me mention that the opposite statement,

that the second item follows from the first, is almost already proven.

Because if we now denote by A the matrix C in the power ½,

it's easy to see that the characteristic function

of this object will be exactly of this form.
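Item 2 also gives the standard recipe for simulating a Gaussian vector with prescribed mean mu and covariance C: generate a standard normal X0 and set X = A X0 + mu. A minimal sketch (my own example parameters, using the symmetric square root as in the proof):

```python
import numpy as np

rng = np.random.default_rng(3)

mu = np.array([1.0, 2.0])
C = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# A = C^(1/2) via the spectral decomposition, as in the proof.
d, U = np.linalg.eigh(C)
A = U @ np.diag(np.sqrt(d)) @ U.T

# X0: rows are standard normal vectors with independent N(0, 1) components.
X0 = rng.standard_normal((100_000, 2))

# Item 2: X = A X0 + mu is Gaussian with mean mu and covariance C.
X = X0 @ A.T + mu

assert np.allclose(X.mean(axis=0), mu, atol=0.05)
assert np.allclose(np.cov(X, rowvar=False), C, atol=0.05)
```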

So this theorem is completely proven.

And let me now show how the theorem helps us

to answer some very interesting mathematical questions.
