We will introduce methods for conducting systematic reviews and meta-analyses of clinical trials. The course covers how to frame an answerable research question, define inclusion and exclusion criteria, search for evidence, extract data, assess the risk of bias in clinical trials, and perform a meta-analysis. Upon successful completion of the course, students will be able to:
- describe the steps in conducting a systematic review
- use the "Participants, Intervention, Comparison, Outcome" (PICO) framework to frame answerable questions
- describe the process of collecting and extracting data from clinical trial reports
- describe methods for critically appraising the risk of bias in clinical trials
- describe and interpret the results of a meta-analysis


A course from Johns Hopkins University

Introduction to Systematic Review and Meta-Analysis



From this lesson

Planning the Meta-Analysis and Statistical Methods

This module will cover the planning of your meta-analysis and the statistical methods for meta-analysis.

- Tianjing Li, MD, MHS, PhD, Assistant Professor, Epidemiology, Bloomberg School of Public Health
- Kay Dickersin, PhD, Professor, Epidemiology, Bloomberg School of Public Health

So we talked about the assumptions and

then conceptually how it differs from a fixed effect model.

Now we're going to show you mathematically

how you're going to actually implement that.

Performing a random effects meta-analysis, your goal for the analysis is,

we start with the observed effects and try to estimate the population effect.

So I have repeated that idea over and over again.

The observed effects, regardless of model, they are the same.

That's the data you collected from the study.

And the goal is to use a collection of Yi, which could be risk ratios or odds ratios, to estimate the overall mean mu.

And how are we going to do it?

Through meta-analysis.

The overall mean is calculated as a weighted average.

Again, it's just a weighted average.

And the tricky part for the random effects model is figuring out the weight.

Alright, remember we said under a fixed effect model,

the weight equals the inverse of the variance, and we're going to carry that idea over to the random effects meta-analysis.

And now the weight is still equal to the inverse of the variance.

But instead, the variance has two components.

One is the within study variance.

One is the between study variance.

So that's the difference.

The only difference is how you're going to weight each study.

The weight equals the inverse of the variance, but

the variance is modified by the between study variance.

So, we use the tau squared to modify the weights

used to calculate the summary estimate.

That's the difference between the random effects model and

the fixed effect model when you're doing it.

So, let's look at these.

Again, they're just a set of equations.

They look awfully similar to the equations you have seen previously.

However we have stars to denote the weight.

You start with the observed effects and

try to estimate the population effect through computing a weighted mean.

And the weight assigned to each study in a random effects meta-analysis equals one

over the variance.

Now we have a little star here for

the weight as well as the variance, which is different from the fixed effect model.

Because we want to modify that variance.

Because now the variance has the within-study variance for each study i, plus the estimate of the between-study variance, tau squared.

So that's the difference.

So the red circle on this slide shows you the difference.

From this model to the fixed effect model.

So here the variance has two parts: the within-study variance, plus that tau squared, which is an estimate of the between-study variance.

And all the other formulas and equations are the same.

You have seen them, but remember to plug in the correct weight, which is this star,

and that means you have accounted for the between study variance.

So the weighted mean equals the summation yi times wi,

divided by the summation of wi.

Exactly the same equation you have seen previously, but the Wi has been modified.

And you can do the variance, and take a square root of that,

you get a standard error.

And again, you can use the estimate and the standard error to get the lower and

upper confidence limits for the summary effect.

You can do a z test to test the null hypothesis

that the center of the distribution mu is zero.

From there you can get the two sided p-value.
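The calculation just described can be sketched in a few lines of Python (my illustration; the effect sizes, within-study variances, and tau squared below are made-up numbers, not the lecture's data):

```python
import math

def random_effects_pool(effects, within_vars, tau2):
    """Weighted mean with random effects weights w*_i = 1 / (v_i + tau^2)."""
    w = [1.0 / (v + tau2) for v in within_vars]
    mu = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)  # pooled estimate
    se = math.sqrt(1.0 / sum(w))                   # standard error of the mean
    ci = (mu - 1.96 * se, mu + 1.96 * se)          # 95% confidence limits
    z = mu / se                                    # z test of H0: mu = 0
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p
    return mu, se, ci, p

# Hypothetical log odds ratios and within-study variances for three studies
effects = [-0.6, -0.4, -0.9]
within = [0.10, 0.20, 0.15]
mu, se, ci, p = random_effects_pool(effects, within, tau2=0.17)
```

These are exactly the fixed effect formulas, with the starred weight substituted in.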

So everything on this slide, you have seen it previously.

And the only difference is a little star placed in each equation,

meaning here you have to account for the between-study variability.

Now let's talk about the Tau-squared.

So, the within-study variance, you will have that from each individual study.

So let's say you get the odds ratio and

the confidence interval from study one, right?

That confidence interval captures the within-study variance.

And you should know how to get from the confidence interval to the standard error

for the log odds ratio.

You know you have that within-study variance.

So the tricky part, or the difficult part is to get the between study variance.

How are we going to estimate that between-study variance?

There are different ways to do it and one of the most popular one in the literature

is called the DerSimonian-Laird method, or the method of moments.

A different name but the same thing.

And here the tau squared, the between study variance,

equals Q minus the degrees of freedom, divided by C.

And again you will see a set of equations of how to get your Q and

how to get your C.

But if you read the equations carefully, well, we know all those numbers,

just a set of Wi and Yi.

As long as we have the numbers from the studies we can get the Q and C.

And you can plug in the numbers into this equation to get the Q and C.

And the degrees of freedom basically equals the number of studies minus one.

Let's say you have six studies for

meta analysis, then the degrees of freedom would be five.

So that's it, you have to figure out a way to get your between-study variance, and the DerSimonian-Laird method is one of the most popular methods to do it, and here are the equations to do it.
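A minimal sketch of the DerSimonian-Laird estimator in Python (my own, not from the course; Q is written here in its equivalent sum-of-squared-deviations form, and the estimate is truncated at zero when Q falls below the degrees of freedom):

```python
def dersimonian_laird_tau2(effects, within_vars):
    """Method-of-moments estimate: tau^2 = max(0, (Q - df) / C),
    where the w_i are the fixed effect weights 1 / v_i."""
    w = [1.0 / v for v in within_vars]
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw   # fixed effect mean
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1                 # number of studies minus one
    c = sw - sum(wi ** 2 for wi in w) / sw
    return max(0.0, (q - df) / c)         # negative estimates are set to zero
```

Note the weights used inside this estimator are the fixed effect weights; tau squared is estimated first, then fed into the starred random effects weights.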

Just a caveat, if the number of studies is small,

then the estimate of the tau-squared will have poor precision.

What do I mean?

Well you're trying to guess the distribution with a set of studies.

But you have only three studies.

So your guess of that distribution won't be very precise, right?

So that's the idea.

Using the same example that we used under the fixed effect model,

here is the example; you have seen this data already.

Here we have data from six studies.

We have the treated group, the comparison group, and

the number of events and non events for each study.

Okay?

And based on those numbers, you can calculate the odds ratio and

the log odds ratio, which is the effect size, on the second table in this slide.

So we get the odds ratio.

We take the log of that number, we get the effect size.

And we have found the variance within studies,

based on the fixed effect model, right?

And here, we can take the inverse of that number, and get the weight.

And you can do all this calculation, again, not by hand hopefully, but using some statistical software or even an Excel spreadsheet.
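For a single study, the effect size and its within-study variance come straight from the 2x2 counts; a small Python helper (my sketch, with hypothetical counts, not the table's data) shows the standard large-sample formulas:

```python
import math

def log_odds_ratio(events_t, non_events_t, events_c, non_events_c):
    """Log odds ratio and its within-study variance from a 2x2 table."""
    y = math.log((events_t * non_events_c) / (non_events_t * events_c))
    v = (1 / events_t + 1 / non_events_t +
         1 / events_c + 1 / non_events_c)  # large-sample variance of log OR
    return y, v

# Hypothetical study: 10/100 events in the treated group, 20/100 in the comparison
y, v = log_odds_ratio(10, 90, 20, 80)
weight = 1 / v  # fixed effect (inverse-variance) weight
```

In a spreadsheet you would compute these two columns for each study, then sum the weights and weighted effects as described above.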

Here, again you want to focus on the summation of those numbers

as that's what's required for your equation, okay?

So with all those numbers, we're going to take those numbers and

plug them into the formula to get our estimate of the tau squared.

Okay.

On the left hand side of this slide,

those are the equations I showed previously to get your tau squared.

Tau squared equals Q minus degrees of freedom divided by C,

and there are a bunch of equations to get to that point.

But the point is, from the previous slide with the number,

we can plug in those numbers into these equations and try to get the tau squared.

And, we have those numbers from the previous slide.

So the tau squared for this particular meta-analysis equals 0.173.

What is tau squared?

Tau squared is an estimate of the between study variance.

Here we have six studies to estimate that tau square.

Now we have the between study variance, right, which is 0.173.

And there's only one between-study variance, because you're using a bunch of studies to calculate that tau squared, so that's the column for your between-study variances: 0.173 for every single study.

And now you have the variance,

which is the column to the left of the circled column.

So you have the variance within, and you have the variance between.

And now you can get the total variance.

So we have the within-study variance as well as the between-study variance.

And the column on the right of the red circled

column is basically the total variance.

So that's the modified variance from that study and from there you get the weight.

You can calculate the quantity using W times your effect size.

And get all the numbers on the rest of this slide.

So it's easy.

I think as long as you understand there are two components for

your total variance, which are the within-study variance and the between-study variance.

And everything else you can always look up the equations

in figuring out how to do it.

Fortunately, you don't have to do it by hand, and

the statistical software will do it for you.

But I want you to understand how these are derived.

And if you have a point of reference if you want to do it by hand, and

you will be able to go back to the equations.

Remember that the variance for each study is now the sum of the variance within

studies plus the variance between studies, okay?

Again, I just want to repeat the most important difference between the random effects model and the fixed effect model: the weight assigned to study one in the random effects meta-analysis now equals one over the sum of the variance within and the variance between.

So the number from the previous slide is 0.185.

That's the variance within Study one.

And the between study variance, tau squared, the estimate of that, is 0.173.

Okay so now you're going to add up those two numbers, and

using 1 divided by that number you get the weight for study 1.

So the new weight or the modified weight assigned to study

1 under a random effects model, is 2.793.

That's the weight.
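As a quick sanity check of this arithmetic (my own, using the two numbers quoted from the slide):

```python
within_var_study1 = 0.185  # within-study variance for study 1 (from the slide)
tau2 = 0.173               # between-study variance estimate (from the slide)
weight_study1 = 1 / (within_var_study1 + tau2)
# round(weight_study1, 3) gives 2.793, the modified weight quoted above
```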

So, going back to the slide, that's how we got the weight for the first study.

And you can do the same for the second.

All the way to the sixth study.

And that's how you get the modified weight for each individual study.

Okay, now it's easy.

You have seen this multiple times by now.

And the meta analysis, the meta analytical result,

the pooled estimate, basically is just a weighted mean.

And you're going to take the Yi, and

multiply it by your modified weight with the star.

Remember, the star is the difference between the two models.

And you're going to sum them up across all studies.

And that's how you get the weighted mean.

And again, you can plug in the number and get the variance.

Get the standard error, and then you can derive your 95% confidence interval for

your weighted mean.

So here I did it.

You can always go back and

try to see if you can get the same results by plugging those numbers.

The odds ratio under the random effects model.

The pooled odds ratio is 0.568, the lower confidence limit is 0.355, and the upper confidence limit is 0.907.

So what is a random effects model?

Here are some of the summary points that you're going to take away from

this lecture.

Under the random effects model, we assume the true effects in the studies

have been sampled from a distribution of true effects.

So basically, the idea, going back to the slide, is whether the circles from the studies would coincide, lying on top of each other, or follow a distribution.

That's the difference between the two models.

When they're identical, lying on top of each other, that's the fixed effect model.

When you're assuming they have been sampled from a distribution of true

effects, then that's a random effects model.

And the summary effect is our estimate of the mean of all relevant true effects, and we can test a null hypothesis that the mean of these effects is 0 for a difference, or 1 for a ratio.

And the confidence interval for the random effects estimate indicates our uncertainty

about the location of the center of the random effects distribution.

Not its spread.

So this is a difficult concept to grasp at a glance, but the idea is the confidence

interval from your random effects model tells you how uncertain or

how certain you are about the location of the center of that mean distribution.

And our goal is to estimate the mean of the distribution,

taking into account two sources of variance, the within, and the between.

And here I contrast the output

from the fixed effect model versus the random effects model.

So on top, the first figure shows you the results from the fixed effect model, and the pooled odds ratio is 0.5485 with a 95% confidence interval.

And if you remember, the fourth study, the large one, takes up 41% of the weight under the fixed effect model.

And if you look at the same study, under the random effects model it takes up only 25% of the weight.

So the random effects model basically assumes all these studies

have a distribution of effects so

you're adding another source of variance to each individual study.

So the idea is the random effects model is pulling the studies, or shrinking the estimates, together.

So the random effects model is giving a little bit more weight to the smaller studies and cutting a little bit of weight from the larger studies, so

it's basically pulling them towards the center.

So under the random effects model, the pooled estimate is 0.568, and you have an estimate of the confidence interval.
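The reweighting described here can be sketched in a few lines of Python (my illustration, with hypothetical within-study variances, not the lecture's data): adding a common tau squared to every study's variance flattens the relative weights, so the most precise study loses share and the smaller studies gain.

```python
def relative_weights(within_vars, tau2=0.0):
    """Inverse-variance weights, normalized to sum to 1."""
    w = [1.0 / (v + tau2) for v in within_vars]
    total = sum(w)
    return [wi / total for wi in w]

# Hypothetical variances: one large, precise study and two small ones
v = [0.02, 0.30, 0.40]
fixed_w = relative_weights(v)             # tau2 = 0: fixed effect weights
random_w = relative_weights(v, tau2=0.2)  # tau2 > 0 flattens the weights
# The precise study's share drops under the random effects model,
# pulling the pooled estimate toward the center, as described above.
```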

And I want to go back to the basic idea.

So the basic idea is what?

We have a bunch of studies and in this example six of them.

And you have odds ratio from each individual study.

So if you look at the second column from both figures

those are the odds ratios you get from the individual studies.

Those are the data you collected as part of your data abstraction for

your systematic review.

Regardless of which model you're going to use,

those numbers remain the same, because that's the data from what you observed.

And the only differences here is, now we have the observed data,

we're trying to guess or we're trying to estimate where that diamond is.

So the diamond is the pooled estimate.

We have two diamonds, one under the fixed effect model and

one under the random effects model.

But by using these two different models, the diamond lies slightly differently on the plot, and it will have a little bit wider confidence interval.

As you can imagine,

under the random effects model, you're less certain about the center of the diamond because now you're saying we have two sources of variation.

One is the within and one is between.

So the data you have has been partitioned

to estimate those two sources of variation.

That's why it's a little bit wider than under the fixed effect model.

So again you're using what you observed to figure out where the diamond is.

And you can do it through making two different assumptions.

One is all the studies are identical.

That's why you're getting a very precise estimate under the fixed effect model.

Or you are saying the studies are slightly different from one another.

That's usually the case.

That's why you're getting a less precise estimate under the random effects model.

So, I have said this multiple times.

Under the fixed effect model,

you assume that the true effect is the same in each study.

And that the only reason for

variation in estimates between studies is sampling error.

Going back to the previous example,

all the differences in odds ratio you're seeing from the previous example

under a fixed effect model is because of the sampling error.

However, under the random effects model, the model is trying to estimate a mean effect about which it is assumed that the true study effects vary.

So again, the idea is you assume that all these studies are estimating an effect but

there's variation between studies.

We have covered in this lecture what is a fixed effect model and

what is a random effects model.

Those are the two models you're going to see in the literature

when people are trying to put studies into a meta-analysis.

We haven't talked about which model to use, whether a fixed or random.

And we haven't really talked about how to quantify the amount of

statistical heterogeneity.

We said, based on your qualitative analysis,

you're going to see the studies are slightly different from one another.

How can we quantify that statistically?

Can we come up with a number to say how different they are?

And we're going to talk about how to explore the sources of heterogeneity, for

example, through using meta regression and sub group analysis.

We will cover these topics in upcoming lectures.

And thank you for today, we have covered a lot and

I hope you have learned a lot and we will be here to address your questions.
