We will introduce methods for conducting systematic reviews and meta-analyses of clinical trials. The course covers how to formulate an answerable research question, define inclusion and exclusion criteria, search for the evidence, extract data, assess the risk of bias in clinical trials, and perform a meta-analysis. Upon successfully completing this course, students will be able to: - Describe the steps in conducting a systematic review - Develop answerable questions using the "participants, interventions, comparisons, outcomes" (PICO) framework - Describe the process of collecting and extracting data from reports of clinical trials - Describe methods of critically assessing the risk of bias in clinical trials - Describe and interpret the results of a meta-analysis


Course from Johns Hopkins University

Introduction to Systematic Review and Meta-Analysis

1096 ratings


From this lesson

Introduction

To get the ball rolling, we'll take a broad overview of what to expect in this course and then introduce you to the high-level concepts of systematic review and meta-analysis and take a look at who produces and uses systematic reviews.

- Tianjing Li, MD, MHS, PhD, Assistant Professor, Epidemiology, Bloomberg School of Public Health
- Kay Dickersin, PhD, Professor, Epidemiology, Bloomberg School of Public Health

So meta-analysis, a lot of people equate it with systematic reviews, but

it's actually different.

Not all systematic reviews would have a meta-analysis,

and meta-analysis is really a component of a systematic review

where you have enough data to combine them statistically.

And here are two classic definitions of meta-analysis.

It is the statistical analysis of a large collection of analysis results from

individual studies for the purpose of integrating the findings.

Or alternatively, a statistical analysis which combines the results of

several independent studies considered by the analyst to be combinable.

Personally, I like the second definition better because you have to decide,

as a systematic reviewer, whether the studies are similar enough,

such that you can combine them in your meta-analysis.

Most meta-analyses are presented in forest plots.

And here is one example of a forest plot.

Here we have five studies.

And each line and the square in the center represent the results from one study.

And the size of the square is proportional to the weight each study carries in the meta-analysis.

The larger the square, the more weight the study carries.

And we also have the two whiskers around the square, which show you the confidence interval for each study.

If you look down on the plot, you will see a diamond, a blue diamond.

That's where the meta-analytic effect lies: its center is the point estimate, and its width is the confidence interval for the meta-analysis.

Depending on the measure of association you're going to use, you may have a different scale on the x-axis.

For example, here we're using the risk ratio.

A risk ratio of one is the null effect.

And you can label your figure such that, if the diamond

lies on the left of the line of no effect, it favors the treatment.

Or if the diamond lies to the right-hand side of the line of no effect, it favors the control.

So you can actually show the direction of effect on the same plot.

This is called a forest plot, and it shows you the meta-analysis results,

as well as the results from individual studies you put into your meta-analysis.
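The weighting behind the squares and the diamond can be sketched in a few lines of Python. This is a minimal fixed-effect (inverse-variance) sketch; the five log risk ratios and standard errors are made-up numbers for illustration, not taken from any real review:

```python
import math

# Illustrative log risk ratios and standard errors for five studies
# (made-up numbers, not from a real review).
log_rr = [-0.35, -0.20, -0.50, -0.05, -0.25]
se = [0.30, 0.15, 0.40, 0.25, 0.20]

# Inverse-variance weights: more precise studies (smaller SE) get
# bigger weights, and therefore bigger squares on the forest plot.
weights = [1 / s**2 for s in se]
pooled = sum(w * y for w, y in zip(weights, log_rr)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect (the diamond).
lo = pooled - 1.96 * pooled_se
hi = pooled + 1.96 * pooled_se

# Back-transform from the log scale to the risk ratio scale on the x-axis.
print(f"Pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
# Pooled RR = 0.80 (95% CI 0.66 to 0.97)
```

Because the whole interval sits below the null value of one, the diamond in this sketch would lie on the side of the plot favoring the treatment.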

Meta-analysis provides us with statistical methods for answering: what is the direction of the effect or association?

What is the size of effect?

And is the effect consistent across studies?

You may also want to ask the question, what is the strength of evidence for the effect?

Assessment of the strength of evidence relies additionally on the judgement of

the study quality, study design, as well as the statistical measure of uncertainty.

Again, a general framework for synthesis or for your meta-analysis

is that you want to answer the question of what is the direction of effect?

What is the size of effect?

And whether the effects are consistent across studies.
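Consistency across studies is usually quantified with Cochran's Q statistic and I². As a rough sketch, again with made-up effect estimates rather than numbers from any real review:

```python
import math

# Illustrative log risk ratios and standard errors (made-up numbers).
log_rr = [-0.60, -0.05, -0.80, 0.20, -0.30]
se = [0.30, 0.15, 0.40, 0.25, 0.20]

# Fixed-effect pooled estimate via inverse-variance weights.
weights = [1 / s**2 for s in se]
pooled = sum(w * y for w, y in zip(weights, log_rr)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the pool.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_rr))
df = len(log_rr) - 1

# I^2: the share of total variability due to between-study heterogeneity
# rather than chance (floored at zero when Q < df).
i_squared = max(0.0, (q - df) / q) * 100

print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
# Q = 7.78 on 4 df, I^2 = 49%
```

A sizable I² like this one is a signal to look for reasons the studies differ before trusting a single pooled number.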

What Can Meta-Analysis Help You Do?

If you have several studies included in your systematic review, and

they're similar enough, when you put them together in a meta-analysis,

you can determine whether an effect exists in a particular direction.

It helps you to combine the results quantitatively and

obtain a single summary result which is shown as a diamond on the forest plot.

You can also use systematic reviews to investigate heterogeneity,

to examine reasons for different results among studies.

Again, very unlikely you will get identical results from studies on

a research question.

But you will be able to look at why they are different using meta-analysis and

related methods.

As I mentioned earlier, I really like the definition where it is the analyst who decides whether the studies are combinable, and here is why.

There must be a justification for combining results.

You have to decide whether studies are estimating, in whole or in part, a common effect.

This is very important, because as I said,

very unlikely you will have two identical studies.

They are similar in some way, and you have to decide if they are similar enough.

When you decide to combine the studies together,

hopefully the studies are addressing the same fundamental, biological, clinical or

mechanistic question.

Let's look at one example.

The research question is what is the effect of interferon therapy in

Hepatitis C?

And the size of the effect might be higher or lower when the participants are older, more educated, or healthier than others.

So some of your studies might be conducted in older participants, while others in younger participants.

Some conducted in North America, and others in Africa.

There are different forms of interferon, as well as different concentrations,

dosages, and usages.

And there are also different viral subtypes of Hepatitis C.

So when you have a handful of studies that address this question,

what is the effect of interferon therapy in Hepatitis C,

there will be characteristics on which the studies differ from one another.

There could be different designs, different participant characteristics,

different intervention characteristics, and even different outcomes and

you have to decide whether these studies are addressing the same question.

When we decide to incorporate a group of studies in the meta-analysis,

we assume that the studies have enough in common

that it makes sense to synthesize information.

When it doesn't make sense to synthesize information,

you don't have to do a meta-analysis.

So when to do a meta-analysis?

When more than one study has estimated a treatment effect or association.

That means when you have two or

more studies that have estimated the same treatment effect or association.

And when the differences in study characteristics are unlikely to affect

the treatment effect.

Again, as I said, if a study characteristic, let's say being conducted in older participants, is likely to affect the treatment effect, meaning that the treatment is less effective in older adults,

Then perhaps it's not a good idea to combine the studies done

in older adults with the studies done in younger adults.

When the treatment effect has been measured and reported in similar ways, that is, when you have the data available, that's when you can do a meta-analysis.

There are also cases where you do not want to do a meta-analysis.

For example,

if the information presented in the individual studies is really not that great.

A meta-analysis is only as good as the studies in it.

You may get a very precise estimate by combining all the studies together, but if those studies are biased, the combined estimate is no better than the biased studies on their own.

You have to be aware of reporting biases.

That means the published information may differ from all the information that exists, including the unpublished information.

You don't want to mix apples with oranges: although mixing them is not useful for learning about apples, it is useful for learning about fruit.

Again the studies must address the same question, though the question can and

usually must be broader.

So when you do a meta-analysis on a set of studies, it is unlikely that all the studies will be identical.

They will be similar.

And as a data analyst, you have to decide how similar is similar, and

how different is different, and when to do a meta-analysis.

I'm going to end this section by showing you a very good example

of the importance of synthesizing what we know in an ongoing fashion.

And this is a classic example.

And there are many more examples like this one.

And this is a cumulative meta-analysis.

So here, each line and circle on the plot no longer represents the results from one single study.

Each line and circle represents a meta-analysis.

And here we have a series of meta-analyses, done cumulatively.

And this shows us how important it is that we synthesize what we know in an ongoing fashion.

And the topic is now thrombolytic therapy in preventing death,

in people who have already had a heart attack.

And the way you're going to read this plot, or the cumulative meta-analysis, is as if we're adding or throwing studies into the pot one at a time.

If we had been keeping track of the information, what would the meta-analysis results have shown at each time point?

So let's take a look.

The first study on the thrombolytic therapy in preventing death

in people who already had a heart attack, was done in the early 1960s.

And the first randomized controlled trial included 23 participants.

It had a point estimate around 0.5.

Odds ratio 0.5, favors the treatment.

Although the confidence interval is very wide, crossing the null value of one.

And a second analysis was done, again in the early 1960s, so

the second study was added to the first study.

So combining the two studies together, there were 65 participants.

As you can see, the point estimate stays about the same,

although the confidence interval gets tighter.

So, people kept doing the same experiment, or randomized controlled trials, over and over again.

By the early 1970s, by the time we had ten randomized controlled trials, 2,544 participants had been randomized in those ten trials.

It shows that thrombolytic therapy was effective.

And the confidence interval,

the upper bound of the confidence interval, does not cross the null value.

And the P value is less than 0.01.

So by the early 1970s, if we were keeping track of the evidence,

we would have known that thrombolytic therapy is effective.

Yet no one was keeping track of the information or the evidence.

Investigators kept randomizing patients to thrombolytic therapy versus placebo,

or nothing.

And look how many more patients were randomized.

So by the 1990s, 70 randomized controlled trials had been done with over 48,000 participants.

If we were keeping track of the information,

we would have known the answer by the early 70s.

Yet we randomized another 45,000 participants,

telling patients, well, we don't know whether thrombolytic therapy is effective.

That's why we're randomizing you to thrombolytic therapy or placebo.

Imagine the patients who were randomized to placebo: their participation was wasted, because we could have known the answer as early as the 70s.

So if you look at all of these estimates,

the effect size stays the same, it's just getting more precise.
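The "throwing a study into the pot" reading of a cumulative meta-analysis can be sketched as re-running the pooled estimate each time a trial is added. The trials below are hypothetical; the years, log odds ratios, and standard errors are invented to mimic the pattern described, not the actual thrombolytic-therapy trials:

```python
import math

# Hypothetical trials in chronological order: (year, log odds ratio, SE).
trials = [
    (1960, -0.70, 0.60),
    (1963, -0.65, 0.45),
    (1969, -0.55, 0.30),
    (1973, -0.50, 0.15),
    (1986, -0.45, 0.08),
]

weight_sum = 0.0
effect_sum = 0.0
for year, log_or, se in trials:
    # Add the new trial to the pool and re-estimate (inverse-variance).
    w = 1 / se**2
    weight_sum += w
    effect_sum += w * log_or
    pooled = effect_sum / weight_sum
    half_width = 1.96 / math.sqrt(weight_sum)
    print(f"{year}: pooled OR = {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(pooled - half_width):.2f} "
          f"to {math.exp(pooled + half_width):.2f})")
```

With numbers like these, the early pooled intervals cross the null value of one, but once enough participants accumulate the upper bound drops below one and stays there, which is exactly the point of tracking the evidence as it accrues.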

It wasn't until a meta-analysis was done that thrombolytic therapy was mentioned as beneficial in a textbook.

So on the right-hand side of this slide you see a grade, which shows whether the textbooks or review articles recommended thrombolytic therapy.

It was not until the meta-analysis was done that thrombolytic therapy was mentioned in the textbooks.

So this example shows you how behind we were,

in terms of not keeping track of the evidence, and the potential

harm that could happen to our patients if we were not keeping track of our evidence.

Where does systematic review, or evidence synthesis, fit into the knowledge translation process?

We have evidence generation, which is the clinical trials and observational studies.

Those are the primary research.

Systematic reviews then summarize that information.

And the systematic reviews then feed into clinical policies, for example, into practice guidelines.

When the evidence is integrated with clinical expertise and patient values, that's how we practice evidence-based health care.

We talked about what systematic reviews and meta-analyses are.

And by using a lot of examples, I hope I have convinced you that it's important

to synthesize information in an ongoing fashion.

It is also important that the way you synthesize information follows prespecified methods, and that the methods are transparent and reproducible and will help you reach valid conclusions.