
Hi! In this video, we will discuss how to

carry out a matched propensity score analysis in R.

This one involves several steps including how to fit a propensity score model in R,

how to actually match on

the propensity score and then how to analyze the data after matching.

For the example data,

we will use the right heart catheterization data,

which is publicly available.

So this involves ICU patients in five hospitals.

The treatment is right heart catheterization,

which we will call RHC.

The outcome is death, yes or no.

The confounders involve demographics,

disease diagnoses and so on.

And there are a couple thousand subjects in each group, and

we've included on the website the actual article that we will discuss here.

So, you would begin by loading the packages that you want to use,

reading in the data and viewing the data and in this analysis,

we'll use both the tableone package and the MatchIt package.

The data itself has a lot of character variables.

So this step just involves converting them to

numeric variables and then we'll create the data set that we'll use.
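A conversion like that might look like this sketch; the variable name and values here are hypothetical, not taken from the actual RHC data.

```r
# A hypothetical character variable as it might appear in the raw data.
sex <- c("Female", "Male", "Male", "Female")

# Convert it to a 0/1 numeric indicator for use in the model.
female <- as.numeric(sex == "Female")
female   # 1 0 0 1
```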

So this, what I'm calling mydata,

is a collection of covariates along with treatment and outcome.

So first, we'll begin by fitting the propensity score model itself.

So, let's imagine that we're interested in

a logistic regression propensity score model.

So, we can use the glm function here,

where treatment is actually our outcome.

So in a propensity score model,

treatment is the outcome, and then we list all the covariates that we want to control for.

We tell it family = binomial.

So this is telling R that our outcome is binary and by default,

it's going to use a logit link,

so it's going to carry out a logistic regression.

And I tell it to use my data.

And, you might be interested in just looking at some summary statistics from the model.

So summary(psmodel),

what that will do is,

show us coefficients and P-values and so on.

And this would just give you an idea about which variables were predictive of treatment.

So this might be of

interest, to get a better sense of who actually receives the treatment.

What is different about them?

It also can be used as a check to make sure

that what you're doing is sort of sensible, has face validity,

that there's not any coding errors for example,

because you might have prior knowledge that certain variables will likely be

associated with a much higher probability of

treatment and you would want to kind of confirm that that's the case.

To actually create the propensity score,

there is one additional step.

So we fitted the model, which I called psmodel.

Now, if we want to actually create the propensity score itself,

I just say, I am naming it pscore.

Then, I'm saying use the model that we just fitted and then extract the fitted values,

which are just the predicted values.

So this is all you actually have to do to get a propensity score for each person.
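Putting those two steps together, here is a minimal sketch of fitting the propensity score model and extracting the scores. The data are simulated, and the covariate names (female, age) are hypothetical stand-ins for the RHC variables.

```r
# Simulated stand-in for the RHC data; female and age are hypothetical covariates.
set.seed(123)
n <- 500
female <- rbinom(n, 1, 0.5)
age <- rnorm(n, mean = 60, sd = 10)
treatment <- rbinom(n, 1, plogis(-0.3 * female + 0.05 * (age - 60)))
mydata <- data.frame(treatment, female, age)

# Fit the propensity score model: treatment is the outcome;
# family = binomial with the default logit link gives logistic regression.
psmodel <- glm(treatment ~ female + age, family = binomial, data = mydata)
summary(psmodel)   # coefficients, p-values, and so on

# The propensity score is just the fitted (predicted) probability of treatment.
pscore <- psmodel$fitted.values
```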

And here's an output from this particular propensity score model.

So, this is just showing you the relationship

between these covariates and the probability of treatment.

So as an example,

if you look here at the variable female, in this case

the coefficient is negative.

So women are less likely to be treated than men, with a significant p-value, and so on.

So this is just something you might be interested in, to better

understand who is more likely to get treated.

And now, we can look at

the actual distribution of the propensity score and this is pre-matching.

So before we have matched, we have calculated the propensity score and now

we'll look at the probability of treatment.

And the main thing we're looking for here is overlap.

What we see is that there's a lot of overlap between these two distributions.

And so that's really the main thing:

everything looks good from this plot.

We don't see any cases where at the tails of the propensity score,

either everyone's treated or no one's treated.

There seems to be overlap everywhere.
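One simple way to back that visual check with numbers is to compare the range of the propensity score in each group. This sketch uses simulated data, so the names and values are illustrative only.

```r
# Simulated data for illustration.
set.seed(123)
n <- 500
x <- rnorm(n)
treatment <- rbinom(n, 1, plogis(0.5 * x))
pscore <- glm(treatment ~ x, family = binomial)$fitted.values

# Numeric check: the propensity score ranges of the two groups should overlap.
tapply(pscore, treatment, range)

# Visual check: overlaid histograms of the propensity score by group.
hist(pscore[treatment == 1], col = rgb(1, 0, 0, 0.4), xlim = c(0, 1), freq = FALSE,
     main = "Propensity score by treatment group", xlab = "Propensity score")
hist(pscore[treatment == 0], col = rgb(0, 0, 1, 0.4), freq = FALSE, add = TRUE)
```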

So now, we'll use this package called MatchIt.

So this is one way that you can carry out propensity score matching:

with this package.

So our output is going to be called m.out.

We're going to use MatchIt.

And what we're doing here is,

with MatchIt you actually wouldn't have had to fit a propensity score model first,

I just wanted to sort of illustrate how you would do it if you want to

carry it out directly,

but in MatchIt they'll actually do that for you.

So, you begin by just giving it the formula, where

the outcome, as in any propensity score model, is the treatment,

and then you list all of your variables.

You tell it your dataset, and the method argument

here specifies what matching method you want to use.

So I'm using nearest neighbor which is greedy matching,

you could also put optimal there, but you would only want to use

optimal if you have a reasonably sized dataset, since optimal matching is computationally expensive on large datasets.

So, what this is going to do is basically going to calculate

the propensity score and then do matching.
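To make the "nearest neighbor" idea concrete, here is a base-R sketch of greedy 1:1 matching without replacement. This illustrates the algorithm only; it is not MatchIt's actual implementation, and the data are simulated.

```r
# Simulated data; treatment is made the minority group so every treated
# subject can find a control.
set.seed(123)
n <- 200
x <- rnorm(n)
treatment <- rbinom(n, 1, plogis(x - 1))
pscore <- glm(treatment ~ x, family = binomial)$fitted.values

treated   <- which(treatment == 1)
controls  <- which(treatment == 0)
available <- rep(TRUE, length(controls))
pairs <- matrix(NA_integer_, nrow = length(treated), ncol = 2,
                dimnames = list(NULL, c("treated", "control")))

# Greedy matching: walk through the treated subjects one at a time and give
# each the closest control (on the propensity score) that is still unused.
for (i in seq_along(treated)) {
  d <- abs(pscore[treated[i]] - pscore[controls])
  d[!available] <- Inf          # used controls cannot be matched again
  j <- which.min(d)             # index of the closest remaining control
  pairs[i, ] <- c(treated[i], controls[j])
  available[j] <- FALSE
}
head(pairs)
```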

And I'm going to summarize the output and I'm also going to create two plots.

So, this MatchIt package makes it very easy to create some plots to look at balance.

So I previously just calculated the propensity score directly and created my own plot.

But if you use the MatchIt package,

it has some built-in features that allow you to do that rather easily.

I picked two particular types of plots,

jitter and histogram and I'll show you what those are.

So, this is the jitter plot.

I think this is quite useful in general.

So, you'll notice it's broken up

into four different sections: we have unmatched treated,

we have matched treated,

matched controls and unmatched controls.

And one thing you'll notice is that there actually are no unmatched treated.

So that's just empty: everybody in the treated group

was matched to somebody in the control group.

Now, this part here.

These are the actual treatment and control subjects who were matched.

And what you want to look at there is just make

sure that these distributions look similar.

So this is again, propensity score distribution

and these should look very similar because now we've matched.

And in fact, I think they do look quite similar:

if you look at these plots,

they are dense here in the middle and sparse

out in the tails, and they look quite similar.

And this lower part here is quite interesting.

These are the controls for whom there were no matches.

Right? So if you look, there's this one person out here

for whom there was just

nobody in the treated group who was a good match.

So there was a treated person with a fairly high propensity score

who conceivably could have been matched to that person, but they didn't;

they got matched to somebody over here.

But what you see is that the bulk of the people in the control group who didn't

get matched were over here.

So these are people with low values of the propensity score.

And so, this makes a lot of sense because the propensity score distribution for

controls was sort of skewed in that direction

where they tended to have smaller values of propensity score.

So once you match,

you're going to end up excluding a lot of the people in that tail,

because that tail was over-represented, essentially.

You can think of controls who have

small propensity scores as being over-represented, and therefore,

when we match, a lot of them get left out.

So this is very useful for telling us who got included in the match and who was excluded.

And this is the histogram output.

That's an automatic feature for MatchIt.

And "raw" here means before matching:

you can look at the propensity score distributions, and you'll see that for the controls,

there's more weight out here

compared to the treated, which is what we'd expect.

On the matched side,

we look at the propensity score distributions and see that their shapes

are a lot more similar than prior to matching.

And I should note that,

I just used the default plot, but you'll notice that this axis here

goes up to two whereas this one goes to three, and they would

look more alike if the one over here also went to three.

So that's why they probably appear a little more different than they actually are.

So, if I was going to create

a publishable version of the plot, I would have changed the axis there.

But I want to illustrate what sort of the automatic kind of output looks

like and how easy it is to get useful information from a package like this.

So, I want to also illustrate how you can match with and without a caliper.

So, now I'm going to illustrate doing greedy matching,

where we take a logit transformation of

the propensity score and I'll do it with and without a caliper.

And, this time I'm going to use the Matching package rather than MatchIt.

And the main reason for that is,

just to illustrate how you can do this with different packages.

So you can see that a lot of these packages are fairly easy to use.

So, for the Matching package, I just tell it what the treatment is,

I tell it we're doing one-to-one matching, and now in this case,

it's not going to calculate the propensity score for me,

I'm going to have to tell it what the propensity score is.

So I say, match on logit of the propensity score.

So pscore is something I've already created, and I say take the logit transformation.

So it's just going to do the logit transformation of the propensity score.

replace = FALSE means

that I'm not going to allow controls to be re-matched:

once they're matched, they're removed from the dataset.

And now, this is propensity score matching where

I haven't given it a caliper,

so it's not going to use a caliper.
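The logit transformation used here is just log(p / (1 - p)); a quick sketch with example values (base R's qlogis does the same thing):

```r
# Logit transformation of the propensity score; matching on the logit scale
# spreads out scores that are near 0 and 1.
logit <- function(p) log(p / (1 - p))

pscore <- c(0.2, 0.5, 0.8)   # example propensity scores
logit(pscore)                # roughly -1.386, 0, 1.386
```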

And then I'm going to create a table one using the table one package.

So this is matching on the logit of the propensity score without a caliper,

and you'll see that we've matched 2184 controls and 2184 treated subjects.

And we can look at standardized differences and we see that some of

the standardized differences are a little bigger than what we would like.

Some of them are greater than 0.1.

They're not bad, but they are a little greater than 0.1.

So, at this point you might not be totally satisfied with the matching,

because we would really like all of these to be less than 0.1.

But as one additional step, we could also carry out

an outcome analysis on that matched data set.

And if you did that, you'd get a point estimate and

a confidence interval using a paired T-test.

But as I mentioned, maybe we're not happy with that previous match because you know,

some of those standardized differences were a little larger than we would like.

So, here I'm going to use a caliper.

So I'm going to use the Matching package again.

I'm going to tell it to match on the logit of the propensity score.

But now, I'm saying use the caliper.

And the important thing to note here is that,

this value 0.2, this means 0.2 standard deviation units.

And so, because I told it to match on the logit propensity score,

a 0.2 means 0.2 times the standard deviation of logit of the propensity score.

So the caliper isn't actually 0.2;

it's 0.2 times the standard deviation of the logit of the propensity score.
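So the actual caliper width, in logit-of-propensity-score distance, can be computed as in this sketch (the scores here are simulated, purely for illustration):

```r
# The caliper of 0.2 is in standard-deviation units of the matching variable,
# here logit(pscore), so the actual maximum allowed distance is:
set.seed(123)
pscore <- runif(100, min = 0.05, max = 0.95)   # stand-in propensity scores
logit.pscore <- log(pscore / (1 - pscore))
caliper.width <- 0.2 * sd(logit.pscore)
caliper.width   # maximum allowed |logit difference| within a matched pair
```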

All the other steps are the same.

So using a caliper is relatively straightforward: you just tell it the value.

And now what we see is that on our table,

the first thing you might notice is that there are fewer subjects who were matched.

So, we have 1900 matched pairs.

So that means there were some subjects that we were not

able to match and that's because of the caliper.

So previously, we matched them because we didn't use the caliper.

But now that we've used the caliper, we identified: "oh,

some of these people that we were going to match,

we're now not going to match, because their distance was too large."

It was larger than we were willing to tolerate.

So these should be better matches now because we are forcing good matches in a sense,

we're not allowing bad matches.

And so what you see now is if you look at the standardized differences,

everything is less than 0.1.

So just by having this caliper and preventing

a relatively small number of matches,

we're able to have much better balance here.

So this is what the caliper should accomplish.

You'll notice that there are better matches, meaning

standardized differences are smaller.

But, the trade-off is there's fewer matched pairs.

So our dataset is smaller.

So, we would expect to lose a little bit of efficiency by having a smaller sample size,

but we should have less bias because we have better matches.

We could then again carry out an outcome analysis;

I'm just doing a simple paired t-test: point estimate, 95% confidence interval, p-value.
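That outcome analysis can be sketched like this; the outcome vectors here are simulated, and in the real analysis they would be the death indicators for the treated and control member of each matched pair.

```r
# Simulated binary outcomes for the matched pairs (hypothetical values).
set.seed(123)
n.pairs <- 1900
y.treated <- rbinom(n.pairs, 1, 0.40)
y.control <- rbinom(n.pairs, 1, 0.35)

# Paired t-test on the within-pair differences.
fit <- t.test(y.treated, y.control, paired = TRUE)
fit$estimate   # point estimate of the risk difference
fit$conf.int   # 95% confidence interval
fit$p.value
```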

And we can summarize: without the caliper,

we found 2184 matched pairs, and this was

the estimated causal effect and confidence interval; with the caliper,

we had 1900 matched pairs, and this was the risk difference and confidence interval.

So in this case, the actual outcome analysis looks pretty similar.

The general conclusion will be the same,

it didn't end up making a big difference in the outcome analysis.

But this is something that you could do:

change the caliper to enforce better matching, and then see if your conclusions change.

So, here we see that even when we get

a little pickier about who we match and end up matching fewer people,

the conclusion basically would stay the same.