An introduction to the statistics behind the most popular genomic data science projects. This is the sixth course in the Genomic Big Data Science Specialization from Johns Hopkins University.


From the course by Johns Hopkins University

Statistics for Genomic Data Science



From the lesson

Module 2

This week we will cover preprocessing, linear modeling, and batch effects.

- Jeff Leek, PhD, Associate Professor, Biostatistics

Bloomberg School of Public Health

In genomics it's often the case that you don't want to fit just one regression model. You want to fit many regression models, so I'm going to show you a little bit about how to do that.

So I'm going to, again, set everything up just like I like it in terms of plotting, and then I'm going to load the data set and the packages that we need. In this case we need these packages; in particular, the limma and edge packages are two packages that we'll be talking about a lot during this lecture. So now I'm going to load the Bottomly data, which is a data set that compares two different mouse strains.

So we're going to talk a little bit about how you fit those regression models, but the first thing that I'm going to do is remove the lowly expressed genes, just like I have in the past. I'm going to take the log transform and remove those that aren't above a particular threshold, so I'm left with 1,000 genes. So I want to fit 1,000 regression models.
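The setup described above might look like the following sketch, assuming the Bottomly ExpressionSet from the ReCount project (the data set used throughout this specialization); the URL and filtering threshold here are illustrative, not prescribed by the lecture.

```r
# Load the Bottomly ExpressionSet from the ReCount project
# (URL and threshold are assumptions for illustration).
library(Biobase)

con = url("http://bowtie-bio.sourceforge.net/recount/ExpressionSets/bottomly_eset.RData")
load(file = con)
close(con)

pdata = pData(bottomly.eset)             # phenotype data: strain, lane number, etc.
edata = as.matrix(exprs(bottomly.eset))  # gene expression counts

# Log-transform the counts and remove lowly expressed genes
edata = log2(edata + 1)
edata = edata[rowMeans(edata) > 10, ]
dim(edata)   # roughly 1,000 genes should remain
```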

So the first thing that you can do is fit this using the standard lm.fit function in base R. The thing I need to do is build a model matrix, and it's going to be the same one for every case. This is a common situation in genomics: you have a common design matrix that's the same for every single gene, basically because you're going to fit the exact same model for every single gene. And so then you can fit all of those models with the lm.fit function; you just pass it the model matrix that you created and the expression data set that you have. The result is a set of coefficients, residuals, and effects from all of those regression models.
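A minimal sketch of that many-models-at-once fit, assuming the `pdata` and `edata` objects from the setup above:

```r
# One design matrix shared by every gene
mod = model.matrix(~ pdata$strain)

# lm.fit expects responses as columns, so transpose the expression matrix:
# each gene becomes one response column, fit simultaneously
fit = lm.fit(mod, t(edata))
names(fit)               # coefficients, residuals, effects, ...

# Coefficients for the first gene live in the first COLUMN
fit$coefficients[, 1]

# Compare against fitting that single gene's model on its own;
# the estimates should match exactly
lm(as.numeric(edata[1, ]) ~ pdata$strain)$coefficients
```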

And so if I look at the coefficients from the first model fit, you can see that the coefficients matrix is actually very big, because it holds the coefficients from many different model fits. But if you look at the first one, and then you compare that to fitting the linear model to that gene's data individually, you can see that you get exactly the same coefficient estimates, just like you'd expect. One approach fits one model at a time; the other fits many models at a time.

So the nice thing about lm.fit is that it's much, much faster. If you wanted to fit every single model with lm, it would take a long time to do that model fitting, and it turns out that you get basically the same thing from either one. So lm.fit is much faster when you're dealing with multiple regressions.

So then, the next thing that I'm going to do is look at the distributions. Since I've fit many regression models, I can look at the distribution of coefficients to see if I see anything interesting or funny about them. First I can look at the distribution of the coefficients for the intercept. You can see that they tend to be positive here, and that's because we're modeling this sort of count data. The second thing that you can do is plot the strain coefficients, and you can see those tend to be centered around zero. The further from zero they are, the more likely there is to be an association with strain.

The other thing that you can do, using the same data set, is plot the residuals from the first model fit, or the residuals from the second model fit, and so forth. So you can start to drill down into the different components of the data set, just like you could with any individual model fit.
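These diagnostics could be sketched as below, continuing from the `lm.fit` result above (plot colors and bin counts are arbitrary choices):

```r
# Distribution of fitted coefficients across all 1,000 models:
# row 1 holds the intercepts, row 2 the strain coefficients
par(mfrow = c(1, 2))
hist(fit$coefficients[1, ], breaks = 100, col = 2,
     xlab = "Intercept")            # mostly positive, since this is count data
hist(fit$coefficients[2, ], breaks = 100, col = 2,
     xlab = "Strain coefficient")   # centered around zero

# Residuals for the first and second model fits (one column per gene)
par(mfrow = c(1, 2))
plot(fit$residuals[, 1], col = 2)
plot(fit$residuals[, 2], col = 2)
```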

You can also fit adjusted models using lm.fit. Here I'm going to create an adjusted version of the model matrix, where I've now adjusted for lane number. Then I can fit an adjusted model by passing lm.fit the adjusted model matrix and the same expression data. Now it fits another set of regression models, and we can look at the coefficients again for the first model fit. You can see that there's an intercept term and the strain term, but now we've got all these adjustment variables as well. It's a very fast way to fit multiple models in R.
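The adjusted fit might look like this sketch; the lane variable name `pdata$lane.number` is assumed from the Bottomly phenotype data:

```r
# Add lane number to the design as an adjustment variable
mod_adj = model.matrix(~ pdata$strain + as.factor(pdata$lane.number))

# Same expression data, new design matrix
fit_adj = lm.fit(mod_adj, t(edata))

# First model fit: intercept, strain term, plus the lane adjustment terms
fit_adj$coefficients[, 1]
```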

If you want an approach that actually deals with moderating the statistics, you can do that with limma. It also has a fast model fitting approach. So here we're going to use the limma package to fit multiple regression models using the lmFit command. This lmFit is a little bit different: it's not lm.fit, it's lmFit, and it gives you a slightly different result as well. You still get the coefficients and the residuals, but here you also get some other information that you're potentially going to use later on, when computing shrunken versions of the test statistics. And so here we can see the coefficients for the first model. When you do lmFit from the limma package, the coefficients for the first model are in the first row. Remember that with base R's lm.fit, they were in the first column. So it's the first column versus the first row; that's a little bit different. But otherwise it is basically the exact same thing as you would get, except the coefficients are a little bit more shrunk towards each other.
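A sketch of the same adjusted fit with limma's lmFit, reusing `mod_adj` from above; note that lmFit takes genes in rows, so no transpose is needed:

```r
library(limma)

# lmFit expects genes in rows and samples in columns, unlike lm.fit
fit_limma = lmFit(edata, mod_adj)
names(fit_limma)   # coefficients, residuals, plus extra info used
                   # later for moderated (shrunken) test statistics

# First model's coefficients are in the first ROW here,
# versus the first COLUMN with base R's lm.fit
fit_limma$coefficients[1, ]
```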

So, the other thing that you can do is use the edge package. This package can be useful if you're working with people who don't necessarily have as good a working knowledge of model matrices and linear algebra, because you can just pass it the data set, tell it what the group variable is (the variable you care about), and then tell it what variables to adjust for, in this case the lane variable. It then creates an edge study object, which you can fit the models to directly. And if you look at that object, it'll tell you what models it fit, what the coefficients are, and everything like that. You can then similarly extract the beta coefficients for one particular set of models. Here we're going to extract them using an at sign: it's an S4 object, which is a little bit different, so you have to use an at sign instead of a dollar sign. This will give you the coefficients for the first model fit.
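That workflow might be sketched as follows; the `build_study`/`fit_models` interface and the `beta.coef` slot are assumed from the edge package's Bioconductor vignette:

```r
library(edge)

# No model matrix needed: just name the group variable of interest
# and the adjustment variable
edge_study = build_study(data = edata,
                         grp = pdata$strain,
                         adj.var = as.factor(pdata$lane.number))

# Fit the models directly to the study object
fit_edge = fit_models(edge_study)
summary(fit_edge)   # shows which models were fit and their coefficients

# edge returns an S4 object, so slots use @ rather than $
fit_edge@beta.coef[1, ]   # coefficients for the first model fit
```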

And then similarly, you can get those same coefficients from the limma model fit, and you can see that the one difference is that they're not necessarily in the same order. So you can see, for example, that the strain estimate here is the last estimate there. Edge tends to put the adjustment covariates first, before the other covariates, because it's going to drop that group variable when doing model comparisons later.

So those are three different ways that you can fit many regressions quickly in R.
