In this video, we'll talk more about how to estimate those response propensities for non-response adjustment. One general procedure is this: we do a binary regression, regressing the response indicator on covariates. Logistic regression is the typical choice, but you could use other link functions, like probit or complementary log-log; they'll all tend to give similar predicted probabilities of response. To fit this model we need to know, for each unit, whether it responded or not, and we need the covariate values on both respondents and non-respondents — no missing data there.

So we estimate propensities for both the respondents and the non-respondents, and then we form groups. We sort the respondents and non-respondents from low to high on the estimated response propensity and divide them into groups. The lowest-propensity group contains the respondents and non-respondents with the lowest propensities, and so on up through the high end, so within each group we've collected units that are similar in the sense of having similar response propensities. The nice thing about this is that we've created one sorting variable that is an amalgamation of the different covariates, so it's a nice summary method.

So you divide the file into groups, as I said. Five is a popular number of groups, but it doesn't have to be five. If you've got a big sample, you can certainly create more than that, and doing so will make more homogeneous groups, in the sense that the range of propensities within each group is small.

You may recognize this as the same sort of thing people use for analyzing observational data, where ideally you'd like to have had a designed experiment that randomized people to be treated or not. But you've just got found data, and you want to estimate a kind of pseudo-probability of assignment to treatment or control. It's the same sort of idea.
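The procedure just described — fit a binary regression of the response indicator on covariates, then sort the whole file by estimated propensity and cut it into groups — can be sketched as follows. This is a minimal illustration on simulated data (the covariate, sample size, and single-covariate logistic model are all assumptions for the example, not from the lecture), with the logistic fit done by a plain Newton-Raphson loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sample: one covariate x and a 0/1 response indicator r.
# (Illustrative data only -- not from the lecture.)
n = 1000
x = rng.normal(size=n)
true_p = 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * x)))
r = rng.binomial(1, true_p)

# Fit a logistic regression of r on x by Newton-Raphson.
X = np.column_stack([np.ones(n), x])      # intercept + covariate
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))   # current fitted propensities
    W = p * (1.0 - p)                     # logistic variance weights
    grad = X.T @ (r - p)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

# Estimated response propensities for ALL units,
# respondents and non-respondents alike.
phat = 1.0 / (1.0 + np.exp(-X @ beta))

# Sort the whole file by phat and split it into 5 groups
# of (nearly) equal size, from low to high propensity.
order = np.argsort(phat)
groups = np.array_split(order, 5)         # index sets for each class
```

In practice you'd use a fitted-model routine (e.g. statsmodels or R's glm) rather than hand-coding the iterations, but the grouping step — sort on the estimated propensity, then cut — is the same.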
This idea is just playing off the developments by Rosenbaum and Rubin for observational data analysis. The general procedure, then, is to use one non-response adjustment within each group. By using a single adjustment per group, it smooths out the effects of any extreme propensities produced by the binary regression, which is usually thought of as a good thing because we don't necessarily trust the model completely. So if we form five classes, we've got just five non-response adjustments.

Within a cell, what do we use? We could use the unweighted response rate, the survey-weighted response rate, the average of the estimated propensities, or the median propensity. All of those will be quite similar if the range of propensities in each cell is small — one reason why creating a lot of cells is a good idea if you've got the sample to support it. Later on, we'll see some software that will actually do this for you.
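The four candidate adjustments within a cell can be sketched like this. The data here are hypothetical (uniform propensities, random base weights — none of it from the lecture); the point is just how each per-class quantity is computed, and how a respondent's weight gets divided by the class adjustment:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative inputs (hypothetical, not from the lecture): estimated
# propensities, 0/1 response indicators, and base design weights.
n = 500
phat = rng.uniform(0.2, 0.9, size=n)
r = rng.binomial(1, phat)
w = rng.uniform(1.0, 3.0, size=n)        # base (design) weights

# Assign each unit to one of 5 classes by sorted propensity.
cls = np.empty(n, dtype=int)
for g, idx in enumerate(np.array_split(np.argsort(phat), 5)):
    cls[idx] = g

# Four candidate adjustments within each class:
# unweighted rate, weighted rate, mean propensity, median propensity.
rates = np.zeros((5, 4))
for g in range(5):
    m = cls == g
    rates[g, 0] = r[m].mean()                          # unweighted response rate
    rates[g, 1] = (w[m] * r[m]).sum() / w[m].sum()     # weighted response rate
    rates[g, 2] = phat[m].mean()                       # average propensity
    rates[g, 3] = np.median(phat[m])                   # median propensity

# Non-response-adjusted weight for respondents: base weight divided by
# the class adjustment (here, the unweighted response rate).
# Non-respondents drop out (weight 0).
adj_w = np.where(r == 1, w / rates[cls, 0], 0.0)
```

If the cells are narrow, the four columns of `rates` come out close to one another, which is the point made above: with a small within-cell propensity range, the choice among them matters little.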