The core is, of course, the main questionnaire, and the whole course was about getting a good main questionnaire going. But there's a little bit of structure in the main questionnaire that we do want to talk about, and then also what comes at the end of the questionnaire. The first piece to think about is question order. Questions fielded at the beginning of the questionnaire should be relevant and easy. You got the respondent interested in participating because you mentioned the survey topic and they decided to take part, so you want to start with a relevant question that's easy to answer, even if it is not a central question for your analysis. Even if you end up not using this question, it's still good to have something respondents can easily answer at the beginning. It is tempting for any novice in questionnaire design to start the questionnaire with the demographic questions. I'm not sure what the psychology is behind that, but without fail, every single class project I've seen so far starts with demographics if the students do not know the order a questionnaire should be in. Not quite "What's your name?", since that would be confidential information, but "What's your age?", "What's your gender?", or "What's your major?", if it's a school project. So don't do that; start with a relevant and easy question. Then the questions should be grouped into modules, and these modules should have a logical sequence for the respondents. In a way, they're ordered from most salient to least salient with respect to the survey topic. However, there are divergences from that, because sensitive questions should be moved towards the end. So if your key research topic is a sensitive question, there is a tension here that you have to keep in mind.
You also have to make sure that the order of the questions does not produce any bias; we discussed context effects in questions throughout the course, so make sure you review those segments if you don't remember these pieces. When you structure your questionnaire, always number your questions. It's okay to number them sequentially, and it's okay to number them within a module; the point is to make it easy to look at the questionnaire, look at the data set, move between one and the other, and give the analyst an idea of the logical order of the questions. Then you need numbers for your response options. It is interesting how different survey companies have different defaults. For a long time it was popular to use 97, 98, 99, or 9999 as codes for "don't know" or refusal. I personally prefer, if the real answer categories do not have numbers below zero, to use negative numbers for "refused," "don't know," or "does not apply." The reason is that later on, when you analyze your data, you can assign missing-value codes to a whole string of variables without having to look up the numeric range of the possible answers for each one. You might have questions in there, for example age, weight, or days since something happened, that could hit numbers like 98 or 999, and then you have to be much more careful in how you code and assign missing-value codes. It is also helpful for a coder, if the answers are not filled in automatically as they would be in web surveys but entered by a coder after looking at the questionnaire, that you are consistent in how "yes" and "no" are coded, so that whoever puts in these numbers doesn't accidentally switch them around.
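The negative-code convention just described can be sketched in a few lines. This is a minimal illustration, not any particular survey package; the variable names and the specific codes (-7, -8, -9) are hypothetical:

```python
# Hypothetical missing-value codes: every negative number marks a
# non-answer (-7 = does not apply, -8 = don't know, -9 = refused).
responses = {
    "age":    [34, -9, 52],    # age can legitimately reach 98 or 99,
    "weight": [150, 210, -8],  # so codes like 98/999 would be ambiguous
    "q12":    [1, 2, -7],
}

def to_missing(values):
    """Map every negative code to missing in one pass. Because all real
    answers are >= 0, no per-variable range check is needed."""
    return [None if v < 0 else v for v in values]

clean = {name: to_missing(vals) for name, vals in responses.items()}
print(clean["age"])  # [34, None, 52]
```

The same one-line rule ("anything below zero is missing") works for every variable at once, which is exactly the convenience you lose with codes like 98 or 999.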
So that is a thing to keep in mind when you design the questionnaire and choose numbers for response options: match them up into an easily codeable questionnaire. There are transitions that you have to keep paying attention to. There's a lot of text on this slide, but it's there so you can see some good transitions that are in place. Often they are as easy as "Now I'd like to ask you...," followed by a little bit about what is going to happen next. If there are additional materials that respondents use, like a booklet or a show card, attention can be drawn to those as people move on to a different topic. In general, it's good not to appear to be jumping from topic to topic and back. Sometimes people overdo it with these transitions, and it is surprising how much tolerance respondents have, but too much text is always bad. In particular, in a self-administered survey you probably need fewer transitions than in an interviewer-administered one, just because interviewer and respondent have somewhat of a conversation, and in a conversation you wouldn't just jump around either. Speaking of jumping around: sometimes you do jump around in a questionnaire, because you have a topic that doesn't apply to a respondent, and then you move on. We talked about these filter questions. Now, it is very important that you never leave it up to the interviewer and respondent to decide where to go. You want clear branching instructions; otherwise you will get two types of error. A so-called error of commission, where the respondent fails to branch when he or she is instructed to do so, and an error of omission, where the respondent skips even though they shouldn't. The first one is frustrating for the respondent: they get confused, and they get questions that don't apply to them.
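The clear branching instructions just described can be sketched as an explicit skip map, so that no routing decision is left to the interviewer or respondent. This is a minimal sketch; the question IDs, answers, and default ordering are all hypothetical:

```python
# Explicit skip rules: (question, answer) -> next question.
# The instrument, not the respondent, decides where to branch.
SKIP_RULES = {
    ("Q5", "no"): "Q10",  # filter: not employed -> skip the job module
}

DEFAULT_ORDER = ["Q5", "Q6", "Q7", "Q8", "Q9", "Q10"]

def next_question(current, answer):
    """Follow an explicit skip rule if one applies; otherwise proceed
    to the next question in the default sequential order."""
    if (current, answer) in SKIP_RULES:
        return SKIP_RULES[(current, answer)]
    return DEFAULT_ORDER[DEFAULT_ORDER.index(current) + 1]

print(next_question("Q5", "no"))   # Q10 -- the job module is skipped
print(next_question("Q5", "yes"))  # Q6  -- the job module is asked
```

In a web survey this routing happens automatically, which is why commission and omission errors are mainly a paper-questionnaire problem; on paper, the arrow and instruction formats in the study discussed next play the role of this skip map.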
And it's also annoying for an analyst, because they don't necessarily know whether the filter question was answered correctly, or whether whatever the respondent ended up putting into the follow-up question really belongs to that respondent's answers. Errors of omission can't be repaired even with a judgment call, because the respondent skipped over a section of the questionnaire that they shouldn't have skipped. The result is missing data, and the only thing you might have left is imputing the values that should be there. In general, errors of commission are more common than errors of omission; at least, that's the result of a research study by Claire Redline and Don Dillman published in 2002. Here's a snippet from that study. You see the control condition and then two other conditions: one in which they tried to prevent this error, and one in which they tested how the skipping instruction can be enhanced, with an additional arrow here that leads the respondent to the next question. And here are the results. In the control condition, which is a typical format for a self-administered questionnaire, you have about 20% commission errors and 1.6% omission errors. In the other two conditions, they were able to significantly lower the commission error, albeit at the expense of some omission errors. So again, a trade-off decision just like we saw before. Finally, after the main part of the questionnaire, you have your demographic questions. There's often a transitional introduction to those: "Lastly, some questions for classification purposes," sometimes called "statistical purposes" only. You can look at all the questionnaires we pointed you to at the beginning of this class to see what kind of transitions would work for you here. Keep in mind, you do need the demographic questions, and they should be in line with your weighting plan and your analysis plan.
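Why the demographic questions must line up with the weighting plan can be illustrated with a toy poststratification computation. All categories and numbers here are hypothetical; the point is that each weight divides an external population share by the corresponding sample share, which only works if the categories match the external source exactly:

```python
# Toy poststratification sketch (hypothetical numbers). The age
# categories must match the external source, e.g. Census data,
# cell for cell -- otherwise no weight can be computed.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_counts    = {"18-34": 120,  "35-54": 200,  "55+": 180}

n = sum(sample_counts.values())  # 500 respondents in total

# Weight per cell = population share / sample share.
weights = {cell: population_share[cell] / (count / n)
           for cell, count in sample_counts.items()}

print(round(weights["18-34"], 3))  # 1.25 -- this group is underrepresented
print(round(weights["35-54"], 3))  # 0.875 -- this group is overrepresented
```

If your questionnaire had used, say, an 18-29 bracket while the external source uses 18-34, no such cell-by-cell division would be possible; that is the trouble the lecture warns about.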
So this thinking ahead is really necessary, and you should always check that you have the right questions to match up with whatever external source you want to use for weighting. If you want to use your demographic questions for poststratification against data collected by the U.S. Census Bureau, your demographic questions had better translate back to those Census questions; otherwise you will be in trouble. That also means you should choose your question wording based on the questions you will be using for comparison in weighting, and of course so that they match your substantive interests, which is reflected in the analysis plan. And then, at the end, you should thank your respondent. Make sure you never forget that. It's often also good to leave a little bit of open space for additional feedback, anything else they want to tell you. At the end you'll often see contact information for the project; again, the local IRB information should be either there or at the front. Additional contact information from the respondent is sometimes necessary if you plan to follow up for a quality-control check, if you want to mail them something you promised, a conditional incentive for participating in the survey, for example, or if it's part of a panel study and you want to track respondents over time. Remember to ask for cellphone numbers, and maybe for cellphone numbers of family members and the like; it can get difficult to find respondents a year or two after your study. Now, in the online discussion board you submitted some questions, and one of them pointed at the tension between salient questions belonging at the front while objectionable or sensitive questions should go at the back. Balancing these criteria is a challenging piece, and one way out can be mode decisions; we'll come to that in one of the later segments.
Similarly, the mode in which you field the survey will also affect how you design certain question elements, and we'll come back to that in one of the next segments. So, mode choice and implications for layout will be coming up.