To show how adaptive design can be used at the data collection monitoring step to increase the response rate in a web survey, and how different error types are interrelated, Bianchi and Biffignandi (2014) applied experimental responsive design strategies, in retrospect, to the recruitment of the mixed‐mode panel. Targeting the units contributing the most to nonresponse bias during data collection proved especially useful. Such units were identified through indicators representing proxy measures for nonresponse bias. In their study, the authors adopted three strategies, and the results show that the method is promising in reducing nonresponse bias. When evaluated in the TSE framework, i.e., considering how the different errors relate to the adopted responsive strategies, the results are not uniform across variables. In general, however, there is a reduction of total error.
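To make the idea of a proxy indicator concrete, the sketch below estimates response propensities from frame covariates and derives the representativeness (R‐) indicator R = 1 − 2S(ρ̂) of Schouten, Cobben, and Bethlehem (2009), a common proxy for nonresponse bias in the responsive design literature, and then flags low‐propensity nonrespondents for targeted follow‐up. This is a minimal illustration in Python; the data, covariates, and cutoff are invented and do not reproduce the indicators or strategies actually adopted by Bianchi and Biffignandi (2014).

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Frame covariates known for ALL sampled units, plus a 0/1 response flag.
frame = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "urban": rng.integers(0, 2, n),
    "responded": rng.integers(0, 2, n),
})

X = frame[["age", "urban"]]
y = frame["responded"]

# Step 1: estimate individual response propensities from the frame covariates.
rho = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

# Step 2: summarize representativeness. R = 1 - 2*S(rho): the more the
# propensities vary, the less representative the respondent set is with
# respect to these covariates (Schouten, Cobben, and Bethlehem, 2009).
r_indicator = 1 - 2 * rho.std(ddof=1)
print(f"R-indicator: {r_indicator:.3f}")

# Step 3: responsive targeting. Flag not-yet-responding units with the lowest
# propensities for extra fieldwork effort (reminders, a different mode, ...).
low = rho < np.quantile(rho, 0.25)
targets = frame.loc[(frame["responded"] == 0) & low]
print(f"{len(targets)} units flagged for targeted follow-up")
```

In a real responsive design, the propensity model would be refitted as fieldwork progresses, so the set of targeted units changes from one data collection phase to the next.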
3.4 Summary
A flowchart for the web/mobile web survey process of a probability‐based survey is proposed and discussed. The flowchart is useful for both practitioners and researchers. Practitioners have a guide to follow to undertake the survey in an efficient way, without forgetting or overlooking important decisional steps. Not considering the steps in the flowchart could compromise the survey's quality and increase the number of errors. Because a detailed description of the most important flowchart steps is given in the chapters of this book, surveyors have the opportunity to gain a deeper insight into the different issues and techniques.
When analyzing their empirical results and evaluating errors and the risk of errors, researchers can identify the steps or sub‐steps to which the results are related and determine how the decisions on one step (or sub‐step) could affect the results at other steps, thus improving the quality of the survey process.
KEY TERMS
Metadata: Metadata is "data that provides information about other data." In short, it is data about data.
Mixed‐mode survey: A survey in which various modes of data collection are combined. Modes can be used concurrently (different groups are approached by different modes) or sequentially (nonrespondents of a mode are re‐approached by a different mode).
Paradata: Paradata of a web survey are data about the process by which the data were collected.
Probability‐based panel: A panel for which members are recruited by means of probability sampling.
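A small illustrative sketch (in Python, with invented field names and values) may help to contrast the first three terms for a single respondent:

```python
# Survey data: the respondent's actual answers to the questionnaire.
survey_data = {"respondent_id": 123, "q1_income": 42000, "q2_satisfaction": 4}

# Paradata: data about the process by which the answers were collected.
paradata = {
    "respondent_id": 123,
    "device": "smartphone",   # useful for mobile web survey analysis
    "q1_time_ms": 8450,       # time spent on question 1
    "q1_answer_changes": 2,   # how often the answer was revised
    "breakoff": False,        # did the respondent abandon the questionnaire?
}

# Metadata: data describing the survey data themselves (data about data).
metadata = {
    "q1_income": {"label": "Net yearly household income", "unit": "EUR"},
    "q2_satisfaction": {"label": "Life satisfaction", "scale": "1-5 Likert"},
}
```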
EXERCISES
Exercise 3.1 Selecting the survey mode for a probability‐based survey requires knowing:
a. The sampling frame list of the web population.
b. The sampling frame list of the whole population (both web and non‐web).
c. Only the sampling frame list of the non‐Internet population.
d. Only the phone number of the Internet population.
Exercise 3.2 When does a bias error occur?
a. When designing the survey.
b. When choosing the sampling technique.
c. When drawing the sample.
d. When estimating the model.
Exercise 3.3 A probability‐based web survey can take place:
a. When the target frame list is available.
b. When an invitation to participate in the survey appears to people accessing a website.
c. When an e‐mail list of a few people of the target population is available.
d. When the list of e‐mail addresses is available.
Exercise 3.4 The flowchart describes:
a. The sampling methods.
b. Steps of actions and decisions in a web survey.
c. When an e‐mail list of a few people of the target population is available.
d. When the list of e‐mail addresses is available.
Exercise 3.5 The total survey error approach considers:
a. Only sampling errors.
b. The overall quality of the survey steps.
c. Only the measurement error.
d. A non‐probability‐based survey.
Exercise 3.6 The selection of an inadequate mode affects:
a. Only the coverage of the sampling frame.
b. Only the response rate.
c. The overall quality of the survey.
d. The response rate and the sampling frame.
REFERENCES
1 Bethlehem, J. & Biffignandi, S. (2012), Handbook of Web Surveys, 1st edition. Wiley, Hoboken, NJ.
2 Bianchi, A. & Biffignandi, S. (2014), Responsive Design for Economic Data in Mixed‐Mode Panels. In: Mecatti, F., Conti, P. L., & Ranalli, M. G. (eds.), Contributions to Sampling Statistics. Springer, Berlin, pp. 85–102.
3 Biemer, P., de Leeuw, E., Eckman, S., Edwards, B., Kreuter, F., Lyberg, L. E., Tucker, N., & West, B. T. (eds.). (2017), Total Survey Error in Practice. Wiley, Hoboken, NJ.
4 Callegaro, M. (2013), Paradata in Web Surveys. In: Kreuter, F. (ed.), Improving Surveys with Paradata: Analytic Uses of Process Information. Wiley, Hoboken, NJ, pp. 261–280.
5 Couper, M. & Mavletova, A. (2014), Mobile Web Surveys: Scrolling versus Paging; SMS versus e‐mail Invitations. Journal of Survey Statistics and Methodology, 2, pp. 498–518.
6 Dillman, D., Smyth, J., & Christian, L. M. (2014), Internet, Phone, Mail, and Mixed‐Mode Surveys: The Tailored Design Method, 4th edition. Wiley, Hoboken, NJ.
7 Groves, R. M. (1989), Survey Errors and Survey Costs. Wiley, New York.
8 Heerwegh, D. (2011), Internet Survey Paradata. In: Das, M., Ester, P., & Kaczmirek, L. (eds.), Social and Behavioral Research and the Internet: Advances in Applied Methods and Research Strategies. Taylor and Francis, Oxford.
9 Jäckle, A., Lynn, P., & Burton, J. (2015), Going Online with a Face‐to‐Face Household Panel: Effects of a Mixed Mode Design on Item and Unit Non‐Response. Survey Research Methods, 9, 1, pp. 57–70.
10 Kreuter, F. (ed.). (2013), Improving Surveys with Paradata: Analytic Uses of Process Information. Wiley, Hoboken, NJ.
11 Olson, K. & Parkhurst, B. (2013), Collecting Paradata for Measurement Error Evaluation. In: Kreuter, F. (ed.), Improving Surveys with Paradata: Analytic Uses of Process Information. Wiley, Hoboken, NJ, pp. 73–95.
12 Peterson, G., Griffin, J., La France, J., & Li, J. (2017), Smartphone Participation in Web Surveys. In: Biemer, P., de Leeuw, E., Eckman, S., Edwards, B., Kreuter, F., Lyberg, L. E., Tucker, N., & West, B. T. (eds.), Total Survey Error in Practice. Wiley, Hoboken, NJ.
13 Peytchev, A. & Hill, C. A. (2010), Experiments in Mobile Web Survey Design: Similarities to Other Modes and Unique Considerations. Social Science Computer Review, 28, pp. 319–333.
14 Platek, R. & Särndal, C. (2001), Can a Statistician Deliver? Journal of Official Statistics, 17, 1, pp. 1–20.
15 The American Association for Public Opinion Research (AAPOR). (2016), Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys, 9th edition. AAPOR.
16 Tourangeau, R., Conrad, F., & Couper, M. (2013), The Science of Web Surveys. Oxford University Press, New York.
17 Weisberg, H. (2005), The Total Survey Error Approach: A Guide to the New Science of Survey Research. University of Chicago Press, Chicago, IL.
Notes
1 Fanney Thorsdottir and Silvia Biffignandi presented and discussed their flowchart at a WG3 task force (TF10) meeting organized by the Webdatanet COST Action (IS1004) while chairing and co‐chairing WG3 and leading TF10 on General Framework for Error Categorization in Internet Surveys. A final draft of the flowchart was presented at the Webdatanet conference (Salamanca, May 26–28, 2015). An extension for the mode selection was presented at the Total Survey Error Conference (Baltimore, September 2015).
2 The flowchart is useful not only for specialized survey research organizations but also for smaller organizations and individual researchers. Any surveyor, even for a simple and rather small survey, and regardless of whether he or she favors more or less sophisticated methodological solutions, should refer to it and go through each step and decision in order to run a good survey. Large statistical organizations (like National Statistical Institutes) follow a more complex survey process because of the different