Mode System Effects in an Online Panel Study: Comparing a Probability-Based Online Panel with Two Face-to-Face Reference Surveys
Abstract
One method for evaluating the data quality of online panels is to compare the estimates that the panels provide with benchmark sources. For probability-based online panels, high-quality surveys or government statistics can serve as references. If differences between the benchmark and the online panel estimates are found, these can have several causes. First, question wordings can differ between the sources, which can lead to differences in measurement. Second, the reference survey and the online panel may not be comparable in terms of sample composition. Finally, since reference estimates are usually collected face-to-face or by telephone, mode effects might be expected. In this article, we investigate mode system effects, an alternative to mode effects that does not focus solely on measurement differences between modes but also incorporates survey design features into the comparison. Data from a probability-based, offline-recruited online panel are compared with data from two face-to-face surveys with almost identical recruitment protocols. In the analysis, a distinction is made between factual and attitudinal questions. We report both the effect sizes and the significance of the differences. The results show that the online panel differs from the face-to-face surveys on both attitudinal and factual measures. The two reference surveys, however, differ only on attitudinal measures and show no significant differences for factual questions. We attribute this to the instability of attitudes, which demonstrates the importance of triangulation and of using two surveys of the same mode for comparison.
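As an illustration of the kind of comparison described in the abstract (testing whether the answer distribution of one survey differs from that of a reference survey and reporting an effect size alongside the p-value), the sketch below applies a chi-square test with Cramér's V. The question, response categories, and counts are invented for illustration only, and survey weighting is ignored; this is not the paper's actual analysis.

```python
# Minimal sketch: compare one categorical question across two surveys
# and report a significance test plus an effect size.
# All counts are hypothetical; real analyses would use weighted data.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical response counts (rows: surveys, columns: answer categories).
online_panel = np.array([310, 420, 270])
face_to_face = np.array([290, 455, 255])

table = np.vstack([online_panel, face_to_face])
chi2, p_value, dof, _ = chi2_contingency(table)

# Cramér's V as an effect size for the difference between the two
# answer distributions (for a 2 x k table it coincides with Cohen's w).
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, Cramér's V = {cramers_v:.3f}")
```

Reporting the effect size alongside the p-value matters here because, with large survey samples, even substantively trivial differences between distributions can reach statistical significance.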
DOI: https://doi.org/10.12758/mda.2015.001
Copyright (c) 2016 Bella Struminskaya, Edith de Leeuw, Lars Kaczmirek
This work is licensed under a Creative Commons Attribution 4.0 International License.