Comparing and Improving the Accuracy of Nonprobability Samples: Profiling Australian Surveys

Sebastian Kocar, Bernard Baffour


There has been a great deal of debate in the survey research community about the accuracy of nonprobability sample surveys. This work aims to provide empirical evidence about the accuracy of nonprobability samples and to investigate the performance of a range of post-survey adjustment approaches (calibration or matching methods) to reduce bias and improve inference. We use data from five nonprobability online panel surveys and compare their accuracy (pre- and post-survey adjustment) to four probability surveys, including data from a probability online panel. This article adds value to the existing research by assessing methods for causal inference not previously applied for this purpose and demonstrates the value of various types of covariates in mitigating bias in nonprobability online panels. By investigating different post-survey adjustment scenarios based on the availability of auxiliary data, we demonstrate how carefully designed post-survey adjustment can reduce some bias in survey research using nonprobability samples. The results show that the quality of post-survey adjustments is, first and foremost, dependent on the availability of relevant high-quality covariates that come from representative, large-scale probability-based survey data and match those in the nonprobability data. Second, we found little difference in the efficiency of different post-survey adjustment methods, and inconsistent evidence on the suitability of 'webographics' and other internet-associated covariates for mitigating bias in nonprobability samples.
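The calibration methods mentioned above typically reweight a nonprobability sample so its weighted distributions match known population benchmarks. As a minimal illustrative sketch (not the authors' actual procedure), the following Python code implements raking (iterative proportional fitting) on a toy sample with hypothetical age and sex benchmarks; all variable names and target shares are invented for illustration.

```python
import numpy as np

def rake(weights, cats, targets, iters=50):
    """Iterative proportional fitting: repeatedly rescale weights so the
    weighted share of each category matches its population target."""
    w = weights.astype(float).copy()
    for _ in range(iters):
        for var, target in targets.items():
            codes = cats[var]
            for level, share in target.items():
                mask = codes == level
                current = w[mask].sum() / w.sum()
                if current > 0:
                    w[mask] *= share / current  # rescale this category
    return w

# Toy nonprobability sample, deliberately skewed toward young/female
rng = np.random.default_rng(0)
n = 1000
sample = {
    "age": rng.choice(["young", "old"], size=n, p=[0.7, 0.3]),
    "sex": rng.choice(["f", "m"], size=n, p=[0.6, 0.4]),
}
# Hypothetical population benchmarks (e.g., from a census or a
# large probability survey)
targets = {
    "age": {"young": 0.4, "old": 0.6},
    "sex": {"f": 0.5, "m": 0.5},
}

w = rake(np.ones(n), sample, targets)
for var in targets:
    for level, share in targets[var].items():
        est = w[sample[var] == level].sum() / w.sum()
        print(f"{var}={level}: weighted share {est:.3f} (target {share})")
```

After raking, the weighted marginal shares of age and sex match the benchmarks, which is the core mechanism behind the calibration adjustments evaluated in the article.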


nonprobability sampling, volunteer online panels, post-survey adjustment, calibration, matching methods, benchmarking


Copyright (c) 2023 Sebastian Kocar, Bernard Baffour

This work is licensed under a Creative Commons Attribution 4.0 International License.