Coding Text Answers to Open-ended Questions: Human Coders and Statistical Learning Algorithms Make Similar Mistakes

Zhoushanyue He, Matthias Schonlau

Abstract


Text answers to open-ended questions are often manually coded into one of several predefined categories or classes. More recently, researchers have begun to employ statistical models to automatically classify such text responses. It is unclear whether such automated coders and human coders find the same type of observations difficult to code, or whether humans and models might be able to compensate for each other's weaknesses. We analyze correlations between the estimated error probabilities of human and automated coders and find: 1) Statistical models have higher error rates than human coders. 2) Automated coders (models) and human coders tend to make similar coding mistakes. Specifically, the correlation between the estimated coding error of a statistical model and that of a human is comparable to that of two humans. 3) Two very different statistical models give highly correlated estimated coding errors. Therefore, a) the choice of statistical model does not matter much, and b) having a second automated coder would be redundant.
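The abstract's core analysis, correlating per-observation estimated error probabilities of two coders, can be sketched as follows. This is a minimal illustration, not the authors' code: the error probabilities below are made-up values, where a model's estimated error for an item might be taken as one minus its predicted probability of the gold-standard class, and a human coder's error probability would be estimated separately (e.g., from coder agreement data).

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-item estimated error probabilities (illustrative only):
# model_err[i] ~ 1 - model's predicted probability of the true class;
# human_err[i] ~ estimated probability that the human miscodes item i.
model_err = [0.05, 0.40, 0.10, 0.70, 0.20, 0.15, 0.55, 0.30]
human_err = [0.02, 0.35, 0.08, 0.60, 0.25, 0.10, 0.50, 0.20]

r = pearson(model_err, human_err)
print(round(r, 3))
```

A high positive correlation, as in this toy example, would indicate that the two coders find the same observations difficult, which is the pattern the paper reports for human-model and model-model pairs.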

Keywords


open-ended question, manual coding, automatic coding, text classification, text answer


DOI: https://doi.org/10.12758/mda.2020.10


Copyright (c) 2020 Zhoushanyue He, Matthias Schonlau

This work is licensed under a Creative Commons Attribution 4.0 International License.