Reliability in Coding Open-Ended Data: Lessons Learned from HIV Behavioral Research

Daniel J. Hruschka, Deborah Schwartz, Daphne Cobb St. John, Erin Picone-Decaro, Richard A. Jenkins, James W. Carey

Research output: Contribution to journal › Article

341 Scopus citations

Abstract

Analysis of text from open-ended interviews has become an important research tool in numerous fields, including business, education, and health research. Coding is an essential part of such analysis, but questions of quality control in the coding process have generally received little attention. This article examines the text coding process as applied to three HIV-related studies conducted with the Centers for Disease Control and Prevention, covering populations in the United States and Zimbabwe. Based on experience coding data from these studies, we conclude that (1) a team of coders will initially produce very different codings, but (2) it is possible, through a process of codebook revision and recoding, to establish strong levels of intercoder reliability (e.g., most codes with kappa ≥ 0.8). Furthermore, steps can be taken to improve initially poor intercoder reliability and to reduce the number of iterations required to generate stronger intercoder reliability.
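The reliability threshold mentioned in the abstract refers to Cohen's kappa, a chance-corrected agreement statistic for two coders. The sketch below is a minimal illustration of how kappa is computed; the example data are hypothetical and not drawn from the studies described in the article.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders rating the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by
    chance, based on each coder's marginal code frequencies.
    """
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items both coders coded the same
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two coders applying a binary code
# (present/absent) to ten interview segments
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))  # prints 0.52
```

Here the two coders agree on 8 of 10 segments (p_o = 0.8), but because both apply the code frequently, much of that agreement is expected by chance (p_e = 0.58), yielding kappa ≈ 0.52 — well below the ≥ 0.8 level the authors report reaching after codebook revision and recoding.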

Original language: English (US)
Pages (from-to): 307-331
Number of pages: 25
Journal: Field Methods
Volume: 16
Issue number: 3
DOIs: https://doi.org/10.1177/1525822X04266540
State: Published - Aug 2004
Externally published: Yes

Keywords

  • data
  • intercoder agreement
  • interrater agreement
  • open-ended
  • qualitative
  • reliability

ASJC Scopus subject areas

  • Anthropology

Cite this

Hruschka, D. J., Schwartz, D., St. John, D. C., Picone-Decaro, E., Jenkins, R. A., & Carey, J. W. (2004). Reliability in Coding Open-Ended Data: Lessons Learned from HIV Behavioral Research. Field Methods, 16(3), 307-331. https://doi.org/10.1177/1525822X04266540