Reliability and Model Fit

Leanne M. Stanley, Michael C. Edwards

Research output: Contribution to journal › Article › peer-review


Abstract

The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the reliability of scores and the fit of a corresponding measurement model to be either acceptable or unacceptable for a given situation, but these are not the only possible outcomes. This article focuses on situations in which model fit is deemed acceptable, but reliability is not. Data were simulated based on the item characteristics of the PROMIS (Patient Reported Outcomes Measurement Information System) anxiety item bank and analyzed using methods from classical test theory, factor analysis, and item response theory. Analytic techniques from different psychometric traditions were used to illustrate that reliability and model fit are distinct, and that disagreement among indices of reliability and model fit may provide important information bearing on a particular validity argument, independent of the data analytic techniques chosen for a given research application. We conclude by discussing the distinct information that assessments of reliability and model fit each contribute.
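A minimal sketch of the core point, not the authors' code or the actual PROMIS anxiety parameters: the equal loadings of .30 across eight items below are illustrative assumptions. Data generated this way follow a one-factor measurement model exactly (so population model fit is perfect by construction), yet the reliability of the resulting sum scores is low.

```python
# Illustrative simulation (assumed values, not the PROMIS item bank):
# a one-factor model can hold exactly while sum-score reliability is poor.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 5000, 8
loading = 0.30                              # weak but uniform factor loadings
uniqueness = np.sqrt(1 - loading**2)        # standardized items: var = 1

theta = rng.normal(size=(n_persons, 1))     # latent trait scores
errors = rng.normal(size=(n_persons, n_items))
X = loading * theta + uniqueness * errors   # unidimensional by construction

def cronbach_alpha(data):
    """Classical-test-theory reliability estimate for sum scores."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)
    total_var = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Roughly .45 here, despite the one-factor model being exactly true.
print(f"Cronbach's alpha: {cronbach_alpha(X):.2f}")
```

Fitting a one-factor model to these data would show good fit (the model is the data-generating process), so reliability and model fit clearly carry separate information.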

Original language: English (US)
Pages (from-to): 976-985
Number of pages: 10
Journal: Educational and Psychological Measurement
Volume: 76
Issue number: 6
DOIs
State: Published - Dec 1 2016
Externally published: Yes

Keywords

  • factor analysis
  • item response theory
  • model fit
  • reliability

ASJC Scopus subject areas

  • Education
  • Developmental and Educational Psychology
  • Applied Psychology
  • Applied Mathematics
