Examinee Judgments of Changes in Item Difficulty: Implications for Item Review in Computerized Adaptive Testing

Steven L. Wise, Sara J. Finney, Craig K. Enders, Sharon A. Freeman, Donald D. Severance

Research output: Contribution to journal › Article › peer-review

Abstract

We examined the degree to which examinees could use item review on a computerized adaptive test (CAT) to artificially inflate their scores. Kingsbury (1996) described a strategy in which examinees use the changes in item difficulty during a CAT to infer which of their answers were scored incorrect and should be changed during item review. The results of our first two studies suggest that examinees are not highly proficient at discriminating item difficulty - a skill needed for successful application of the Kingsbury strategy. In the third study, we compared the Kingsbury strategy - which examinees would apply only to guessed items - with a generalized strategy applied to all sequential item pairs. The Kingsbury strategy yielded a small average score gain, whereas the generalized strategy yielded an average score loss. These results suggest that only the Kingsbury strategy is likely to enable examinees to successfully inflate their test scores.
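The logic behind the Kingsbury strategy can be illustrated with a small simulation. The sketch below is hypothetical and is not the authors' simulation: it assumes a toy CAT with a fixed-step item-selection rule and a Rasch (1PL) response model, and the function names (`simulate_cat`, `kingsbury_flags`) are illustrative. Under such a rule, the next item gets harder after a correct response and easier after an incorrect one, so a drop in difficulty reveals that the previous answer was scored incorrect.

```python
import math
import random

def simulate_cat(true_ability, n_items, step=0.5, seed=0):
    """Simulate a toy CAT with a fixed step rule: the next item gets
    harder after a correct answer and easier after an incorrect one."""
    rng = random.Random(seed)
    difficulties, responses = [], []
    b = 0.0  # difficulty of the first item
    for _ in range(n_items):
        # 1PL (Rasch) probability of a correct response
        p = 1.0 / (1.0 + math.exp(-(true_ability - b)))
        correct = rng.random() < p
        difficulties.append(b)
        responses.append(correct)
        b += step if correct else -step
    return difficulties, responses

def kingsbury_flags(difficulties):
    """Kingsbury-style inference: if item i+1 is easier than item i,
    the answer to item i must have been scored incorrect."""
    return [difficulties[i + 1] < difficulties[i]
            for i in range(len(difficulties) - 1)]
```

Under this idealized step rule the inference is perfect for every item except the last; the studies' point is that real examinees judge relative item difficulty far less reliably, which limits the strategy's practical payoff.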

Original language: English (US)
Pages (from-to): 185-198
Number of pages: 14
Journal: Applied Measurement in Education
Volume: 12
Issue number: 2
DOIs
State: Published - 1999
Externally published: Yes

ASJC Scopus subject areas

  • Education
  • Developmental and Educational Psychology
