Writing quality, knowledge, and comprehension correlates of human and automated essay scoring

Rod Roscoe, Scott A. Crossley, Erica L. Snow, Laura K. Varner, Danielle McNamara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Scopus citations

Abstract

Automated essay scoring tools are often criticized on the basis of construct validity. Specifically, it has been argued that computational scoring algorithms may not align with higher-level indicators of quality writing, such as writers' demonstrated knowledge and understanding of the essay topics. In this paper, we consider whether and how the scoring algorithms within an intelligent writing tutor correlate with measures of writing proficiency and students' general knowledge, reading comprehension, and vocabulary skill. Results indicate that the computational algorithms, although less attuned to knowledge and comprehension factors than human raters, were marginally related to such variables. Implications for improving automated scoring and intelligent tutoring of writing are briefly discussed.

Original language: English (US)
Title of host publication: Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014
Publisher: The AAAI Press
Pages: 393-398
Number of pages: 6
State: Published - 2014
Event: 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014 - Pensacola, United States
Duration: May 21, 2014 to May 23, 2014

Other

Other: 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014
Country/Territory: United States
City: Pensacola
Period: 5/21/14 to 5/23/14

ASJC Scopus subject areas

  • Computer Science Applications
