Writing quality, knowledge, and comprehension correlates of human and automated essay scoring

Rod Roscoe, Scott A. Crossley, Erica L. Snow, Laura K. Varner, Danielle McNamara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

Automated essay scoring tools are often criticized on the basis of construct validity. Specifically, it has been argued that computational scoring algorithms may not align with higher-level indicators of quality writing, such as writers' demonstrated knowledge and understanding of the essay topics. In this paper, we consider whether and how the scoring algorithms within an intelligent writing tutor correlate with measures of writing proficiency and with students' general knowledge, reading comprehension, and vocabulary skill. Results indicate that the computational algorithms, although less attuned to knowledge and comprehension factors than human raters, were marginally related to such variables. Implications for improving automated scoring and intelligent tutoring of writing are briefly discussed.

Original language: English (US)
Title of host publication: Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014
Publisher: The AAAI Press
Pages: 393-398
Number of pages: 6
State: Published - 2014
Event: 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014 - Pensacola, United States
Duration: May 21, 2014 - May 23, 2014

ASJC Scopus subject areas

  • Computer Science Applications

Cite this

Roscoe, R., Crossley, S. A., Snow, E. L., Varner, L. K., & McNamara, D. (2014). Writing quality, knowledge, and comprehension correlates of human and automated essay scoring. In Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014 (pp. 393-398). The AAAI Press.
