Abstract
Automated essay scoring tools are often criticized on grounds of construct validity. Specifically, it has been argued that computational scoring algorithms may be misaligned with higher-level indicators of quality writing, such as writers' demonstrated knowledge and understanding of the essay topics. In this paper, we consider whether and how the scoring algorithms within an intelligent writing tutor correlate with measures of writing proficiency and with students' general knowledge, reading comprehension, and vocabulary skill. Results indicate that the computational algorithms, although less attuned to knowledge and comprehension factors than human raters, were marginally related to such variables. Implications for improving automated scoring and intelligent tutoring of writing are briefly discussed.
Original language | English (US)
---|---
Title of host publication | Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014
Publisher | The AAAI Press
Pages | 393-398
Number of pages | 6
State | Published - 2014
Event | 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014 - Pensacola, United States
Duration | May 21 2014 → May 23 2014
Other

Other | 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014
---|---
Country/Territory | United States
City | Pensacola
Period | 5/21/14 → 5/23/14
ASJC Scopus subject areas
- Computer Science Applications