Internal usability testing of automated essay feedback in an intelligent writing tutor

Rod D. Roscoe, Laura K. Varner, Zhiqiang Cai, Jennifer L. Weston, Scott A. Crossley, Danielle S. McNamara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

13 Scopus citations

Abstract

Research on automated essay scoring (AES) indicates that computer-generated essay ratings are comparable to human ratings. However, despite investigations into the accuracy and reliability of AES scores, less attention has been paid to the feedback delivered to students. This paper presents a method that developers can use to quickly evaluate the usability of an automated feedback system prior to testing with students. Using this method, researchers evaluated the feedback provided by the Writing-Pal, an intelligent tutor for writing strategies. Lessons learned and directions for future research are discussed.

Original language: English (US)
Title of host publication: Proceedings of the 24th International Florida Artificial Intelligence Research Society, FLAIRS - 24
Pages: 543-548
Number of pages: 6
State: Published - 2011
Externally published: Yes
Event: 24th International Florida Artificial Intelligence Research Society, FLAIRS - 24 - Palm Beach, FL, United States
Duration: May 18, 2011 – May 20, 2011

Publication series

Name: Proceedings of the 24th International Florida Artificial Intelligence Research Society, FLAIRS - 24

Other

Other: 24th International Florida Artificial Intelligence Research Society, FLAIRS - 24
Country/Territory: United States
City: Palm Beach, FL
Period: 5/18/11 – 5/20/11

ASJC Scopus subject areas

  • Artificial Intelligence
