Automated Writing Evaluation and Feedback: Multiple Metrics of Efficacy

Joshua Wilson, Rod Roscoe

Research output: Contribution to journal › Article › peer-review

63 Scopus citations

Abstract

The present study extended research on the effectiveness of automated writing evaluation (AWE) systems. Sixth graders were randomly assigned by classroom to an AWE condition that used Project Essay Grade Writing (n = 56) or a word-processing condition that used Google Docs (n = 58). Effectiveness was evaluated using multiple metrics: writing self-efficacy, holistic writing quality, performance on a state English language arts test, and teachers' perceptions of AWE's social validity. Path analyses showed that after controlling for pretest measures, composing condition had no effect on holistic writing quality, but students in the AWE condition had more positive writing self-efficacy and better performance on the state English language arts test. Posttest writing self-efficacy partially mediated the effect of composing condition on state test performance. Teachers reported positive perceptions of AWE's social validity. Results emphasize the importance of using multiple metrics and considering both contextual factors and AWE implementation methods when evaluating AWE effectiveness.

Original language: English (US)
Journal: Journal of Educational Computing Research
State: Published - Jan 1 2019

Keywords

  • automated feedback
  • automated writing evaluation
  • interactive learning environments
  • writing
  • writing self-efficacy

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
