Developing component scores from natural language processing tools to assess human ratings of essay quality

Scott A. Crossley, Danielle McNamara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

This study explores correlations between human ratings of essay quality and component scores based on related natural language processing indices weighted through a principal component analysis. The results demonstrate that such component scores show small to large effects with human ratings and thus may be suitable for providing both summative and formative feedback in automatic writing evaluation systems such as Writing-Pal.
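The approach described in the abstract can be sketched in a minimal, hypothetical form: a matrix of NLP index values (one row per essay) is reduced to a few component scores via principal component analysis, and each component score is then correlated with human ratings. The data below are synthetic and the index names implied are assumptions; this is an illustration of the general technique, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_essays, n_indices = 50, 8

# Hypothetical NLP index values (e.g., cohesion or lexical measures) per essay
X = rng.normal(size=(n_essays, n_indices))
# Simulated human ratings loosely driven by the indices
ratings = X @ rng.normal(size=n_indices) + rng.normal(scale=0.5, size=n_essays)

# Principal component analysis via SVD of the centered index matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Xc @ Vt[:3].T  # top-3 component scores for each essay

# Pearson correlation of each component score with the human ratings
corrs = [np.corrcoef(components[:, k], ratings)[0, 1] for k in range(3)]
```

In a real system, the index matrix would come from tools such as Coh-Metrix-style analyzers, and the component-rating correlations would be evaluated for effect size before the components were used for feedback.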

Original language: English (US)
Title of host publication: Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014
Publisher: The AAAI Press
Pages: 381-386
Number of pages: 6
Publication status: Published - 2014
Event: 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014 - Pensacola, United States
Duration: May 21, 2014 - May 23, 2014



ASJC Scopus subject areas

  • Computer Science Applications

Cite this

Crossley, S. A., & McNamara, D. (2014). Developing component scores from natural language processing tools to assess human ratings of essay quality. In Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014 (pp. 381-386). The AAAI Press.