Developing component scores from natural language processing tools to assess human ratings of essay quality

Scott A. Crossley, Danielle McNamara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

This study explores correlations between human ratings of essay quality and component scores based on similar natural language processing indices and weighted through a principal component analysis. The results demonstrate that such component scores show small to large effects with human ratings and thus may be suitable for providing both summative and formative feedback in automatic writing evaluation systems such as that found in Writing-Pal.
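The approach described in the abstract, deriving component scores by weighting related NLP indices through a principal component analysis and then correlating those scores with human ratings, can be sketched in outline. The code below is an illustrative, numpy-only reconstruction under assumed toy data (random index values and ratings), not the authors' actual indices or pipeline.

```python
import numpy as np

# Hypothetical data: 5 NLP indices (e.g., lexical or cohesion measures)
# computed for 100 essays, plus toy human quality ratings.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                  # essay-by-index matrix
ratings = X @ rng.normal(size=5) + rng.normal(size=100)

# Standardize the indices, then form component scores by projecting
# onto the eigenvectors of the indices' correlation matrix; each
# component score is thus a weighted sum of related indices.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]              # components by variance explained
scores = Z @ eigvecs[:, order]

# Correlate each component score with the human ratings, mirroring the
# effect-size comparison the study reports.
r = [np.corrcoef(scores[:, k], ratings)[0, 1] for k in range(scores.shape[1])]
```

In a real application the component loadings would be estimated on a training corpus of scored essays and then applied to new essays, so that each component's correlation with human ratings indicates which bundles of indices are useful for summative scoring or formative feedback.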

Original language: English (US)
Title of host publication: Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014
Publisher: The AAAI Press
Pages: 381-386
Number of pages: 6
State: Published - 2014
Event: 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014 - Pensacola, United States
Duration: May 21, 2014 – May 23, 2014



ASJC Scopus subject areas

  • Computer Science Applications

Cite this

Crossley, S. A., & McNamara, D. (2014). Developing component scores from natural language processing tools to assess human ratings of essay quality. In Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014 (pp. 381-386). The AAAI Press.
