Evaluating state-of-the-art Treebank-style parsers for Coh-Metrix and other learning technology environments

Christian F. Hempelmann, Vasile Rus, Arthur C. Graesser, Danielle S. McNamara

Research output: Contribution to journal › Article › peer-review


Abstract

This paper evaluates four of the most commonly used, freely available, state-of-the-art parsers on a standard benchmark as well as on a set of data relevant for measuring text cohesion, as one example of a learning technology application that requires fast and accurate syntactic parsing. We outline the advantages and disadvantages of existing technologies and make recommendations. Our performance report uses traditional measures based on a gold standard as well as novel dimensions for parsing evaluation. To our knowledge, this is the first attempt to evaluate parsers across genres and grade levels for use in learning technologies using both gold-standard and directed evaluation methods.
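As a rough illustration of the kind of gold-standard comparison traditional parser evaluation relies on, the sketch below computes PARSEVAL-style labeled bracketing precision, recall, and F1. It is not the authors' actual procedure; the span representation and the example constituents are hypothetical.

```python
# Sketch of PARSEVAL-style bracketing evaluation against a gold standard.
# Parse trees are represented as sets of labeled constituent spans
# (label, start, end); the example spans below are illustrative only.

def bracket_scores(gold_spans, predicted_spans):
    """Compute labeled bracketing precision, recall, and F1."""
    gold = set(gold_spans)
    pred = set(predicted_spans)
    matched = len(gold & pred)
    precision = matched / len(pred) if pred else 0.0
    recall = matched / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical constituents for a five-word sentence.
gold = [("NP", 0, 2), ("VP", 2, 5), ("NP", 3, 5), ("S", 0, 5)]
pred = [("NP", 0, 2), ("VP", 2, 5), ("PP", 3, 5), ("S", 0, 5)]

p, r, f = bracket_scores(gold, pred)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```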

Original language: English (US)
Pages (from-to): 131-144
Number of pages: 14
Journal: Natural Language Engineering
Volume: 12
Issue number: 2
State: Published - Jun 2006
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Language and Linguistics
  • Linguistics and Language
  • Artificial Intelligence
