Scoring summaries using recurrent neural networks

Stefan Ruseti, Mihai Dascalu, Amy Johnson, Danielle McNamara, Renu Balyan, Kathryn S. McCarthy, Stefan Trausan-Matu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Summarization enhances comprehension and is considered an effective strategy for promoting learning and deep understanding of texts. However, summarization is seldom implemented by teachers in classrooms because manual evaluation requires considerable time and effort. Although the need for automated support is pressing, only a few shallow systems are available, most of which rely on basic word or n-gram overlaps. In this paper, we introduce a hybrid model that uses state-of-the-art recurrent neural networks and textual complexity indices to score summaries. Our best model achieves over 55% accuracy for a 3-way classification that measures the degree to which the main ideas from the original text are covered by the summary. Our experiments show that writing style, represented by the textual complexity indices, combined with the semantic content captured within the summary, yields the best predictions. To the best of our knowledge, this is the first work of its kind to use RNNs for scoring and evaluating summaries.
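
The abstract describes the hybrid architecture only at a high level, so the following is a minimal illustrative sketch rather than the authors' implementation. It assumes a PyTorch GRU encoder over the summary tokens, an arbitrary vector of 50 textual complexity indices, and hypothetical layer sizes throughout; it shows one way the RNN encoding of the summary and the complexity features could be concatenated and fed into a 3-way classifier.

# Hypothetical sketch (not the authors' code): a hybrid scorer that combines an
# RNN encoding of the summary with hand-crafted textual complexity indices and
# predicts one of three coverage classes. All dimensions are illustrative.
import torch
import torch.nn as nn

class HybridSummaryScorer(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128,
                 n_complexity_indices=50, n_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # RNN branch: encodes the summary token sequence.
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                          bidirectional=True)
        # Classifier over the concatenation of the RNN encoding and the
        # textual complexity feature vector.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim + n_complexity_indices, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, token_ids, complexity_features):
        # token_ids: (batch, seq_len); complexity_features: (batch, n_indices)
        embedded = self.embedding(token_ids)
        _, hidden = self.rnn(embedded)           # hidden: (2, batch, hidden_dim)
        summary_vec = torch.cat([hidden[0], hidden[1]], dim=-1)
        combined = torch.cat([summary_vec, complexity_features], dim=-1)
        return self.classifier(combined)         # logits for the 3 classes

# Toy usage with random data.
model = HybridSummaryScorer(vocab_size=5000)
tokens = torch.randint(1, 5000, (4, 60))
indices = torch.randn(4, 50)
logits = model(tokens, indices)                  # shape: (4, 3)

In this sketch the bidirectional GRU's final states stand in for the semantic content of the summary, while the complexity indices stand in for writing style; the actual feature sets, embeddings, and network configuration used in the paper may differ.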

Original language: English (US)
Title of host publication: Intelligent Tutoring Systems - 14th International Conference, ITS 2018, Proceedings
Publisher: Springer Verlag
Pages: 191-201
Number of pages: 11
ISBN (Print): 9783319914633
DOI: 10.1007/978-3-319-91464-0_19
State: Published - Jan 1 2018
Event: 14th International Conference on Intelligent Tutoring Systems, ITS 2018 - Montreal, Canada
Duration: Jun 11 2018 - Jun 15 2018

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10858 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 14th International Conference on Intelligent Tutoring Systems, ITS 2018
Country: Canada
City: Montreal
Period: 6/11/18 - 6/15/18

Keywords

  • Automated summary evaluation
  • Recurrent neural network
  • Semantic models
  • Word embeddings

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Ruseti, S., Dascalu, M., Johnson, A., McNamara, D., Balyan, R., McCarthy, K. S., & Trausan-Matu, S. (2018). Scoring summaries using recurrent neural networks. In Intelligent Tutoring Systems - 14th International Conference, ITS 2018, Proceedings (pp. 191-201). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10858 LNCS). Springer Verlag. https://doi.org/10.1007/978-3-319-91464-0_19
