Evaluating Self-Explanations in iSTART: Comparing Word-Based and LSA Algorithms

Danielle S. McNamara, Chutima Boonthum, Irwin Levinstein, Keith Millis

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Interactive Strategy Training for Active Reading and Thinking (iSTART) is a web-based application that provides young adolescent to college-age students with self-explanation and reading strategy training (McNamara, Levinstein, and Boonthum, 2004). Although untutored self-explanation, that is, explaining the meaning of text to oneself, has been shown to improve text comprehension (Chi, Bassok, Lewis, Reimann, and Glaser, 1989; Chi, de Leeuw, Chiu, and LaVancher, 1994), many readers explain text poorly and gain little from the process. iSTART is designed to improve students’ ability to self-explain by teaching them to use reading strategies such as comprehension monitoring, making bridging inferences, and elaboration. In the final phase of training, students practice using reading strategies by typing self-explanations of sentences from science texts. The computational challenge is to provide appropriate feedback to the students concerning their self-explanations. Doing so requires capturing some sense of both the meaning and the quality of the self-explanation. LSA is an important component in that process. Indeed, an important contribution of LSA is that it allows researchers to capture the meaning of text automatically (see also E. Kintsch et al., chap. 14 in this volume; Graesser et al., chap. 13 in this volume). Interpreting text is critical for intelligent tutoring systems such as iSTART, which are designed to interact meaningfully with, and adapt to, users’ input. One question, however, concerns the extent to which LSA enables or enhances the accuracy of self-explanation evaluation in iSTART. Thus, in this chapter, we compare several self-explanation evaluation systems that differ in whether their algorithms are word-based, incorporate LSA, or combine both. Because we want to increase the number of texts available for practice in iSTART, we sought evaluation systems that require less human preparation of the included texts; accordingly, an important characteristic of the systems discussed is the amount of “hand-coding” required. This chapter describes iSTART and our evaluation of these feedback systems.
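The contrast the chapter draws can be made concrete with a toy sketch. The code below is not the iSTART implementation; the miniature corpus, example texts, and scoring functions are hypothetical stand-ins, and a real LSA space would be trained on a large corpus rather than four sentences. It simply illustrates the two families of algorithms being compared: a word-based overlap score versus a cosine similarity computed in a reduced LSA space (here built with a standard TF-IDF + truncated SVD pipeline).

```python
# Minimal sketch (assumed example, not the iSTART system): contrast a
# word-based overlap score with an LSA cosine-similarity score for a
# typed self-explanation of a target sentence.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in training corpus for the LSA space (hypothetical; a real
# system would use thousands of documents).
corpus = [
    "thunderstorms form when warm moist air rises rapidly",
    "rising air cools and water vapor condenses into clouds",
    "condensation releases latent heat that fuels the storm",
    "explaining a text to oneself improves comprehension",
]

target_sentence = "Warm moist air rises and cools, so water vapor condenses."
self_explanation = ("The storm grows because rising damp air gets colder "
                    "and its vapor turns into cloud droplets.")

def words(s: str) -> set:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", s.lower()))

def word_overlap(target: str, explanation: str) -> float:
    """Word-based score: proportion of target words echoed verbatim."""
    t = words(target)
    return len(t & words(explanation)) / max(len(t), 1)

# Build the LSA space: TF-IDF term-document matrix reduced by SVD.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(corpus)
lsa = TruncatedSVD(n_components=3, random_state=0)
lsa.fit(tfidf)

def lsa_similarity(target: str, explanation: str) -> float:
    """LSA score: cosine between the texts' vectors in the reduced space."""
    vecs = lsa.transform(vectorizer.transform([target, explanation]))
    return float(cosine_similarity(vecs[:1], vecs[1:])[0, 0])

print(f"word overlap:   {word_overlap(target_sentence, self_explanation):.2f}")
print(f"LSA similarity: {lsa_similarity(target_sentence, self_explanation):.2f}")
```

Because the explanation paraphrases rather than copies the sentence, the word-overlap score stays low while the LSA score can still register semantic relatedness, which is the intuition behind comparing, and combining, the two kinds of algorithms.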

Original language: English (US)
Title of host publication: Handbook of Latent Semantic Analysis
Publisher: Taylor and Francis
Pages: 227-241
Number of pages: 15
ISBN (Electronic): 9781135603281
ISBN (Print): 9780203936399
DOIs
State: Published - Jan 1 2007
Externally published: Yes

ASJC Scopus subject areas

  • Psychology (all)
