Assessing cognitively complex strategy use in an untrained domain

George T. Jackson, Rebekah H. Guess, Danielle McNamara

Research output: Contribution to journal › Article

30 Citations (Scopus)

Abstract

Researchers of advanced technologies are constantly seeking new ways of measuring and adapting to user performance. Appropriately adapting system feedback requires accurate assessments of user performance. Unfortunately, many assessment algorithms must be trained on and use pre-prepared data sets or corpora to provide a sufficiently accurate portrayal of user knowledge and behavior. However, if the targeted content of the tutoring system changes depending on the situation, the assessment algorithms must be sufficiently independent to apply to untrained content. Such is the case for Interactive Strategy Training for Active Reading and Thinking (iSTART), an intelligent tutoring system that assesses the cognitive complexity of strategy use while a reader self-explains a text. iSTART is designed so that teachers and researchers may add their own (new) texts into the system. The current paper explores student self-explanations from newly added texts (which iSTART had not been trained on) and focuses on evaluating the iSTART assessment algorithm by comparing it to human ratings of the students' self-explanations.

Original language: English (US)
Pages (from-to): 127-137
Number of pages: 11
Journal: Topics in Cognitive Science
Volume: 2
Issue number: 1
DOI: 10.1111/j.1756-8765.2009.01068.x
State: Published - Jan 2010
Externally published: Yes

Keywords

  • Automatic assessment
  • Empirical validation
  • Intelligent tutoring systems
  • Reading strategies

ASJC Scopus subject areas

  • Experimental and Cognitive Psychology
  • Cognitive Neuroscience
  • Artificial Intelligence
  • Linguistics and Language
  • Human-Computer Interaction

Cite this

Assessing cognitively complex strategy use in an untrained domain. / Jackson, George T.; Guess, Rebekah H.; McNamara, Danielle.

In: Topics in Cognitive Science, Vol. 2, No. 1, 01.2010, p. 127-137.

Jackson, George T. ; Guess, Rebekah H. ; McNamara, Danielle. / Assessing cognitively complex strategy use in an untrained domain. In: Topics in Cognitive Science. 2010 ; Vol. 2, No. 1. pp. 127-137.
@article{3091f0e91e0141aabad5cb95b5f31164,
title = "Assessing cognitively complex strategy use in an untrained domain",
abstract = "Researchers of advanced technologies are constantly seeking new ways of measuring and adapting to user performance. Appropriately adapting system feedback requires accurate assessments of user performance. Unfortunately, many assessment algorithms must be trained on and use pre-prepared data sets or corpora to provide a sufficiently accurate portrayal of user knowledge and behavior. However, if the targeted content of the tutoring system changes depending on the situation, the assessment algorithms must be sufficiently independent to apply to untrained content. Such is the case for Interactive Strategy Training for Active Reading and Thinking (iSTART), an intelligent tutoring system that assesses the cognitive complexity of strategy use while a reader self-explains a text. iSTART is designed so that teachers and researchers may add their own (new) texts into the system. The current paper explores student self-explanations from newly added texts (which iSTART had not been trained on) and focuses on evaluating the iSTART assessment algorithm by comparing it to human ratings of the students' self-explanations.",
keywords = "Automatic assessment, Empirical validation, Intelligent tutoring systems, Reading strategies",
author = "Jackson, {George T.} and Guess, {Rebekah H.} and Danielle McNamara",
year = "2010",
month = "1",
doi = "10.1111/j.1756-8765.2009.01068.x",
language = "English (US)",
volume = "2",
pages = "127--137",
journal = "Topics in Cognitive Science",
issn = "1756-8757",
publisher = "Wiley-Blackwell",
number = "1",

}

TY - JOUR

T1 - Assessing cognitively complex strategy use in an untrained domain

AU - Jackson, George T.

AU - Guess, Rebekah H.

AU - McNamara, Danielle

PY - 2010/1

Y1 - 2010/1

AB - Researchers of advanced technologies are constantly seeking new ways of measuring and adapting to user performance. Appropriately adapting system feedback requires accurate assessments of user performance. Unfortunately, many assessment algorithms must be trained on and use pre-prepared data sets or corpora to provide a sufficiently accurate portrayal of user knowledge and behavior. However, if the targeted content of the tutoring system changes depending on the situation, the assessment algorithms must be sufficiently independent to apply to untrained content. Such is the case for Interactive Strategy Training for Active Reading and Thinking (iSTART), an intelligent tutoring system that assesses the cognitive complexity of strategy use while a reader self-explains a text. iSTART is designed so that teachers and researchers may add their own (new) texts into the system. The current paper explores student self-explanations from newly added texts (which iSTART had not been trained on) and focuses on evaluating the iSTART assessment algorithm by comparing it to human ratings of the students' self-explanations.

KW - Automatic assessment

KW - Empirical validation

KW - Intelligent tutoring systems

KW - Reading strategies

UR - http://www.scopus.com/inward/record.url?scp=77958116478&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=77958116478&partnerID=8YFLogxK

U2 - 10.1111/j.1756-8765.2009.01068.x

DO - 10.1111/j.1756-8765.2009.01068.x

M3 - Article

C2 - 25163626

AN - SCOPUS:77958116478

VL - 2

SP - 127

EP - 137

JO - Topics in Cognitive Science

JF - Topics in Cognitive Science

SN - 1756-8757

IS - 1

ER -