Evaluative misalignment of 10th-grade student and teacher criteria for essay quality

An automated textual analysis

Laura K. Varner, Rod Roscoe, Danielle McNamara

Research output: Contribution to journal › Article

16 Citations (Scopus)

Abstract

Writing is a necessary skill for success in the classroom and the workplace; yet, many students are failing to develop sufficient skills in this area. One potential problem may stem from a misalignment between students' and teachers' criteria for quality writing. According to the evaluative misalignment hypothesis, students assess their own writing using a different set of criteria from their teachers. In this study, the authors utilize automated textual analyses to examine potential misalignments between students' and teachers' evaluation criteria for writing quality. Specifically, the computational tools Coh-Metrix and Linguistic Inquiry and Word Count (LIWC) are used to examine the relationship between linguistic features and student and teacher ratings of students' prompt-based essays. The study included 126 students who wrote timed, SAT-style essays and assessed their own writing on a scale of 1-6. Teachers also evaluated the essays using the SAT rubric on a scale of 1-6. The results yielded empirical evidence for student-teacher misalignment and advanced our understanding of the nature of students' misalignments. Specifically, teachers were attuned to the linguistic features of the essays at both surface and deep levels of text, whereas students' ratings were related to fewer overall textual features and most closely associated with surface-level features.
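The analysis the abstract describes — extracting linguistic features from essays and relating them to 1-6 quality ratings — can be sketched as follows. This is an illustrative example only: the feature (essay word count, a surface-level measure) and the data are hypothetical stand-ins, not Coh-Metrix or LIWC output or data from the study.

```python
import math

# Hypothetical data: one surface-level feature (word count) per essay,
# paired with a rater's 1-6 quality score for the same essay.
word_counts = [120, 340, 500, 260, 410]
ratings = [2, 4, 6, 3, 5]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A strong positive r would suggest the rater's scores track this
# surface feature, which is the kind of feature-rating relationship
# the study examines with Coh-Metrix and LIWC indices.
r = pearson_r(word_counts, ratings)
print(f"r = {r:.3f}")
```

In the study itself, many such indices (at both surface and deeper levels of text) were related to student and teacher ratings separately, which is how the differing alignment patterns were identified.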

Original language: English (US)
Pages (from-to): 35-59
Number of pages: 25
Journal: Journal of Writing Research
Volume: 5
Issue number: 1
State: Published - 2013

Keywords

  • Computational linguistics
  • Self-assessment
  • Teacher essay evaluation
  • Textual analysis
  • Writing assessment

ASJC Scopus subject areas

  • Literature and Literary Theory
  • Education
  • Linguistics and Language
  • Language and Linguistics

Cite this

@article{9662982ab30a46a49a32564f7ffb98f7,
title = "Evaluative misalignment of 10th-grade student and teacher criteria for essay quality: An automated textual analysis",
abstract = "Writing is a necessary skill for success in the classroom and the workplace; yet, many students are failing to develop sufficient skills in this area. One potential problem may stem from a misalignment between students' and teachers' criteria for quality writing. According to the evaluative misalignment hypothesis, students assess their own writing using a different set of criteria from their teachers. In this study, the authors utilize automated textual analyses to examine potential misalignments between students' and teachers' evaluation criteria for writing quality. Specifically, the computational tools Coh-Metrix and Linguistic Inquiry and Word Count (LIWC) are used to examine the relationship between linguistic features and student and teacher ratings of students' prompt-based essays. The study included 126 students who wrote timed, SAT-style essays and assessed their own writing on a scale of 1-6. Teachers also evaluated the essays using the SAT rubric on a scale of 1-6. The results yielded empirical evidence for student-teacher misalignment and advanced our understanding of the nature of students' misalignments. Specifically, teachers were attuned to the linguistic features of the essays at both surface and deep levels of text, whereas students' ratings were related to fewer overall textual features and most closely associated with surface-level features.",
keywords = "Computational linguistics, Self-assessment, Teacher essay evaluation, Textual analysis, Writing assessment",
author = "Varner, {Laura K.} and Rod Roscoe and Danielle McNamara",
year = "2013",
language = "English (US)",
volume = "5",
pages = "35--59",
journal = "Journal of Writing Research",
issn = "2030-1006",
publisher = "University of Antwerp",
number = "1",
}