Using the Tablet Gestures and Speech of Pairs of Students to Classify Their Collaboration

Sree Aurovindh Viswanathan, Kurt VanLehn

Research output: Contribution to journal › Article

6 Citations (Scopus)

Abstract

Effective collaboration between student peers is not spontaneous. A system that can measure collaboration in real time may be useful, as it could alert an instructor to pairs that need help in collaborating effectively. We tested whether superficial measures of speech and user interface actions would suffice for measuring collaboration. Pairs of students solved complex math problems while data were collected in the form of verbal interaction and user action logs from the students' tablets. We distinguished four classifications of interactivity: collaboration, cooperation, high asymmetric contribution, and low asymmetric contribution. Human coders used richer data (several video streams) to choose one of these codes for each episode. Thousands of features were extracted computationally from the log and audio data. Machine learning was used to induce a detector that also assigned a code to each episode as a function of these features. Detectors for combinations of codes were induced as well. The best detector's overall accuracy was 96 percent (kappa = 0.92) compared to human coding. This high level of agreement suggests that superficial features of speech and log data do suffice for measuring collaboration. However, these results should be viewed as preliminary because the particular task may have made it relatively easy to distinguish collaboration from cooperation.
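
The abstract describes a standard supervised-learning recipe: extract features from each episode, induce a classifier (the "detector") over them, and score its agreement with the human coders using accuracy and Cohen's kappa. The sketch below illustrates that recipe only; it is not the authors' implementation. The feature matrix is a synthetic stand-in for the thousands of speech and log features the paper extracts, and the random-forest classifier is an assumed choice.

# Minimal sketch of the detector-induction recipe described in the abstract.
# NOT the authors' implementation: the features are synthetic stand-ins for
# the thousands of speech/log features the paper extracts, and the
# random-forest classifier is an assumed choice.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)

# The four interactivity codes that human coders assigned to each episode.
CODES = ["collaboration", "cooperation", "high_asymmetric", "low_asymmetric"]

# Hypothetical data: 400 episodes x 50 features (in the real study, features
# were computed from the audio and the tablet action logs).
X = rng.normal(size=(400, 50))
y = rng.choice(CODES, size=400)  # human-coded labels; random here

# Induce a detector and obtain held-out predictions via cross-validation so
# that agreement is not inflated by testing on training episodes.
detector = RandomForestClassifier(n_estimators=200, random_state=0)
y_pred = cross_val_predict(detector, X, y, cv=5)

# Agreement with the human codes: raw accuracy plus Cohen's kappa, which
# corrects for chance agreement (the paper reports 96 percent, kappa = 0.92).
print(f"accuracy = {accuracy_score(y, y_pred):.2f}")
print(f"kappa    = {cohen_kappa_score(y, y_pred):.2f}")

On the synthetic data above the scores sit near chance; the point is the shape of the evaluation, in which held-out detector output is compared against the human codes with both a raw and a chance-corrected agreement statistic.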

Original language: English (US)
Pages (from-to): 230-242
Number of pages: 13
Journal: IEEE Transactions on Learning Technologies
Volume: 11
Issue number: 2
DOI: 10.1109/TLT.2017.2704099
State: Published - Apr 1 2018

Keywords

  • Collaborative learning
  • educational data mining
  • learning analytics
  • machine learning

ASJC Scopus subject areas

  • Education
  • Engineering (all)
  • Computer Science Applications

Cite this

Using the Tablet Gestures and Speech of Pairs of Students to Classify Their Collaboration. / Viswanathan, Sree Aurovindh; VanLehn, Kurt.

In: IEEE Transactions on Learning Technologies, Vol. 11, No. 2, 01.04.2018, p. 230-242.

Research output: Contribution to journal › Article

@article{02ed4f46225a4a70ac14cd734505372d,
title = "Using the Tablet Gestures and Speech of Pairs of Students to Classify Their Collaboration",
abstract = "Effective collaboration between student peers is not spontaneous. A system that can measure collaboration in real-time may be useful, as it could alert an instructor to pairs that need help in collaborating effectively. We tested whether superficial measures of speech and user interface actions would suffice for measuring collaboration. Pairs of students solved complex math problems while data were collected in the form of verbal interaction and user action logs from the students' tablets. We distinguished four classifications of interactivity: collaboration, cooperation, high asymmetric contribution and low asymmetric contribution. Human coders used richer data (several video streams) to choose one of these codes for each episode. Thousands of features were extracted computationally from the log and audio data. Machine learning was used to induce a detector that also assigned a code to each episode as a function of these features. Detectors for combinations of codes were induced as well. The best detector's overall accuracy was 96 percent (kappa = 0.92) compared to human coding. This high level of agreement suggests that superficial features of speech and log data do suffice for measuring collaboration. However, these results should be viewed as preliminary because the particular task may have made it relatively easy to distinguish collaboration from cooperation.",
keywords = "Collaborative learning, educational data mining, learning analytics, machine learning",
author = "Viswanathan, {Sree Aurovindh} and Kurt VanLehn",
year = "2018",
month = "4",
day = "1",
doi = "10.1109/TLT.2017.2704099",
language = "English (US)",
volume = "11",
pages = "230--242",
journal = "IEEE Transactions on Learning Technologies",
issn = "1939-1382",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "2",

}
