I say, you say, we say: Using spoken language to model socio-cognitive processes during computer-supported collaborative problem solving

Angela E.B. Stewart, Hana Vrzakova, Chen Sun, Jade Yonehiro, Cathlyn Adele Stone, Nicholas D. Duran, Valerie Shute, Sidney K. D’Mello

Research output: Contribution to journal › Article

2 Scopus citations

Abstract

Collaborative problem solving (CPS) is a crucial 21st century skill; however, current technologies fall short of effectively supporting CPS processes, especially for remote, computer-enabled interactions. In order to develop next-generation computer-supported collaborative systems that enhance CPS processes and outcomes by monitoring and responding to the unfolding collaboration, we investigate automated detection of three critical CPS processes – construction of shared knowledge, negotiation/coordination, and maintaining team function – derived from a validated CPS framework. Our data consists of 32 triads who were tasked with collaboratively solving a challenging visual computer programming task for 20 minutes using commercial videoconferencing software. We used automatic speech recognition to generate transcripts of 11,163 utterances, which trained human raters coded for evidence of the above three CPS processes using a set of behavioral indicators. We aimed to automate the trained raters' codes in a team-independent fashion (current study) in order to provide automatic real-time or offline feedback (future work). We used Random Forest classifiers trained on the words themselves (bag of n-grams) or on word categories (e.g., emotions, thinking styles, social constructs) from the Linguistic Inquiry and Word Count (LIWC) tool. Despite imperfect automatic speech recognition, the n-gram models achieved AUROC (area under the receiver operating characteristic curve) scores of .85, .77, and .77 for construction of shared knowledge, negotiation/coordination, and maintaining team function, respectively; these reflect 70%, 54%, and 54% improvements over chance. The LIWC-category models achieved similar scores of .82, .74, and .73 (64%, 48%, and 46% improvements over chance). Further, the LIWC model-derived scores predicted CPS outcomes comparably to the human codes, demonstrating predictive validity. We discuss embedding our models in collaborative interfaces for assessment and dynamic intervention aimed at improving CPS outcomes.
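The modeling pipeline described in the abstract can be sketched as follows. This is a minimal illustration with hypothetical toy utterances and labels, not the authors' code: the exact features, Random Forest hyperparameters, and cross-validation setup are assumptions. The key ideas it shows are (1) bag-of-n-grams features over utterance transcripts, (2) team-independent evaluation via group-wise splits so no team appears in both train and test folds, and (3) the abstract's "improvement over chance" figure, computed as (AUROC − .5) / .5.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold
from sklearn.metrics import roc_auc_score

# Hypothetical utterances with binary codes for one CPS process
# (e.g., construction of shared knowledge) and the team each came from.
utterances = [
    "i think we should move this block here", "what does that gear do",
    "so the idea is to connect these two",    "nice job that worked",
    "let's try your approach first",          "can you repeat that",
    "we need the ball to hit the pin",        "okay i agree with you",
    "maybe rotate it the other way",          "good catch thanks",
    "the ramp should go on the left",         "sure go ahead",
]
labels = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
teams  = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5])

# Bag of n-grams (here unigrams + bigrams) over the transcripts.
X = CountVectorizer(ngram_range=(1, 2)).fit_transform(utterances)

# Team-independent evaluation: GroupKFold guarantees that utterances
# from the same team never appear in both the train and test folds.
aucs = []
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, labels, teams):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], labels[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(labels[test_idx], scores))

# "Improvement over chance" as reported in the abstract: an AUROC of
# .85 over the chance level of .5 is (0.85 - 0.5) / 0.5 = 70%.
improvement = (0.85 - 0.5) / 0.5
print(round(improvement, 2))  # 0.7
```

On real data, a classifier trained this way yields per-utterance probability scores for each CPS process, which could drive the offline assessment or real-time feedback the abstract anticipates.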

Original language: English (US)
Article number: 194
Journal: Proceedings of the ACM on Human-Computer Interaction
Volume: 3
Issue number: CSCW
DOIs
State: Published - Nov 2019

Keywords

  • Collaborative interfaces
  • Collaborative problem solving
  • Language analysis

ASJC Scopus subject areas

  • Social Sciences (miscellaneous)
  • Human-Computer Interaction
  • Computer Networks and Communications

