Automated semantic relevance as an indicator of cognitive decline: Out-of-sample validation on a large-scale longitudinal dataset

Gabriela Stegmann, Shira Hahn, Samarth Bhandari, Kan Kawabata, Jeremy Shefner, Cayla Jessica Duncan, Julie Liss, Visar Berisha, Kimberly Mueller

Research output: Contribution to journal › Article › peer-review

Abstract

We developed and evaluated an automatically extracted measure of cognition (semantic relevance) using automated and manual transcripts of audio recordings from healthy and cognitively impaired participants describing the Cookie Theft picture from the Boston Diagnostic Aphasia Examination. We describe the rationale for the metric and its validation. We developed the measure on one dataset and evaluated it on a large database (>2,000 samples) by comparing its accuracy against a manually calculated metric and assessing its clinical relevance. The fully automated measure was accurate (r = .84), had moderate-to-good reliability (intra-class correlation = .73), correlated with the Mini-Mental State Examination and improved model fit when added to other automatic language features (r = .65), and declined longitudinally with age and level of cognitive impairment. This study demonstrates a rigorous analytical and clinical framework for validating automated measures of speech and applies it to a measure that is both accurate and clinically relevant.
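
As a worked illustration of the two headline validation statistics named in the abstract, the Python sketch below computes a Pearson correlation (the accuracy figure, r) and a Shrout-Fleiss ICC(2,1) (the reliability figure) for paired automated and manual scores. The toy data and variable names are hypothetical assumptions; this is a minimal sketch of how such figures are commonly obtained, not the authors' pipeline.

    # Minimal sketch (not the authors' code): accuracy via Pearson r and
    # reliability via ICC(2,1) for automated vs. manually calculated scores.
    # The synthetic data below is an illustrative assumption.
    import numpy as np
    from scipy.stats import pearsonr

    def icc_2_1(scores: np.ndarray) -> float:
        """ICC(2,1) (Shrout & Fleiss): two-way random effects, absolute
        agreement, single measurement. `scores` has shape
        (n_subjects, n_raters), e.g. one column per scoring method."""
        n, k = scores.shape
        grand = scores.mean()
        row_means = scores.mean(axis=1)   # per-subject means
        col_means = scores.mean(axis=0)   # per-method means
        # Mean squares from the two-way ANOVA decomposition
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # methods
        sse = np.sum((scores - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
        mse = sse / ((n - 1) * (k - 1))                        # residual
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    rng = np.random.default_rng(0)
    manual = rng.normal(loc=10.0, scale=2.0, size=200)      # hypothetical manual scores
    automated = manual + rng.normal(scale=1.0, size=200)    # noisy automated scores

    r, _ = pearsonr(automated, manual)                      # accuracy
    icc = icc_2_1(np.column_stack([automated, manual]))     # reliability
    print(f"Pearson r = {r:.2f}, ICC(2,1) = {icc:.2f}")

ICC(2,1) is one common choice when the same subjects are scored by both methods and absolute agreement matters; other ICC variants would be computed analogously from the same mean squares.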

Original language: English (US)
Article number: e12294
Journal: Alzheimer's and Dementia: Diagnosis, Assessment and Disease Monitoring
Volume: 14
Issue number: 1
DOIs
State: Published - 2022

Keywords

  • algorithm
  • automatic
  • cognition
  • digital
  • language
  • longitudinal
  • speech

ASJC Scopus subject areas

  • Clinical Neurology
  • Psychiatry and Mental Health
