Musical Facilitation of Speech Comprehension in Stroke (ASUF 30006112)

Project: Research project

Project Details

Description

Abstract: Stroke is the leading cause of serious, long-term disability in the U.S., and over one million Americans currently have language impairments (i.e., aphasia) due to stroke. This project will be the first to identify how music can facilitate stroke patients' ability to understand everyday speech. Our findings will provide potential new avenues for aphasia rehabilitation and for communicating effectively with stroke patients with aphasia.

1. Project Description

Stroke is the leading cause of serious, long-term disability in the U.S., and over one million Americans currently have language impairments (i.e., aphasia) due to stroke. Deficits in speech comprehension are particularly difficult for patients and caregivers: What if every voice you heard (and every word you read) sounded like gibberish? How would loved ones communicate with you? How would you make medical decisions? How would you interact with society? This project will be the first to explore how music can facilitate stroke patients' understanding of everyday speech.

The brain networks engaged in processing music and speech likely perform similar computations, because language and music share a number of properties: both involve the perception of sequences of acoustic events that unfold over time with rhythmic and tonal features, both involve hierarchical structuring of individual elements to derive a higher-order combinatorial representation, and both appear to be uniquely human biological capacities (Patel, 2007; McDermott & Hauser, 2005).

A great deal of previous research has focused on how music and singing can enhance stroke patients' speech production abilities (e.g., Tomaino, 2012; Belin et al., 1996), but to our knowledge, no studies have explored how music can enhance receptive language abilities. We have conducted the only functional MRI (fMRI) study to directly compare the brain's response to speech and music within a single group of healthy adults (Rogalsky et al., 2011). Sentences elicited more ventrolateral activation (as measured by the BOLD response) in the bilateral temporal lobes, whereas melodies elicited a more dorsomedial pattern extending into the parietal lobe. These findings indicate that hierarchical speech (i.e., sentences) and music (i.e., melodies) recruit distinct cortical networks; thus, there are at least two brain networks that can process hierarchical auditory stimuli.

This project will use a converging-methods approach (fMRI, lesion-behavior analyses, and behavioral testing) to investigate how musical networks may be able to compensate for primary speech networks damaged by stroke.
Status: Finished

Effective start/end date: 4/1/14 – 12/31/15

Funding

  • Grammy Foundation: $19,464.00
