If you were to ask 100 people to pronounce the word “cat,” you could very well get 100 slightly different pronunciations. When our brains are functioning normally, we can understand the meaning of a word even if it is pronounced slightly differently from how we are used to hearing it.
However, damage to one or both temporal lobes, the parts of the brain located above our ears, can disrupt this ability. Stroke survivors often suffer damage to these regions.
Scientists know that damage to the left temporal lobe leads to a loss of the ability to recognize words. This disorder is known as aphasia, and it is common among stroke survivors, affecting up to 38% of discharged stroke patients in the United States. Damage to the right temporal lobe, by contrast, leads to much subtler problems understanding language, including deficits in understanding humor or recognizing who is talking.
Recent neuroimaging studies using techniques such as fMRI have shown that the right temporal lobe also plays a role in speech recognition, but scientists do not yet fully understand what that role is. It seems that both sides of the brain process similar information about speech, but how they process this information differs significantly.
Professor Emily Myers, of the Departments of Speech, Language, and Hearing Sciences and Psychological Sciences, has received a $2.3 million grant from the National Institute on Deafness and Other Communication Disorders to unravel this mystery.
Myers previously discovered that the left frontal and temporal lobes work together to process and adapt to phonetic variability. Now, she is testing the fine-grained phonetic sensitivity of the right and left temporal lobes to determine how large a role each lobe plays in processing subtle differences in speaker pronunciation.
Myers is collaborating with a team including faculty members James Magnuson, Fumiko Hoeft, and Roeland Hancock in the Department of Psychological Sciences; Rachel Theodore and Jennifer Mozeiko in the Department of Speech, Language, and Hearing Sciences; and Abner Gershon in the Department of Radiology at UConn Health.
Myers and her collaborators will use neuroimaging and transcranial magnetic stimulation (TMS) data from adults across a range of ages as well as data from stroke survivors to determine how the roles of the right and left temporal lobes differ when it comes to speech recognition.
“One thing that makes this project particularly exciting is that we are using multiple converging methods to try to understand how the brain processes speech,” Myers says. “Functional MRI can tell us what kinds of regions are brought on-board to process language in healthy brains. We can use TMS, which temporarily disrupts activity in the brain, to understand if those regions are absolutely necessary for processing speech. Then we can work with people who have had damage to right or left brain areas to understand what kinds of strengths and weaknesses they have in comprehending speech.”
Myers and her team will assess the role the right hemisphere plays in adapting to talker-specific phonetic differences. They will also test the ability of people with right or left temporal lesions to understand speech and to perceive phonetic differences.
This work will help inform the development of rehabilitative treatments for people with aphasia.
Emily Myers received her Ph.D. in cognitive science from Brown University in 2005. The research in her lab focuses on speech perception, cognitive neuroscience of speech and language, aphasia, and second-language acquisition. Her lab uses neuroimaging methods and standard psycholinguistic measures to understand the neural and behavioral mechanisms underlying these processes.