Speech acquisition is one of the fundamental aspects of human development, yet scientists remain mystified as to what exactly is happening inside our brains when we attach meaning to the sounds of speech.
At UConn, assistant professor of speech, language, and hearing sciences Emily Myers and her research team in the Language and Brain Lab are using advanced technologies such as functional magnetic resonance imaging (fMRI) and event-related potential electroencephalography (ERP-EEG) to better understand the neural and behavioral activity that underlies the process.
“Our feeling as we move through the world is that we take in sound and immediately it evokes meaning,” Myers says. “But actually there is a pretty complex process that has to happen between when the sound hits our eardrum and when we get to meaning.”
Myers recently received a grant to work with individuals with aphasia, a communication disorder, often caused by stroke or other brain injury, that leaves patients with difficulty speaking and understanding speech.
In collaboration with professor Carl Coelho and assistant professor Jennifer Mozeiko – two specialists in aphasia rehabilitation and traumatic brain injury – Myers is working to identify the brain regions affected by aphasia and to apply that knowledge to improving treatment and recovery programs.
“We are looking at neural activation patterns in the minds of people who undergo intensive aphasia therapies,” says Myers. “If we can find out what is really going on in individuals who respond to treatment, then it may help us better understand whether people whose neural structures have been damaged would benefit from certain therapies or if other treatment protocols would be more helpful to them.”
One thing Myers and others are examining is neuroplasticity, or brain plasticity, the brain's capacity to reorganize its neural pathways and synapses in response to behavior, environment, thinking, and emotion, as well as to physical injury.
Take, for instance, an individual trying to learn a second language. Myers, with funding from another grant, is studying how adults who have already acquired a native language map the sounds of words in a non-native language and assign them meaning.
“If you’re learning a second language as an adult, you are probably learning sounds that are pretty similar to sounds you already have in your native language,” Myers says. “The new sounds can be very difficult to acquire, so you have to take your existing sound and then sort the two of them out and divide them up. In a sense, it’s a new way of categorizing the world.”
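One way to picture this sorting problem is as a shift in category boundaries along an acoustic continuum. The sketch below uses voice onset time (VOT), a standard dimension in speech perception research; the specific boundary values and the /b/-versus-/p/ contrast are illustrative assumptions, not figures from Myers's studies.

```python
# Toy illustration of category boundaries along a speech-sound continuum.
# The VOT boundaries below are illustrative assumptions, not data from
# the Language and Brain Lab.
def categorize(vot_ms: float, boundary_ms: float) -> str:
    """Map a voice-onset-time value to a stop-consonant category."""
    return "/b/" if vot_ms < boundary_ms else "/p/"

# A listener whose native language places the boundary near 25 ms and a
# second language that places it near 40 ms will sort the very same
# sounds differently -- the learner has to carve up an old continuum in
# a new way.
for vot in (10, 20, 30, 40, 50):
    native = categorize(vot, boundary_ms=25)
    second = categorize(vot, boundary_ms=40)
    print(f"VOT {vot:2d} ms -> native: {native}, second language: {second}")
```

Sounds that fall between the two boundaries (here, 30 ms) land in different categories depending on which language's boundary the listener applies, which is the sorting-and-dividing problem Myers describes.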
By learning how our brains code and apply meaning to learned words, Myers and her team hope to better understand language processing. Their research could help develop new therapies for people with language disorders.
Previously, Myers traveled to Brown University, where she was on the faculty prior to her appointment at UConn, to use that school’s fMRI scanner to conduct her research. She says having a new fMRI scanner in Storrs strengthens what is already an outstanding cognitive science team in the College of Liberal Arts and Sciences at UConn.
“In combination with the existing academic strengths we already have, this kind of tool is really going to increase our profile,” she says.
UConn’s new fMRI scanner in the Brain Imaging Research Center operates at 3 Tesla and runs the latest scanning software, more advanced than the versions in use at nearby institutions. It lets researchers focus on smaller regions within the brain than were accessible to them before.
“This scanner allows us to get very small voxels, which are similar to pixels in a picture,” Myers says. “The smaller the voxel, the more accurate we can be in pinpointing where the activity is coming from. This scanner allows us to really dial in on very small activation patterns within those voxels.”
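To get a rough sense of what smaller voxels buy, consider the volume of tissue each one averages over. The resolutions below (3 mm versus 2 mm isotropic) are illustrative assumptions; the article does not report the scanner's actual settings.

```python
# Back-of-the-envelope voxel volumes at two isotropic resolutions.
# The edge lengths are illustrative assumptions, not UConn's settings.
for edge_mm in (3.0, 2.0):
    volume_mm3 = edge_mm ** 3  # a cubic voxel's volume
    print(f"{edge_mm:.0f} mm voxel -> averages over {volume_mm3:.0f} mm^3 of tissue")

# Output:
# 3 mm voxel -> averages over 27 mm^3 of tissue
# 2 mm voxel -> averages over 8 mm^3 of tissue
```

Dropping the edge length from 3 mm to 2 mm shrinks each measurement's footprint by roughly 70 percent, which is why finer voxels pinpoint activity more precisely.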
The analytical techniques that Myers and her colleagues use also shed light on how brain regions work together.
“We can look at several regions to see if we can identify particular activation patterns when the mind is mapping one sound as compared to another,” she says. “It becomes not so much location-based as pattern-based, and that is really exciting and new when it comes to understanding coding in the brain.”
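Pattern-based analyses of this kind are commonly implemented as multivoxel pattern analysis (MVPA), in which a classifier is trained to tell conditions apart from the distributed pattern of activity across voxels rather than from any single region's average response. The sketch below is a minimal illustration on simulated data; the trial counts, voxel counts, and sound labels are all assumptions for demonstration, not the lab's actual pipeline.

```python
# Minimal MVPA-style sketch on simulated data. Every number here is an
# illustrative assumption, not the Language and Brain Lab's pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_trials, n_voxels = 80, 50                 # 80 simulated trials, 50 voxels in a region
labels = np.repeat([0, 1], n_trials // 2)   # two speech-sound conditions

# Simulated activation patterns: baseline noise plus a weak,
# condition-specific signal distributed across many voxels.
signal = 0.4 * rng.normal(size=n_voxels)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1] += signal

# If a classifier can tell the conditions apart from the pattern of
# activity, the region carries information about the sound distinction,
# even when overall activation levels are the same in both conditions.
scores = cross_val_score(LinearSVC(), patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")  # chance is ~0.50
```

Above-chance decoding accuracy would indicate that the pattern of activity, rather than its overall level, distinguishes one sound from another, which is the shift from location-based to pattern-based analysis that Myers describes.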