The Difference Between Laughing and Crying


Undergraduate April Garbuz has learned from mentor Heather Read that it takes all kinds of scientists and engineers working together to understand how our brain functions. (Christine Buckley/UConn Photo)

By Cindy Wolfe Boynton & Christine Buckley, College of Liberal Arts & Sciences, originally published on UConn Today

When we hear the cry of a six-month-old baby, our ears promptly perk up. We look around, agitated, instinctively knowing there’s an infant in distress nearby.

But how did we know the baby was upset? How did our brain decide that the cry wasn’t actually a shriek of happiness?

If you’re a person with an auditory processing or neurodevelopmental disorder, such as autism, you might not be able to tell the difference, says Heather Read, associate professor of psychological sciences and biomedical engineering. A laugh or a sob could seem pretty similar to you.

That’s because these seemingly obvious nuances of human sound are actually an incredible feat for our brains to process. Read’s research aims to understand how different parts of our brain work together to understand – and respond to – tiny differences in the tone and rhythm of natural sounds.

“What we’re working to do is understand and map how the brain’s auditory circuits react to different vocal tones, shapes, pitches, and rhythms,” says Read. “It could lay the foundation to create therapies or computerized devices that can make the differentiation for those who can’t do it themselves.”

Funded by a four-year, $680,000 National Science Foundation grant to Read and co-PI Monty Escabi, associate professor of electrical and computer engineering, the research involves creating sophisticated computer-generated stimuli that mimic natural communication sounds.

These sounds are then played back to rats, which are trained to tell two sounds apart to receive a reward. Because rats are also mammals, and their pups also use vocalizations to get attention, they can give us great insight into the human brain, notes Read.

“Crying out to get attention is a very common communication behavior used by most mammals, so one of the things we’re testing is whether mammals’ brains have the ability to discriminate between actual cries and computer-generated ones,” she says.

Creating these artificial stimuli has required Read’s graduate students to spend the past year analyzing thousands of recordings to pin down the exact acoustic elements of a natural cry, and to decide which of them should be built into the synthetic ones.

“So much is involved,” Read says. “For example, does tone always matter? How much sound length variation can occur? Is a synthetic fast-rhythm vocalization easier to detect than a synthetic slow-rhythm one?”
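
The article does not describe the lab’s actual stimulus-generation code, but a minimal sketch of the idea, in which the pitch, length, and rhythm of a vocalization-like sound can each be varied independently, might look like the following (the function name and parameter values here are illustrative assumptions, not those used in the study):

```python
import numpy as np

def synthetic_vocalization(pitch_hz=450.0, duration_s=0.8,
                           rhythm_hz=8.0, sample_rate=44100):
    """Illustrative synthetic 'cry': a tone whose loudness pulses at a set rhythm.

    pitch_hz   -- carrier tone (illustrative value, roughly in the range of an infant cry)
    duration_s -- total length of the vocalization
    rhythm_hz  -- rate of the slow amplitude modulation, i.e. the 'rhythm'
    """
    t = np.arange(0, duration_s, 1.0 / sample_rate)
    carrier = np.sin(2 * np.pi * pitch_hz * t)                 # the tone
    rhythm = 0.5 * (1.0 + np.sin(2 * np.pi * rhythm_hz * t))   # slow pulsing envelope
    return carrier * rhythm

# Two stimuli that differ only in rhythm, e.g. for a fast-versus-slow discrimination task
fast_stimulus = synthetic_vocalization(rhythm_hz=12.0)
slow_stimulus = synthetic_vocalization(rhythm_hz=4.0)
```

Varying one parameter at a time is what lets the researchers ask questions like the ones Read lists, for example whether a fast-rhythm vocalization is easier to detect than a slow-rhythm one.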

April Garbuz, a junior majoring in physiology and neurobiology, began working in Read’s laboratory in 2014. She works on a behavioral experiment that recently showed that the rats can differentiate fast temporal cues in sounds as well as, or better than, people can, making them a good proxy for studying human hearing.

“We find that abrupt onsets are an important factor,” says Read, because they give certain sounds, like a baby’s cry, a particular set of “acoustic edges” that matter to members of the same species. It’s what makes the pulse quicken in people, especially mothers, when they hear a baby’s cry.
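
To make the idea of an abrupt onset concrete, the sketch below (an assumption for illustration, not code from the study) shapes the same tone with a very short versus a very long rise time; the short ramp is what creates the sharp acoustic edge Read describes.

```python
import numpy as np

def apply_onset(sound, rise_time_s, sample_rate=44100):
    """Shape the start of a sound with a linear rising ramp.

    A very short rise_time_s produces an abrupt onset, a sharp 'acoustic edge';
    a long rise_time_s produces a gradual, softened start.
    """
    n_rise = max(1, int(rise_time_s * sample_rate))
    ramp = np.ones_like(sound)
    ramp[:n_rise] = np.linspace(0.0, 1.0, n_rise)
    return sound * ramp

tone = np.sin(2 * np.pi * 450.0 * np.arange(0, 0.5, 1.0 / 44100))
abrupt = apply_onset(tone, rise_time_s=0.005)   # 5 ms rise: sharp edge
gradual = apply_onset(tone, rise_time_s=0.200)  # 200 ms rise: soft edge
```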

Now Garbuz is an undergraduate team leader, supervising the work of other undergraduates. She plans to continue research in Read’s laboratory and apply to Ph.D. programs in neuroscience.

“It’s been just an amazing experience, and I want to continue to work in the neuroscience field after graduating from UConn,” she says.

In other areas of their research, Read and Escabi are using wireless technology to monitor brain activity while animals are actively making sound discriminations. They are also creating mathematical models to determine which aspects of neural activity patterns are critical for discriminating between the shape and rhythm of not just communication sounds, but many kinds of sounds.
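
The article doesn’t spell out what those models look like, but the underlying question, whether the timing pattern of a neural response carries information that the overall amount of activity does not, can be illustrated with a toy decoder (the simulated responses and the nearest-mean classifier below are assumptions made for illustration, not the team’s methods):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trials(rhythm_hz, n_trials=50, n_bins=40):
    """Toy neural responses: spike counts per time bin that follow the
    stimulus rhythm. Entirely simulated, only to illustrate decoding."""
    t = np.linspace(0.0, 1.0, n_bins)
    rate = 5.0 + 4.0 * np.sin(2 * np.pi * rhythm_hz * t)  # rhythm-locked firing rate
    return rng.poisson(rate, size=(n_trials, n_bins))

fast_trials = simulate_trials(rhythm_hz=12.0)
slow_trials = simulate_trials(rhythm_hz=4.0)

def decoding_accuracy(feature):
    """Nearest-class-mean decoding using a chosen feature of the response.
    No train/test split: this is a toy, not a real analysis."""
    f_fast, f_slow = feature(fast_trials), feature(slow_trials)
    mean_fast, mean_slow = f_fast.mean(axis=0), f_slow.mean(axis=0)
    correct = 0
    for trials, label in [(f_fast, "fast"), (f_slow, "slow")]:
        d_fast = np.linalg.norm(trials - mean_fast, axis=-1)
        d_slow = np.linalg.norm(trials - mean_slow, axis=-1)
        predicted = np.where(d_fast < d_slow, "fast", "slow")
        correct += np.sum(predicted == label)
    return correct / (len(f_fast) + len(f_slow))

# Total spike count throws the timing away; the full time pattern keeps it.
print("count-only decoding:  ", decoding_accuracy(lambda x: x.sum(axis=1, keepdims=True)))
print("time-pattern decoding:", decoding_accuracy(lambda x: x))
```

In this simulated example, decoding from total spike counts sits near chance while decoding from the full time pattern is close to perfect; asking which features of the neural response carry the discriminative information is the flavor of question such models address.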

The work involves neuroscientists, engineers, psychologists, and computer scientists, which Read and Garbuz say makes for a unique and robust working environment.

“It takes all kinds of scientists to do these kinds of studies,” adds Read. “It makes for a really cool environment not just for research, but for our students to learn.”

Psychological sciences department head Jim Green agrees, saying that this successful collaboration among faculty from the College of Liberal Arts and Sciences, the School of Engineering, and the UConn Health Department of Neuroscience shows how bringing disciplines together leads to stronger research programs.

“Complex problems often cannot be solved by a single investigator, and brain science is a truly multidisciplinary effort,” Green says. “UConn’s current brain studies have faculty from at least seven different departments, in four colleges, working together. It’s incredibly exciting.”