A 'distributed mosaic' of sound

Parallel circuitry for speech processing opens up new frontier of possibility

The brain is the maestro behind our every movement and thought, and the guide that tells us what we feel and what we know.

Most recently, the brain is telling us we still have a lot to learn.

In fact, a multi-year study led by neuroscientist Dr. Liberty Hamilton and colleagues at UCSF and McGill University indicates that the brain may not turn sounds into words in the way scientists had long assumed.


Historically, sound signals were understood to pass through a fixed sequence of brain regions that transforms acoustic and linguistic cues into the beautiful blend known as speech and language. To put that account to the test, Hamilton and colleagues set out in 2014 to map how sounds become meaningful speech across the entire auditory cortex.

They recorded brain activity from electrodes implanted in patients undergoing surgery for epilepsy. The resulting map of neural activity showed clearly that sound processing does not follow the step-by-step flow through the auditory cortex that had long been the prevailing view. Instead, speech appears to have its own parallel circuitry.

"I think this gives us a clue for which brain areas might be most critical in the speech network, and which may serve other functions," said Hamilton, who holds a joint professorship at the Dell Medical School and Department of Speech, Language, and Hearing Sciences at The University of Texas at Austin.

The finding that speech can still be processed and perceived despite disruption to the primary auditory cortex opens a new frontier of treatment for individuals experiencing brain trauma, suggesting this basic human function need not be sacrificed when that region is damaged.

"Some of the brain areas that are part of this parallel speech network have been implicated in dyslexia and aphasia, so understanding the network better could lead to better treatments of these disorders," Hamilton said.

Hamilton's research focuses on understanding and representing how sound and speech are processed in the brain, using techniques such as neuroimaging, electrophysiology and computational modeling. Her collaborations include a three-year, grant-funded study of how neuroplasticity can be harnessed as a protective and healing mechanism for adolescents undergoing surgery.

She teaches courses about language and the brain to undergraduate and graduate students at UT, and the discovery of this parallel circuitry has inspired her to reimagine how she teaches the auditory pathway: which brain areas process speech and other sounds, and how they are connected.

"The idea of a simple transformation from tone-like sounds to full-blown speech just doesn’t quite make sense," Hamilton said. "It’s also made me wonder more about how these sound pathways interact during speech and other sound processing -- for example, if you hear speech and music at the same time."

Indeed, the brain is telling us to ask more questions.

Two lines of inquiry already underway ask whether these networks for speech and other sounds are present throughout the lifespan, and what happens when someone is speaking rather than just listening to sounds.

Hamilton is addressing both in her research lab along with clinicians at Dell Children’s Medical Center in Austin and Texas Children's Hospital in Houston.

"We hope that this will give us further insight into the development of speech and language, and how brain responses relate to a person’s cognitive skills in speech, reading, and understanding," Hamilton said.

Natalie England
Assistant Director, Communications