With an assist from Alice’s Adventures in Wonderland, researchers are getting a better handle on where and how the brain assembles individual words into full sentences when a person listens to a story being read.
In a new study, an international team of researchers, including a UGA cognitive scientist, reports that a computational model based on the concept of “phrase structure” most closely matches functional magnetic resonance imaging (fMRI) data collected from participants’ brains while they listened to an audiobook of the first chapter of Lewis Carroll’s classic novel.
John Hale, UGA Arch Professor in the department of linguistics, says phrase structure implies a sequence of steps by which the brain organizes multiple words together, wherever this is grammatically possible in a particular language.
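Phrase structure itself can be illustrated with a toy example (this is not the study’s model, which uses recurrent neural network grammars; the sentence and category labels here are purely illustrative). Words are grouped into nested phrases, and each grouping corresponds to one “step” of the kind the model tracks:

```python
# A toy phrase-structure tree for "Alice chased the rabbit",
# encoded as nested tuples: (label, child, child, ...).
# Leaves are plain word strings.
tree = ("S",
        ("NP", ("N", "Alice")),
        ("VP",
         ("V", "chased"),
         ("NP", ("Det", "the"), ("N", "rabbit"))))

def count_nodes(node):
    """Count the grouping steps: one per labeled node in the tree."""
    if isinstance(node, str):      # a bare word contributes no step
        return 0
    _label, *children = node
    return 1 + sum(count_nodes(c) for c in children)

def brackets(node):
    """Render the tree in conventional bracketed notation."""
    if isinstance(node, str):
        return node
    label, *children = node
    return "[" + label + " " + " ".join(brackets(c) for c in children) + "]"

print(count_nodes(tree))   # number of grouping steps in this analysis
print(brackets(tree))      # e.g. [S [NP [N Alice]] [VP ...]]
```

In the study, steps like these, predicted word by word as the story unfolds, were matched against the timing of the fMRI signal.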
Being able to match up the steps taken in the computer model with the actual brain activity measured by fMRI gives the researchers a deeper understanding of what the brain does and where it does it.
“Our computational model offers a step-by-step account of what specific areas of the brain are doing during language processing,” Hale said. “This brings us closer to understanding not only where but how language comprehension actually works in the brain.”
The fMRI data suggest that a specific sentence-organizing activity was occurring in the left posterior temporal lobe, as well as in the inferior frontal gyrus. Both regions have been associated with language since the late 19th century.
The model highlights a cooperative aspect of the brain’s language network. Hale suggests these results show, consistent with a consensus that has been growing in neurolinguistics for many years, that language comprehension is something of a team sport, involving and perhaps even requiring multiple brain areas.
“Brain signals from both frontal and temporal areas are consistent with phrase structure processing. It may be that everyday language comprehension relies upon some kind of cooperation between these areas," he said.
The research is described in the September 2020 issue of the journal Neuropsychologia, in the paper “Localizing Syntactic Predictions Using Recurrent Neural Network Grammars.”
Co-authors are Jonathan Brennan, University of Michigan; Chris Dyer, DeepMind (London, England); Adhiguna Kuncoro, DeepMind; and John Hale, University of Georgia.
The research was supported, in part, by grants from the U.S. National Science Foundation.
Image: Processing steps in the computer model on the left, phrase structure on the right. See Table 1 of Brennan et al.