How the mysteries of language are being mapped

When it comes to the nitty-gritty of how our brains construct and comprehend language, scientists are still very much in the dark.

Estibaliz Blanco, research assistant at the neuroscience of language laboratory, NYUAD, talks about brain mapping. Ravindranath K / The National

“The Babel fish is small, yellow, leech-like, and probably the oddest thing in the universe. It feeds on brain-wave energy … excreting telepathically a matrix formed from the conscious frequencies and nerve signals picked up from the speech centres of the brain, the practical upshot of which is that if you stick one in your ear, you can instantly understand anything said to you in any form of language.”

Handy. But Douglas Adams’ Babel fish, like many other forms of automatic, real-time translation, is very much the stuff of science fiction.

We’re getting there. Automatic translation is here, and getting better all the time. Google Translate is a useful tool, as – sometimes – is Apple’s personal assistant Siri.

Last month, Microsoft unveiled a work-in-progress, real-time Skype translator.

But there remains an altogether thornier problem: what’s going on with those nerve signals and speech centres? And when it comes to the nitty-gritty of how our brains construct and comprehend language, scientists are still very much in the dark.

“The defining characteristic of human language is that we can take these little pieces, words and parts of words, and build them into larger meaningful constituents,” says Dr Liina Pylkkanen, an associate professor of linguistics and psychology at New York University (NYU).

“However, the process of how the brain accomplishes that is something that we really don’t understand much at all at this point.”

Dr Pylkkanen and her colleagues at NYU’s neuroscience of language laboratory are trying to solve this puzzle by using a technique known as magnetoencephalography (Meg), with much of the cutting-edge work being conducted in Abu Dhabi.

The Meg machine in Abu Dhabi – the first of its kind in the Arabian Gulf – is a brain scanner that can detect, with extreme sensitivity and at millisecond resolution, the minute magnetic fluctuations produced by neural activity in the brain.

Sensors positioned at more than 200 points are housed in a thermally insulated casing that covers the whole head, allowing the scanner to detect not only the level of activity but also the region of the brain that is active during different cognitive processes.

I recently visited NYU’s Centre for Science and Engineering in Mussaffah to participate in one of the group’s experiments, which focuses on how our brains process compound numbers, and whether such processes are similar to the way our brains form simple phrases.

After having sensors taped to my forehead, I lay down with my head in a non-invasive brain scanner (that is, one that thankfully did not require drilling holes in my skull), within a booth that screened out all other magnetic fields.

A screen above me flashed various combinations of words and numbers, which I read aloud as the sensors recorded what was going on inside my brain.

“Whenever you’re thinking of something, the neurons in your brain create a very small electric current, and we measure the magnetic field that is created around that current,” says Estibaliz Blanco, a researcher at the laboratory.

“Depending on what brain process is going on, whether you’re, for example, adding two numbers, building a complex sentence or trying to switch between two tasks, the brain process is different and the part of your brain used for it varies as well.”

The data generated from the experiment are still being processed. The researchers were, however, happy to confirm that they did detect a brain within my skull.
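The researchers did not describe their analysis pipeline, but a common way to handle recordings of this kind is to filter the continuous sensor data, cut it into short epochs around each stimulus and average those epochs for each condition, millisecond by millisecond. The sketch below illustrates that general workflow using the open-source MNE-Python library; the file name, stimulus channel and event codes are hypothetical placeholders rather than details of the laboratory’s actual methods.

```python
# A minimal sketch of how multi-sensor MEG recordings are often processed,
# using the open-source MNE-Python library. The file name, stimulus channel
# and event codes below are illustrative placeholders only.
import mne

# Load a raw MEG recording (assumed here to be stored in FIF format).
raw = mne.io.read_raw_fif("meg_recording_raw.fif", preload=True)

# Band-pass filter to remove slow drifts and high-frequency noise.
raw.filter(l_freq=0.1, h_freq=40.0)

# Find the stimulus triggers recorded alongside the sensor data
# ("STI 014" is a conventional trigger-channel name, assumed here).
events = mne.find_events(raw, stim_channel="STI 014")

# Cut the continuous recording into short epochs around each stimulus;
# the codes 1 = "simple phrase" and 2 = "compound number" are hypothetical.
event_id = {"phrase": 1, "compound_number": 2}
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.6, baseline=(None, 0))

# Average the epochs for each condition to see the evoked magnetic field
# across the 200-plus sensors, millisecond by millisecond.
evoked_phrase = epochs["phrase"].average()
evoked_number = epochs["compound_number"].average()
evoked_phrase.plot()  # time course of the magnetic field at every sensor
```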

In addition to experiments involving English-language speakers, the lab has expanded into similar research involving native Arabic speakers, with students from NYU and UAE University in Al Ain.

Much of this research – the first of its kind in the Middle East – has been conducted by Meera Al Kaabi, an Emirati student who is pursuing her doctorate in linguistics at NYU in the United States.

“The morphological structure of Arabic is very interesting, as it’s the three-letter root that holds the meaning of the word, and the vowels and pattern around that root that indicate whether it’s a verb or a noun or an adjective and so on,” says Laura Gwilliams, a researcher at the laboratory.

“Theories in Arabic suggest that in terms of how words are structured in your mind you only hold the representation of the consonants, and the vowels are used later to know how to interpret the consonants.”
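As a toy illustration of that root-and-pattern structure, the sketch below slots the consonantal root k-t-b, which is broadly associated with writing, into a handful of vowel templates to produce different words. The templates and glosses are simplified for illustration; they are not the laboratory’s stimuli or a model of Arabic morphology.

```python
# A toy illustration of Arabic root-and-pattern morphology: the consonants
# carry the core meaning, while the vowel pattern signals the word's form.
# Patterns and glosses are simplified for illustration only.

def apply_pattern(root, pattern):
    """Slot the root consonants (C1, C2, C3) into a vowel-pattern template."""
    c1, c2, c3 = root
    return pattern.format(C1=c1, C2=c2, C3=c3)

root_ktb = ("k", "t", "b")  # root broadly associated with writing

patterns = {
    "{C1}a{C2}a{C3}a": "kataba - 'he wrote' (verb)",
    "{C1}i{C2}aa{C3}": "kitaab - 'book' (noun)",
    "{C1}aa{C2}i{C3}": "kaatib - 'writer' (noun)",
    "ma{C1}{C2}a{C3}": "maktab - 'office, desk' (noun of place)",
}

for template, gloss in patterns.items():
    print(f"{apply_pattern(root_ktb, template):>8}  ->  {gloss}")
```

On the view described in the quote above, the mental lexicon stores something like the consonant tuple, with the vowel pattern used later to work out which of these forms is intended.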

The laboratory has also conducted research on how the brains of subjects who are bilingual in Arabic and English process the two languages, with the results due to be published at a conference shortly.

Much of the group’s research to date suggests that the neurological processes for digesting Arabic and English are broadly similar.

“All languages involve the same kind of decomposition of words into roots and frameworks,” says Dr Alec Marantz, a professor of linguistics and psychology at NYU. “The work in Arabic is very important because on the surface it looks like an extremely different system of morphology and word structure.

“What we’re trying to show is that it looks different, but if you look at it in a more general way it’s actually pretty similar.”

The group is hoping to do similar work involving the south Indian Dravidian languages – which include Tamil and Malayalam – and Austronesian languages such as Tagalog, Malay and many Indonesian languages.

While Dr Pylkkanen and Dr Marantz have been engaged in such research for the past 10 years and have made some interesting findings, NYU’s work remains very much at the basic-science stage.

However, the team hopes that its research will have valuable clinical applications, such as providing insights into the brain functions of individuals with autism and other developmental or acquired language impairments, or of people who have suffered brain damage from car accidents or diabetes.

“In the long run, the more practical applications of the research have to do with understanding situations where there is some kind of language deficit in a subject, either via brain damage, or cases where linguistic development is lost to a certain extent,” says Dr Pylkkanen.

“In such cases we can’t even begin to understand such deficits and how to treat them if we don’t have some kind of a rudimentary characterisation of what happens within an intact brain.”

jeverington@thenational.ae