
Why you can 'hear' words inside your head - BBC News

When we have conscious thoughts, we can often hear a voice inside our heads – now new research is revealing why.

Why do we include the sounds of words in our thoughts when we think without speaking? Are they just an illusion induced by our memory of overt speech?

These questions have long pointed to a mystery – one relevant to our endeavour to identify impossible languages, those that cannot take root in the human brain. The mystery is equally relevant from a methodological perspective, since addressing it requires radically changing our approach to the relationship between language and the brain: shifting from identifying (by means of neuroimaging techniques) where neurons are firing to identifying what neurons are firing when we engage in linguistic tasks.

Consider this simple question: what is language made of? Sure, language consists of words and rules of combination, but from the point of view of physics, it exists in two different physical spaces – outside our brain and inside it. When it lives outside our brain, it consists of mechanical, acoustic waves of compressed and rarefied molecules of air – ie sound. When it exists inside our brain, it consists of electric waves that are the channel of communication for neurons. Waves in either case. This is the concrete stuff of which language is physically made.

There is one obvious connection between sound waves and the brain. Sound is what allows the contents of one brain, as expressed in words, to enter another brain. There are, of course, other ways for two brains to exchange linguistic information – through the eyes, via sign language, or through tactile systems such as Braille or the Tadoma Method, for example.

Sound enters us through our ears, travelling across the tympanic membrane, then the three tiniest bones in our body, known as the ossicles, and finally the organ of Corti in the cochlea – a snail-shaped structure that plays a crucial role in this process. This complex system translates the acoustic signal's mechanical vibrations into electric impulses in a very sophisticated way, decomposing the complex sound waves into the basic frequencies that characterise them. The different frequencies are then mapped onto dedicated slots in the primary auditory cortex, at which point the sound waves are replaced by electric waves.
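Computationally, this decomposition resembles a Fourier transform. Purely as an illustration – this sketch is not part of the research described here, and the tones, amplitudes and sample rate in it are arbitrary choices – the following Python snippet builds a toy "complex sound" from three superimposed pure tones and recovers the component frequencies, much as the organ of Corti separates them mechanically along the cochlea:

```python
# Illustrative sketch only: a discrete Fourier transform as a rough
# computational analogue of the cochlea's frequency decomposition.
# The tones, amplitudes and sample rate below are arbitrary choices.
import numpy as np

sample_rate = 16_000                     # samples per second
t = np.arange(0, 0.5, 1 / sample_rate)  # half a second of signal

# A toy "complex sound": three pure tones superimposed.
signal = (1.00 * np.sin(2 * np.pi * 220 * t)
          + 0.50 * np.sin(2 * np.pi * 440 * t)
          + 0.25 * np.sin(2 * np.pi * 880 * t))

# Decompose the compound wave into its basic frequencies.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
magnitudes = np.abs(spectrum) / (len(signal) / 2)  # normalised amplitude

# The dominant components -- the kind of frequency "slots" that a
# tonotopic map routes to dedicated regions of the auditory cortex.
print("Component frequencies (Hz):", freqs[magnitudes > 0.1])
```

Running the sketch reports the three component frequencies (220, 440 and 880 Hz), recovered from the single compound waveform.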

Not all linguistic communication relies on sound waves – Braille uses the sense of touch (Credit: Alamy)

At least since the pioneering work of Nobel Prize-winning electrophysiologist Lord Edgar Adrian, we have known that no physical signal is ever completely lost when it reaches the brain. What we've more recently discovered is surprising: apparently electric waves preserve the shape of their corresponding sound waves in non-acoustic areas of the brain, such as Broca's area, the part of the brain responsible for speech production.

These findings shed important light on the relationship between sound waves and electric waves in the brain, but almost all of them rely on one aspect of the neuropsychological processes related to language: namely, the decoding of emitted sound. Yet we know that language can also be present in the absence of sound, when we read (just as most of you are probably experiencing at this very moment) or when we use words while thinking – in technical terms, when we engage in endophasic activity.

This simple fact immediately raises the following crucial question: what happens to the electric waves in our brain when we generate a linguistic expression without emitting any sound?

In 2014, my colleagues and I set out in search of answers. We compared the shape of the electric waves characterising activity in Broca's area with the shape of the sound waves, not just when speakers were hearing sound but also when they were reading linguistic expressions in absolute silence – that is, when the input was not acoustic at all.

Analysing inner speech is not a novel idea in neuropsychology, as we know from sources ranging from the Soviet psychologist Lev Vygotsky’s speculations on psychological development to analyses based on neuroimaging. But the technique we used to explore this phenomenon was unusual and illuminating, and the results were unexpected, to say the least.

In our experiment, data were collected by means of so-called awake surgery. This technique offers the possibility of stimulating and analysing the electrophysiological cortical activity of patients who have been awakened after a portion of their skullcap was removed. The invasive nature of this technique, the fragility of the organ involved, and the cooperation of patients in an extremely delicate emotional state make this research very difficult for obvious psychological, technical, and ethical reasons.

The surgeon who cuts the cerebral cortex to remove a tumour, for example, cannot know in advance (except in specific cases) whether cutting the cerebral tissue will interrupt a neuronal network and thus impair or destroy a cognitive, motor, or perceptual capacity that is supported or conveyed by that network. To minimise any potential damage from the surgery, then, once the patient has been anaesthetised and a portion of the skullcap has been removed to access the surgical site, the surgeon wakes the patient for a short transitional period of about 10 to 20 minutes and asks them to perform simple tasks that should engage the exposed cortex.

As they perform them, the surgeon stimulates the patient’s cortex by means of small electrodes, which causes no pain since there are no pain receptors in the brain. If the electrical stimulation in a certain portion of the cortex interferes with the performance of a given task, the surgeon knows that cutting that fragment of cortex could permanently damage the patient and can evaluate whether an alternative surgical site is available.

Stimulating the brain during "awake surgery" has allowed surgeons to determine the function of different networks of neurons (Credit: Alamy)

The patient gains an invaluable advantage from these exercises, and one that is practically impossible to obtain through any other technique. At the same time, this technique provides us with a unique opportunity to investigate brain functioning and obtain extremely important data.

First, the surgeon can establish the position where a crucial node of a neuronal network associated with a specific task is located in any given patient, which neutralises one of the major problems related to neuroimaging techniques – the fact that subjects may vary considerably as to precisely where a certain function is carried out in the brain. The surgeon can also record with progressive precision neuronal electrical activity down to the level of a single neuron – although this level is only reached in extremely rare cases with current technology.

This technique has increasingly been used for pathologies other than focal lesions – for example, cases of pharmacologically intractable epilepsy. In such cases, the surgeon can also implant temporary electrodes that, once the skullcap has been closed, provide continuous information for a lengthy period in an everyday environment, information not limited to the confines of the operating room. This measuring method offers a further step forward in comprehending the neurophysiological processes taking place in the brain: it provides a more precise and defined level of spatial resolution than neuroimaging techniques are capable of, along with direct measures of electrical activity not available through other, indirect means of measurement.

Let us now turn back to our experiment. Sixteen patients were asked to read linguistic expressions aloud, either isolated words or full sentences. We then compared the shape of the acoustic waves with the shape of the electric waves in Broca's area and observed a correlation (which was not unexpected).

The second step was crucial. We asked the patients to read the linguistic expressions again, this time without emitting any sound – they just read them in their mind. As before, we compared the shape of the acoustic wave with the shape of the electric wave in Broca's area. I should note that a signal was indeed entering the brain, but it was not a sound signal – instead, it was the light signal carried by electromagnetic waves or, to put it more simply, a signal conveyed by the alphabetical letters we use to represent words (ie writing), but definitely not an acoustic wave.

Remarkably, we found that the shape of the electric waves recorded in a non-acoustic area of the brain when linguistic expressions are being read silently preserves the same structure as that of the mechanical sound waves of air that would have been produced if those words had actually been uttered. The two families of waves where language lives physically are thus closely related – so closely, in fact, that the two overlap independently of the presence of sound.
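To make the idea of comparing wave "shapes" concrete, here is a minimal Python sketch using entirely synthetic stand-ins – not the study's data or analysis method: a toy sound-wave envelope, a noisy delayed copy standing in for the electric wave recorded in the cortex, and a normalised cross-correlation whose peak approaches 1 when the two shapes closely match:

```python
# Hedged sketch with entirely synthetic signals -- not the study's data
# or analysis pipeline. It shows one plausible way to quantify how
# closely two waveforms share a shape: normalised cross-correlation.
import numpy as np

rng = np.random.default_rng(seed=0)
t = np.linspace(0, 2, 2000)  # two seconds, arbitrary sampling

# Stand-in "sound-wave envelope" of an uttered sentence.
sound_envelope = np.abs(np.sin(2 * np.pi * 1.5 * t)) * np.exp(-0.3 * t)

# Stand-in "electric wave": the same shape, delayed by a hypothetical
# conduction lag and buried in neural noise.
delay = 50  # samples
electric = np.roll(sound_envelope, delay) + 0.2 * rng.standard_normal(t.size)

def peak_normalised_xcorr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak of the normalised cross-correlation between two signals."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.correlate(a, b, mode="full").max() / a.size)

print(f"Peak correlation: {peak_normalised_xcorr(sound_envelope, electric):.2f}")
```

In the study itself, of course, the electric signal was measured directly from the exposed cortex; the sketch only illustrates the kind of shape-similarity measure such a comparison involves.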

The acoustic information is not implanted later, when a person needs to communicate with someone else; it is part of the code from the beginning, or at least before the production of sound takes place. This also rules out the possibility that the sensation of exploiting sound representation while reading or thinking with words is just an illusory artifact based on a memory of overt speech.

The discovery that these two independent families of waves of which language is physically made correlate strictly with each other – even in non-acoustic areas, and whether the linguistic structures are actually uttered or remain within the mind of an individual – indicates that sound plays a much more central role in language processing than was previously thought.

It is as if this unexpected correlation provided us with the missing piece of a “Rosetta stone” in which two known codes – the sound waves and the electric waves generated by sound – could be exploited to decipher a third one, the electric code generated in the absence of sound, which in turn could hopefully lead to the discovery of the “fingerprint” of human language.

The brain waves associated with processing language bear more than a passing resemblance to sound waves (Credit: Getty Images)

Among the questions this discovery raises: what kind of electrical activity is elaborated in a language network (one that includes Broca's area) by people who have never been able to hear any sound from birth? Can we exploit electro-cortical information to access the linguistic thinking of aphasic patients whose articulatory apparatus alone has been damaged, and hear them speak again, albeit through an artificial device? Can we get a better understanding of language used in dreaming, or in patients who are in a minimally conscious state? Can we consider severe stuttering a form of miscoordination between different sound representations in different networks, and hope to intervene and cure it? And could these discoveries lead to an unethical use of devices to extract linguistic thought from people who do not want to communicate it?

The very fact that the majority of human communication takes place via waves may be no coincidence – after all, waves constitute the purest system of communication, since they transfer information from one entity to another without changing the structure or composition of either. They travel through us and leave us intact, yet they allow us to interpret the message borne by their momentary vibrations, provided we have the key to decode it. It is not at all accidental that the term information derives from the Latin root forma (shape) – to inform is to share a shape.

In his Philosophical Investigations, Ludwig Wittgenstein asked: “Is it conceivable that people should never speak an audible language, but should nevertheless talk to themselves inwardly, in the imagination?” The results of this experiment unexpectedly revive this prophetic question under a new light, and more importantly, they suggest new questions altogether.

* This article originally appeared in The MIT Press Reader, and is republished with permission. Andrea Moro is Professor of General Linguistics at the University School for Advanced Study (IUSS) in Pavia, Italy. He is the author of several books, including “The Boundaries of Babel”, “A Brief History of the Verb To Be,” and “Impossible Languages,” from which this article is adapted.
