A brain implant boosted by artificial intelligence has allowed a paralyzed woman to translate her thoughts into speech almost instantly, American researchers announced on Monday.
The system, still experimental, uses an implant connecting brain areas to computers and could allow people who have lost the ability to communicate to regain a form of speech.
Decoding Ann's thoughts
The team of researchers, based in California, had previously used a brain-computer interface to decode the thoughts of Ann, a 47-year-old quadriplegic woman, and transcribe them into words.
But that system imposed an eight-second delay between the moment the patient thought of what she wanted to say and the moment a computer-generated artificial voice expressed it.
That was a constraint on conversation for Ann, a former mathematics teacher who lost the ability to speak after a stroke 18 years ago.
“In real time”
The team's new interface, presented in the journal "Nature Neuroscience", reduces the interval between her thoughts and the spoken words to 80 milliseconds. "Our new streaming approach converts her brain signals into her personalized voice in real time, within a second of her intent to speak," said the study's lead author, Gopala Anumanchipalli, of the University of California.
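To illustrate why streaming cuts the delay, here is a minimal, hypothetical Python sketch, not the authors' actual pipeline: the earlier approach decodes only once the whole utterance has been recorded, while a streaming decoder emits audio as soon as each short chunk of brain signal has been processed. The chunk size and the `decode_chunk` function are assumptions for illustration.

```python
CHUNK_MS = 80  # hypothetical chunk size, matching the reported ~80 ms latency

def decode_chunk(neural_chunk):
    """Stand-in for a trained model that maps a short chunk of brain
    signal to a snippet of synthesized speech (assumption)."""
    return f"<audio for {neural_chunk}>"

def batch_decode(neural_signal):
    # Old approach: wait for the entire utterance, then decode it all.
    # The first sound is heard only after the full sentence is processed.
    return [decode_chunk(chunk) for chunk in neural_signal]

def streaming_decode(neural_signal):
    # New approach: decode and emit each chunk as it arrives, so speech
    # can begin within roughly one chunk (~80 ms) of the intent to speak.
    for chunk in neural_signal:
        yield decode_chunk(chunk)

signal = [f"chunk{i}" for i in range(5)]  # stand-in for recorded brain activity
for audio in streaming_decode(signal):
    print(audio)  # in a real system, this snippet would be played immediately
```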
The former teacher's goal is to become a university guidance counselor, he said. "Even though we are still far from achieving that goal for Ann, this milestone should ultimately allow us to considerably improve the quality of life of people suffering from vocal paralysis," he said.
Words spoken in her head
For this research, the scientists showed Ann sentences on a screen, such as "you love me," which she then spoke in her head.
These sentences were then converted into a replica of her voice, built using recordings made before her stroke. The patient "was thrilled to hear her own voice, and felt a sense of embodiment," Anumanchipalli said.
The brain-computer interface intercepts the neural signal "after we have decided what to say, after we have chosen our words and how to move the muscles of the vocal tract," the researcher explained.
The model relies on a deep-learning artificial intelligence method, trained on thousands of sentences that Ann spoke in her head.
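As a rough sketch of that kind of training setup, and purely an assumption since the article does not describe the study's actual architecture, a deep-learning model can be trained to map recorded neural activity to the acoustic features of the target voice. The channel count, the GRU network, and the feature dimensions below are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: electrode channels in, audio feature bands out.
N_CHANNELS, N_MEL = 253, 80

class NeuralToSpeech(nn.Module):
    """Small recurrent network as a stand-in for the real decoder."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, 256, batch_first=True)
        self.out = nn.Linear(256, N_MEL)

    def forward(self, x):            # x: (batch, time, channels)
        h, _ = self.rnn(x)
        return self.out(h)           # (batch, time, audio feature bands)

model = NeuralToSpeech()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fake training pair standing in for one silently spoken sentence:
# brain activity and the matching audio features of the patient's
# pre-injury voice (both random here).
neural = torch.randn(1, 100, N_CHANNELS)
target = torch.randn(1, 100, N_MEL)

for step in range(3):                # thousands of sentences in reality
    pred = model(neural)
    loss = loss_fn(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: loss {loss.item():.4f}")
```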
Vocabulary still limited
The model, which is not error-free, still works with a limited vocabulary of 1,024 words.
This research is still at the stage of a "very early proof of principle," Patrick Degenaar, a professor of neuroprosthetics at Newcastle University in Britain who was not involved in the study, told AFP, while calling it "very cool."
He noted that the method uses an array of electrodes that does not penetrate the brain, unlike the interface developed by Neuralink, the company of American billionaire Elon Musk.
Surgery to place such an electrode array is fairly routine in hospitals that specialize in diagnosing epilepsy, which should make the technique easier to deploy for patients with speech disorders, according to Prof. Degenaar.
With suitable investment in research in this area, Gopala Anumanchipalli estimates that this technology could be deployed within five to ten years.