Researchers have developed a device that can translate thoughts about speech into audible words in real time.
Although it remains experimental, they hope the brain-computer interface could one day help people who have lost the ability to speak communicate again.
In a recent study, researchers tested the device on a 47-year-old woman with quadriplegia who had been unable to speak for 18 years after a stroke. Doctors implanted the device during surgery as part of an ongoing clinical trial.
"It transforms her intention to talk into smooth sentences," stated Gopala Anumanchipalli, who was one of the authors of the study published on Monday in the scientific journal Nature Neuroscience.
Other speech-focused brain-computer interfaces, or BCIs, typically have a slight delay between when a person thinks of a sentence and when a computer vocalizes it. Researchers say those delays can disrupt the natural flow of conversation, potentially leading to miscommunication and frustration.
"This represents quite a significant advancement in our area," commented Jonathan Brumberg from the Speech and Applied Neuroscience Laboratory at the University of Kansas, who did not participate in the research.
A team in California used electrodes to record the woman's brain activity as she silently spoke sentences in her head.
The scientists used a synthesizer they had built with recordings of her voice from before her injury to create the speech sounds she would have made, and they trained an AI model to translate her neural activity into units of sound.
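As a rough illustration of that two-step idea, the sketch below maps windows of neural features to discrete sound units and hands them to a stand-in synthesizer. The channel count, the unit inventory and the random linear "decoder" are hypothetical placeholders, not the model described in the study.

```python
# Minimal sketch (not the study's actual model): map windows of neural
# features to discrete sound units, then hand them to a stand-in for a
# synthesizer built from the participant's pre-injury voice.
import numpy as np

N_CHANNELS = 253        # hypothetical number of electrode channels
N_SOUND_UNITS = 40      # hypothetical inventory of phoneme-like units

rng = np.random.default_rng(0)
decoder_weights = rng.standard_normal((N_CHANNELS, N_SOUND_UNITS)) * 0.1

def decode_window(neural_window: np.ndarray) -> int:
    """Map one window of neural features to the most likely sound unit.

    In the real system a trained AI model plays this role; here a random
    linear projection stands in for it.
    """
    features = neural_window.mean(axis=0)   # average over time samples
    scores = features @ decoder_weights     # one score per sound unit
    return int(np.argmax(scores))

def synthesize(unit_ids: list[int]) -> str:
    """Placeholder for rendering sound units in the participant's own voice."""
    return " ".join(f"<unit {u}>" for u in unit_ids)

# Fake neural data: 10 windows of (time samples x channels)
windows = [rng.standard_normal((16, N_CHANNELS)) for _ in range(10)]
units = [decode_window(w) for w in windows]
print(synthesize(units))
```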
It works similarly to existing systems that transcribe meetings or phone calls in real time, said Anumanchipalli, of the University of California, Berkeley.
The implant itself sits over the speech centre of the brain so that it is listening in, and those signals are translated into the pieces of speech that make up sentences.
It was a "streaming approach", Anumanchipalli said, with each 80-millisecond chunk of speech - about half a syllable - sent into a recorder.
"It's not waiting for a sentence to finish," Anumanchipalli said. "It's processing it on the fly."
Decoding speech that quickly has the potential to keep up with the fast pace of natural speech, Brumberg said. The use of the woman's own voice samples, he added, "would be a significant advance in the naturalness of speech".
Although the project was funded in part by the National Institutes of Health, Anumanchipalli said the work had not been affected by recent cuts to research funding.
More research is needed before the technology can be widely used, but with sustained funding, he said, it could be available to patients within the next decade.
This article originally appeared on the South China Morning Post (www.scmp.com), the leading news source for reporting on China and Asia.
Copyright © 2025. South China Morning Post Publishers Ltd. All rights reserved.