In a groundbreaking development at the intersection of artificial intelligence and neuroscience, high-performance brain-computer interfaces (BCIs) are enabling paralyzed, non-verbal individuals to communicate. According to two recent studies published in the journal Nature, these devices, implanted in the brain, can decode neural signals and give voice to people who have lost the ability to speak naturally.
One of the subjects of these studies is Ann Johnson, a woman who suffered a brainstem stroke 18 years ago, at age 30. Although she has since regained some facial movement, the stroke left her unable to speak. After learning about the success of BCI technology in helping a paralyzed man named Pancho communicate, Johnson reached out to the researchers at UCSF.
The team at UCSF embarked on an ambitious project to decode Johnson’s brain signals into audible speech, along with the movements that animate a person’s face during conversation. According to Dr. David Moses, an assistant professor of neurological surgery at the UCSF Weill Institute for Neurosciences, this new research represents a significant leap forward.
The technology used in this study is approximately five times faster than that used with Pancho: Johnson can now communicate at about 78 words per minute, drawing on a vocabulary of more than 1,000 words. And unlike previous studies, in which brain signals were translated only into text, the researchers this time translated the signals directly into audible speech, built up from its component consonant and vowel sounds.
The process involved creating a voice profile from video footage taken before Johnson’s injury. The team implanted a thin rectangle of 253 electrodes onto her brain’s surface, which intercepted the brain signals that would normally control the muscles in her lips, tongue, jaw, larynx, and face. These signals were then sent to a bank of computers via a cable connected to a port on her head.
Over several weeks, the system’s artificial intelligence algorithms were trained to recognize Johnson’s unique brain signals for speech. This involved Johnson repeating phrases from a 1,024-word conversational vocabulary until the computer could recognize the brain activity patterns associated with all of the basic speech sounds.
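To make that training step concrete, here is a minimal, purely illustrative sketch in Python: it fits a simple classifier that maps feature vectors from 253 electrodes to one of a set of speech-sound labels. The actual UCSF system uses far more sophisticated deep-learning decoders trained on real neural recordings; apart from the electrode count reported in the study, everything below (the data, the model choice, the trial counts) is invented for illustration.

```python
# Illustrative sketch only: train a simple classifier to map short windows of
# (synthetic) multi-electrode activity to speech-sound labels. This is NOT the
# UCSF decoder; it only shows the general idea of pattern recognition on
# per-electrode features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

N_ELECTRODES = 253   # electrode count reported in the article
N_SOUNDS = 39        # hypothetical number of basic speech-sound classes
N_TRIALS = 5000      # hypothetical number of labeled training windows

rng = np.random.default_rng(0)

# Synthetic stand-in for neural features: one feature vector per attempted sound.
X = rng.normal(size=(N_TRIALS, N_ELECTRODES))
y = rng.integers(0, N_SOUNDS, size=N_TRIALS)  # speech-sound label per window

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A plain multinomial classifier standing in for the real decoder.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# On random data this hovers near chance; with real neural features the score
# reflects how well brain-activity patterns separate the speech sounds.
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```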
Johnson was also given the opportunity to choose a digital avatar. When she silently attempts to speak, the AI model translates her brain activity into the avatar’s animation, voicing her words in a simulated version of her own voice and producing matching expressions of happiness, sadness, or surprise.
In another study published in Nature on August 23, Stanford researchers implanted electrodes deeper into the brain of Pat Bennett, a 68-year-old former human resources director diagnosed with amyotrophic lateral sclerosis (ALS) in 2012. The disease caused paralysis beginning in her brain stem rather than her spinal cord.
The Stanford team placed four tiny sensors on the surface of Bennett’s brain, in two separate regions involved in speech production. The sensors are attached to fine gold wires that exit through pedestals screwed to the skull and connect by cable to a computer.
State-of-the-art decoding software and an AI algorithm receive and decode electronic information from Bennett’s brain. Over time, the system taught itself to distinguish the distinct brain activity associated with her attempts to formulate each of the 39 phonemes that make up spoken English.
The system feeds its “best guess” at the sequence of phonemes Bennett attempted into a language model, which converts the stream of sounds into the sequence of words it most likely represents. Even when some phonemes are misidentified, the model can often recover the intended words based on what it has learned during training.
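As a rough illustration of that last step, the sketch below (not the Stanford team’s actual model) matches a noisy phoneme sequence against a tiny, made-up pronunciation dictionary using sequence similarity, so that a single misread phoneme can still yield the right word. The real system uses a statistical language model over a much larger vocabulary; the lexicon and phoneme spellings here are invented for illustration.

```python
# Illustrative sketch only: correct a noisy phoneme sequence by matching it
# against a small pronunciation dictionary with sequence similarity.
from difflib import SequenceMatcher

# Hypothetical mini pronunciation dictionary: word -> phoneme sequence.
LEXICON = {
    "hello": ["HH", "AH", "L", "OW"],
    "help":  ["HH", "EH", "L", "P"],
    "water": ["W", "AO", "T", "ER"],
    "thank": ["TH", "AE", "NG", "K"],
    "you":   ["Y", "UW"],
}

def best_word(decoded_phonemes):
    """Return the dictionary word whose pronunciation best matches the decoder output."""
    def similarity(pronunciation):
        return SequenceMatcher(None, decoded_phonemes, pronunciation).ratio()
    return max(LEXICON, key=lambda word: similarity(LEXICON[word]))

# Even with one phoneme decoded wrongly ("B" instead of "P"), the match recovers "help".
print(best_word(["HH", "EH", "L", "B"]))  # -> "help"
```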
These studies represent a significant breakthrough in the use of artificial intelligence and electronics in medicine. By pairing implanted electrodes with machine-learning software, researchers have created BCIs that help paralyzed individuals regain the ability to communicate effectively. That is not only a testament to the power of AI and computing, but also a beacon of hope for people living with paralysis and loss of speech.