Artificial Intelligence Restores Speech for a Stroke Patient

A fascinating article by Robin Marks and Laura Kurtzman describes how technology helped stroke survivor Ann regain her speech. It is the first reported case of speech and facial expressions being decoded from brain signals and conveyed through an avatar, using a voice that sounds like the patient's own.

On May 22, 2023, Ann attempted to talk without using her voice. The communication system translated her brain signals into synthesized speech and the facial expressions of an avatar. She is currently working with researchers at UC San Francisco and UC Berkeley to develop new brain-computer technologies that may eventually enable people like her to communicate more fluidly through a lifelike virtual representation. This combination of computer engineering and neuroscience is one of many innovations the world is about to witness. To capture her brain activity, a paper-thin rectangle containing 253 electrodes was implanted on the surface of Ann's brain, over regions the scientists had previously determined were crucial for speech.

Ann suffered a brainstem stroke when she was 30 years old, leaving her severely paralyzed. Years of physical therapy were required before she could move her facial muscles enough to laugh or cry. Even so, the muscles that would have produced her voice remained still. The implanted electrodes intercept the brain signals that, had it not been for the stroke, would have traveled to the muscles of Ann's lips, tongue, jaw, larynx, and face. The system can decode these signals into text at a rate of around 80 words per minute, far faster than the 14 words per minute her existing communication device can manage. To make the synthesized voice sound like Ann's before the injury, the team personalized their speech synthesis algorithm using a recording of Ann speaking at her wedding.
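To give a rough sense of what "decoding brain signals into text" means, the toy sketch below maps invented electrode readings to phoneme labels by nearest-template matching. This is purely illustrative: the actual UCSF/UC Berkeley system uses deep neural networks trained on recordings of Ann's attempted speech, and every number, template, and function name here is a made-up assumption for readability (3 electrodes instead of 253).

```python
# Illustrative sketch only: not the researchers' actual method.
# A toy "decoder" that assigns each frame of electrode activity to the
# closest phoneme template, then joins the labels into a string.
import math

# Hypothetical per-phoneme templates of average electrode activity
# (3 electrodes instead of 253, values invented for this example).
TEMPLATES = {
    "HH": [0.9, 0.1, 0.2],
    "AY": [0.2, 0.8, 0.3],
    " ":  [0.1, 0.1, 0.1],
}

def decode_frame(frame):
    """Return the phoneme whose template is closest to this frame."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda p: dist(TEMPLATES[p], frame))

def decode(frames):
    """Decode a sequence of electrode frames into a phoneme string."""
    return "".join(decode_frame(f) for f in frames)

# Two frames resembling the "HH" and "AY" templates decode to "HHAY".
print(decode([[0.85, 0.15, 0.2], [0.25, 0.75, 0.3]]))  # -> HHAY
```

A real decoder works on high-dimensional neural features and uses a language model to turn phoneme probabilities into fluent text, which is how rates near 80 words per minute become possible.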

A crucial next step for the researchers is to create a wireless version of the brain-computer interface (BCI) that would eliminate Ann's need for a physical connection to the system. (AK)