© GOOD Worldwide Inc. All Rights Reserved.

AI gives paralyzed woman her long-lost voice back after 18 years with brain implant surgery

AI's latest breakthrough gives paralyzed woman her voice and expressions back with brain implant surgery.

Cover Image Source: YouTube | UC San Francisco (UCSF)

Artificial Intelligence (AI) has changed the game for humans in many ways. Things that once seemed impossible can now come to fruition thanks to developments in AI. It has even transformed people's lives, and Ann's story is a beautiful example. Ann's entire life changed in a matter of seconds when she suffered a stroke at the age of 30. The stroke left her with no control over her muscles and body movements. All her senses still functioned, but she lost the ability to speak and express herself, which significantly diminished her quality of life. Now she has a new beacon of hope: researchers at UC San Francisco and UC Berkeley are making rapid progress on a technology that would allow people like Ann to communicate freely, as per UCSF.

Embedded post: UCSF Neurosurgery (@ucsfneurosurgery) on Instagram


Ann is one of the primary subjects working with the researchers at UC San Francisco and UC Berkeley to turn that vision into reality. Their objective is to build a new brain-computer interface that will allow people like Ann to communicate naturally through a digital avatar. It is the first technology able to synthesize either speech or facial expressions directly from brain signals. The effort is advancing quickly: the system can now decode brain signals into text at nearly 80 words per minute, far faster than the 14 words per minute delivered by Ann's current communication device.

Representative Image Source: Pexels | Google DeepMind

Edward Chang, MD, chair of neurological surgery at UCSF, who has worked on the development of this technology for more than a decade, is now waiting for it to receive FDA approval. Chang explained the objective of the work, saying, “Our goal is to restore a full, embodied way of communicating, which is the most natural way for us to talk with others. These advancements bring us much closer to making this a real solution for patients.”


Ann has struggled to form words and express herself through movement since her stroke in 2005. Through therapy, she can breathe, laugh, move her neck fully, wink, and say a few words, but little more. She learned about the research when she came across the story of Pancho, who helped the team translate his brain signals into text as he attempted to speak. Through this technology, Pancho became the first paralyzed person to have his speech-related brain signals converted into full words. After Ann came on board, the team attempted something more ambitious with her and brought facial movements into play as well.


The team surgically implanted a paper-thin rectangle of 253 electrodes onto the surface of Ann's brain, over the areas responsible for speech. The electrodes intercept the brain signals intended for Ann's lips, tongue, jaw, and larynx, as well as her face. Once intercepted, the signals travel to a bank of computers, which decode them for her avatar. Ann worked with the team for weeks to train the system to recognize her unique brain signals for speech; it was essential that the system learn the brain activity patterns associated with the basic sounds of speech.

Representative Image Source: Pexels | Vidal Balielo Jr.

The team focused on both accuracy and speed. Sean Metzger, who developed the text decoder with Alex Silva (both graduate students in the joint Bioengineering Program at UC Berkeley and UCSF), explained, “The accuracy, speed, and vocabulary are crucial. It’s what gives Ann the potential, in time, to communicate almost as fast as we do, and to have much more naturalistic and normal conversations.”


Ann's avatar was created with software developed by Speech Graphics, a company that makes AI-driven facial animation. The software takes Ann's brain signals and converts them into movements and expressions on the avatar's face. The team also worked to make the avatar's voice resemble Ann's real voice as closely as possible. In this way, the device takes signals from the brain, and AI converts them into both the words the avatar speaks and the gestures it makes.


Ann is grateful for the impact the system is already having on her life. She says, “When I was at the rehab hospital, the speech therapist didn’t know what to do with me. Being a part of this study has given me a sense of purpose, I feel like I am contributing to society. It feels like I have a job again. It’s amazing I have lived this long; this study has allowed me to really live while I’m still alive!” She hopes the technology will one day let her daughter hear the voice she had years ago, and bring her family some relief after years of battling such a devastating condition.
