Scientists at the University of California, San Francisco, implanted electrodes into the brains of volunteers and decoded signals in cerebral speech centers to guide a computer-simulated version of their vocal tract – lips, jaw, tongue and larynx – to generate speech through a synthesizer.
Stroke, brain injuries, cancer and ailments such as cerebral palsy, amyotrophic lateral sclerosis, Parkinson’s disease and multiple sclerosis can take away a person’s ability to speak. The volunteers read aloud while activity in brain regions involved in language production was tracked. The researchers discerned the vocal tract movements needed to produce the speech, and created a “virtual vocal tract” for each participant that could be controlled by their brain activity and produce synthesized speech.
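The approach described above is a two-stage pipeline: brain activity is first decoded into estimated vocal-tract movements, and those movements are then converted into acoustic parameters for a synthesizer. A minimal sketch of that structure is below; the dimensions, variable names and simple linear mappings are illustrative assumptions only (the researchers' actual system used trained neural-network decoders).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the study):
# neural recording channels, articulator kinematic features
# (lips, jaw, tongue, larynx), and acoustic synthesizer parameters.
N_NEURAL, N_KINEMATIC, N_ACOUSTIC = 64, 12, 32

# Stand-in linear mappings; the real decoders were learned from data.
W_articulatory = rng.standard_normal((N_KINEMATIC, N_NEURAL)) * 0.1
W_acoustic = rng.standard_normal((N_ACOUSTIC, N_KINEMATIC)) * 0.1

def decode_kinematics(neural_frame):
    """Stage 1: brain activity -> estimated vocal-tract movements."""
    return W_articulatory @ neural_frame

def synthesize_acoustics(kinematic_frame):
    """Stage 2: movements -> acoustic parameters driving a synthesizer."""
    return W_acoustic @ kinematic_frame

# Push one simulated frame of neural activity through both stages.
neural = rng.standard_normal(N_NEURAL)
kinematics = decode_kinematics(neural)
acoustics = synthesize_acoustics(kinematics)
print(kinematics.shape, acoustics.shape)
```

The intermediate kinematic stage is the key design choice reported: rather than mapping brain signals directly to sound, the system first recovers the physical movements of the simulated vocal tract.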