From Brain Signals To Speech!

May 6, 2019 By Alexis J, Writer

Have you ever thought about what it would be like to live without the ability to talk?

For people suffering from paralysis, throat cancer, amyotrophic lateral sclerosis (ALS), and diseases like Parkinson’s, this is their reality. 

There are devices today that can translate eye or facial movements into text, spelling out one letter at a time. Despite these advances, such devices still top out at about ten words per minute, an extremely slow pace compared to the 100-150 words per minute of normal conversation.

In a recent study published in the journal Nature, scientists described a method of voice simulation based on brain signals.

Capturing Brain Signals

Let's first look at the role our brain plays when we speak. 

The speech center of our brain is located in the left hemisphere, in a region known as Broca's area. An earlier study had shown that even before we speak, Broca's area is buzzing with electrical activity. The parts of the brain that control the muscles in our lips, tongue, jaw, and larynx then step into action, forming the words we speak.

For the study, researchers implanted electrodes directly onto the brains of five volunteers who were undergoing neurosurgery for epilepsy (a brain disorder that causes seizures). Since the patients had no trouble speaking, they were asked to read hundreds of sentences aloud while data was collected on the voltage fluctuations in their brains.

Researchers were able to map the signals in the patients' brains to the sounds made by their vocal tracts. They trained computers on this information, and soon the algorithms could analyze the electrical patterns directly and convert them to speech. To verify the results, 1,755 people were asked to transcribe the synthesized sentences, and about 43% transcribed them correctly! The technology can even decode new sentences that the algorithm was not trained on.
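In the study, this mapping was learned in two steps: brain activity was first translated into estimated vocal tract movements, and those movements were then translated into sound features. The sketch below is a toy, hypothetical version of that idea, not the study's actual system. It uses random placeholder numbers instead of real recordings and simple ridge regression instead of the study's neural networks, just to show how two learned mappings can be chained together.

```python
# Toy sketch of a two-stage decoding pipeline (placeholder data, not real recordings).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Placeholder data: 1,000 time steps of recordings from 64 electrodes,
# 30 vocal-tract movement features, and 32 acoustic features per step.
brain_signals = rng.normal(size=(1000, 64))   # stand-in for electrode voltages
vocal_tract = rng.normal(size=(1000, 30))     # stand-in for lip/tongue/jaw/larynx movements
acoustics = rng.normal(size=(1000, 32))       # stand-in for speech sound features

# Stage 1: learn a mapping from brain activity to vocal-tract movements.
stage1 = Ridge().fit(brain_signals, vocal_tract)

# Stage 2: learn a mapping from vocal-tract movements to sound features.
stage2 = Ridge().fit(vocal_tract, acoustics)

# Decoding new brain activity: chain the two stages together.
new_signals = rng.normal(size=(10, 64))
predicted_movements = stage1.predict(new_signals)
predicted_acoustics = stage2.predict(predicted_movements)
print(predicted_acoustics.shape)  # (10, 32): features a synthesizer could turn into audio
```

In the real system, the final sound features are passed to a synthesizer that produces audible speech, which is what the volunteer listeners were asked to transcribe.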

What Does this Mean for the Future?

Voice-assisted devices like this could change the lives of those living without speech. However, more research is needed on how the algorithm would work when it cannot be trained on a person's own speech. In other words, could someone who is unable to move their facial muscles train the algorithm to create a virtual voice?

There are other challenges as well, since the electrodes must be surgically implanted on a patient's brain. Researchers hope that this method of voice simulation will one day become as widespread as cochlear implants are among people without the ability to hear. They also predict that it could open up a new world of opportunities for the speech-impaired, allowing them to communicate at the rate of a fluent speaker.

Sources: Guardian, National Geographic, nih.gov, brain.ieee.org, nature.com