Exciting AI Study May Give People Their Voice Back


Three research teams recently placed electrodes surgically on patients’ brains so that AI could turn their brain signals into computer-generated speech.

Using artificial intelligence, scientists constructed words and sentences from brain signals. Some of those sentences were gibberish, but encouragingly, some made sense. This had never been achieved before.

Dr. Stephen Hawking was unable to speak. He communicated by means of a trigger on his cheek, which he activated by tensing it. This limited his ability to convey tone or emotion, and his gift for sarcasm would certainly have been hindered.

A brain-computer interface could change all of that. AI would allow speech to be recreated directly from brain signals, creating a more human-like voice. This would be a significant breakthrough for people like the late Dr. Hawking, or for anyone who has lost their voice through stroke or illness.

So How Did They Do It?

Scientists monitored certain parts of the brain as people read aloud, silently mouthed speech, or listened to recordings. Using artificial intelligence, they were able to reconstruct speech that was actually understandable.

Nima Mesgarani, a computer scientist at Columbia University, commented: “We are trying to work out the pattern of neurons that turn on and off at different time points and infer the speech sound.”

Every person is different, so computer models must be tailored to each individual. These models need very precise data to be trained properly. Unfortunately, that data can only be gathered by opening a person’s skull. I can’t imagine a long line of volunteers.

How Is The Data Collected?

Researchers have only a very small window to work in (between 20 and 30 minutes). The recordings are obviously very invasive, so only patients already undergoing brain surgery can take part.

Patients undergoing brain tumour removal are ideal candidates, because surgeons must already monitor electrical readouts from the exposed brain in order to avoid key speech and motor areas.

Epileptic patients undergoing surgery are also good candidates, because electrodes are implanted during surgery to locate the origin of their seizures. Both scenarios leave researchers a small window in which to collect data.

Now for the technical bit…

The valuable data is fed through neural networks, which process complex patterns by passing information through layers of computational “nodes.” The networks learn by adjusting the connections between nodes. In the experiments, networks were exposed to recordings of speech that a person produced or heard, together with data on the simultaneous brain activity.
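To make that concrete, here is a minimal sketch in Python (using PyTorch) of the kind of mapping such a network learns. Everything in it is illustrative: the data shapes, the tiny two-layer architecture, and the random stand-in arrays are assumptions made for this example, not the setups the three teams actually used.

```python
import torch
import torch.nn as nn

# Illustrative sizes: 1,000 time windows, 64 electrode channels,
# 32 spectrogram bins per window. The real studies had their own
# electrode counts and audio representations.
n_windows, n_channels, n_bins = 1000, 64, 32

# Stand-in data: brain-activity features alongside the spectrogram
# of the speech the person produced or heard at the same moments.
brain_activity = torch.randn(n_windows, n_channels)
speech_spectrogram = torch.randn(n_windows, n_bins)

# A tiny network: layers of "nodes" whose connections (weights)
# are adjusted as it learns.
model = nn.Sequential(
    nn.Linear(n_channels, 128),
    nn.ReLU(),
    nn.Linear(128, n_bins),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    predicted = model(brain_activity)              # brain signals in...
    loss = loss_fn(predicted, speech_spectrogram)  # ...compared to the real speech
    loss.backward()                                # work out how to adjust connections
    optimizer.step()                               # adjust them
```

Once trained, feeding fresh brain recordings through such a model yields a predicted spectrogram, which can then be turned back into audible speech.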

In total, three amazing research teams and their patients were behind this breakthrough.

Let’s find out about them…

Study 1

The first team, led by Nima Mesgarani, extracted data from five people with epilepsy. Researchers analysed recordings from the auditory cortex, the part of the brain that is active during both speaking and listening. Patients simply listened to recordings of people naming digits, and the results were then analysed.

Excitingly, the computer reconstructed the spoken numbers. Using neural data alone, the AI was able to speak numbers that a group of listeners identified with 75% accuracy.
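The article does not describe how any team turned its reconstructions into sound, so the sketch below shows just one plausible final step: converting a predicted spectrogram into a playable waveform with the Griffin-Lim algorithm from the librosa library. The spectrogram shape and the random stand-in data are assumptions for the example, not details from the study.

```python
import numpy as np
import librosa
import soundfile as sf

# Assume `predicted_spectrogram` is the model's output for one trial:
# a magnitude spectrogram of shape (frequency_bins, time_frames).
# Random values stand in for a real prediction here.
predicted_spectrogram = np.abs(np.random.randn(1025, 200))

# Griffin-Lim iteratively estimates the phase information that a
# magnitude spectrogram lacks, producing an audible waveform.
waveform = librosa.griffinlim(predicted_spectrogram, n_iter=32)

# Save the result so listeners can try to identify the spoken digit.
sf.write("reconstructed_digit.wav", waveform, samplerate=22050)
```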

Study 2

Another team, led by computer scientist Tanja Schultz, extracted data from six patients undergoing brain tumour surgery. Electrodes were placed on the brain’s speech-planning and motor areas, and each patient then spoke simple words aloud.

A trained network mapped the electrode readouts to the audio recordings, and from there the AI reconstructed words. However, this experiment was not as successful as the first, with only 40% of the words understandable.

Study 3

A research team out of San Francisco astonishingly reconstructed entire sentences from brain activity. Three epilepsy patients read simple sentences aloud, and scientists then extracted data from the speech-planning and motor areas of the brain to analyse the results.

An online test was created to determine the success of the experiment: 166 volunteers listened to the reconstructed sentences and, out of ten selections, chose the one they heard. The results were promising, with some of the sentences correctly identified 80% of the time.
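Scoring a test like this is simple arithmetic, and with ten options per trial, pure guessing would score only 10%, which makes 80% identification genuinely impressive. Below is a minimal sketch of the scoring, with hypothetical trial counts and simulated responses standing in for the real data.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical setup: each trial records which of the ten options a
# volunteer picked and which option was actually played.
n_trials = 1660                                   # e.g. 166 volunteers x 10 trials
correct_option = rng.integers(0, 10, n_trials)
chosen_option = np.where(
    rng.random(n_trials) < 0.8,                   # simulate mostly-correct listeners
    correct_option,
    rng.integers(0, 10, n_trials),                # otherwise a guess
)

# Identification accuracy: fraction of trials where the pick matched.
accuracy = np.mean(chosen_option == correct_option)
print(f"Identification accuracy: {accuracy:.0%} (chance level: 10%)")
```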

However, “What we’re really waiting for is how these methods are going to do when the patients can’t speak,” says Stephanie Riès, a neuroscientist at San Diego State University in California.

This is just one of many exciting applications of AI in medicine. While this research is only beginning, it has again pushed the boundaries of what was thought possible.
