New AI decoder can translate brainwaves into text - study

This is an important step on the way to developing interfaces that can decode continuous language through non-invasive recordings of thoughts.

Reading brain activity may soon be possible through AI decoders (illustrative). (photo credit: PIXABAY)

Scientists have developed a system that can read a person's mind and translate the brain activity into a stream of text, relying in part on a transformer model similar to the ones that power OpenAI's ChatGPT and Google's Bard.

This is an important step toward developing brain–computer interfaces that can decode continuous language through non-invasive recordings of thoughts.

The results were published in the peer-reviewed journal Nature Neuroscience, in a study led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin.

A non-invasive method

Tang and Huth's semantic decoder isn't implanted in the brain; instead, it measures brain activity with functional MRI (fMRI) scans. For the study, participants listened to podcasts while the AI attempted to transcribe their thoughts into text.
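According to the published study, the decoder pairs a language model with a per-subject "encoding model": the language model proposes candidate word sequences, the encoding model predicts the fMRI activity each candidate would evoke, and a beam search keeps the candidates whose predicted activity best matches the measured scans. The short Python sketch below illustrates that general idea only; propose_continuations, predict_fmri, and all of the sizes here are hypothetical stand-ins, not the researchers' actual models.

import numpy as np

# Hypothetical stand-in: in the study, a GPT-style language model
# proposed candidate continuations of the transcript so far.
def propose_continuations(text):
    return [f"{text} {word}".strip() for word in ("scream", "cry", "run")]

# Hypothetical stand-in: in the study, a per-subject encoding model
# predicted the fMRI response a candidate word sequence would evoke.
def predict_fmri(text):
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=128)  # toy "voxel" vector

def decode(measured, n_steps=3, beam_width=2):
    """Beam search: keep the word sequences whose predicted brain
    responses correlate best with the measured fMRI signal."""
    beam = [""]
    for _ in range(n_steps):
        candidates = [c for text in beam for c in propose_continuations(text)]
        # Score candidates by correlating predicted and measured activity.
        scores = [np.corrcoef(predict_fmri(c), measured)[0, 1] for c in candidates]
        ranked = sorted(zip(scores, candidates), reverse=True)
        beam = [text for _, text in ranked[:beam_width]]
    return beam[0]

measured = np.random.default_rng(0).normal(size=128)
print(decode(measured))  # prints the best-scoring toy word sequence

Ranking many candidate continuations against the measured signal, rather than reading words out directly, is why the decoder recovers the overall "gist" of a sentence rather than an exact transcript.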

“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” said Alex Huth. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”

Illustrative image of artificial intelligence. (credit: PIXABAY)

These kinds of systems could be especially helpful to people who are physically unable to speak, such as those who have had a stroke, enabling them to communicate more effectively.

According to Tang and Huth, the study's findings demonstrate the viability of non-invasive language brain–computer interfaces. They say the semantic decoder still needs more work and can only capture the basic “gist” of what someone is thinking: it produced text that closely matched a subject's thoughts only about half the time.

The decoder in action

The study provides some examples of the decoder in action. In one case, a test subject heard, and consequently thought, the sentence "... I didn't know whether to scream cry or run away instead I said leave me alone I don't need your help Adam disappeared."

The decoder reproduced this passage as "... started to scream and cry and then she just said I told you to leave me alone you can't hurt me anymore I'm sorry and then he stormed off."

A work in progress

The researchers also said they had given serious thought to mental privacy. “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” said Jerry Tang. “We want to make sure people only use these types of technologies when they want to and that it helps them.”

For this reason, they also tested whether successful decoding requires the cooperation of the person being decoded, and found that it does: without the subject's cooperation, the decoder does not work.

Huth and Tang believe their system could in the future be adapted to work with portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).

“fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” Huth concluded. “So, our exact kind of approach should translate to fNIRS.”