Noninvasive Brain Decoder Can Transcribe Stories in the Mind
Topic: The Human Brain
Author: University of Texas at Austin
Published: 2023/05/01 - Peer-Reviewed: Yes
Contents: Summary - Introduction - Main Item - Related Topics
Synopsis: A new artificial intelligence (AI) system can translate the brain activity of a person listening to a story, or silently imagining telling a story, into text. Unlike other language decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive. The system might help people who are mentally conscious yet physically unable to speak, such as those debilitated by strokes, to communicate intelligibly again.
Introduction
"Semantic Reconstruction of Continuous Language From Non-Invasive Brain Recordings" - Nature Neuroscience.
A new artificial intelligence system called a semantic decoder can translate a person's brain activity, recorded while they listen to a story or silently imagine telling a story, into a continuous stream of text. The system, developed by researchers at The University of Texas at Austin, might help people who are mentally conscious yet physically unable to speak, such as those debilitated by strokes, to communicate intelligibly again.
Main Item
The study, published in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. The work relies in part on a transformer model, similar to the ones that power OpenAI's ChatGPT and Google's Bard.
Unlike other language decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive. Participants also do not need to use only words from a prescribed list. Brain activity is measured with an fMRI scanner after extensive training of the decoder, during which the individual listens to hours of podcasts in the scanner. Later, provided that the participant is willing to have their thoughts decoded, the machine can generate corresponding text from brain activity alone while they listen to a new story or imagine telling one.
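At a high level, the paper describes a guess-and-check approach: an encoding model, fitted to the participant's fMRI recordings during training, predicts how their brain would respond to candidate word sequences, a generative language model proposes those candidates, and a beam search keeps the sequences whose predicted responses best match what was actually recorded. The Python sketch below is only a minimal illustration of that loop; the function names and toy stand-in models (propose_continuations, predict_bold, match, decode) are hypothetical and are not the authors' code.

```python
# Minimal sketch of language-model-guided decoding against an fMRI recording.
# All components here are toy stand-ins, not the study's actual models.
import numpy as np

rng = np.random.default_rng(0)
N_VOXELS = 50        # voxels in the (toy) brain recording
BEAM_WIDTH = 3       # candidate sequences kept at each step
VOCAB = ["the", "she", "said", "leave", "me", "alone", "started", "to", "cry"]

def propose_continuations(sequence):
    """Stand-in for a language model proposing likely next words."""
    return [sequence + [word] for word in VOCAB]

def predict_bold(sequence):
    """Stand-in encoding model: maps a word sequence to a predicted brain
    response. The real encoding model is fit to hours of fMRI data recorded
    while the participant listens to podcasts in the scanner."""
    seed = abs(hash(" ".join(sequence))) % (2 ** 32)
    return np.random.default_rng(seed).normal(size=N_VOXELS)

def match(sequence, observed_bold):
    """Score a candidate by how well its predicted response matches the
    observed recording (cosine similarity here; the published method scores
    candidates under the encoding model's likelihood)."""
    pred = predict_bold(sequence)
    return float(pred @ observed_bold /
                 (np.linalg.norm(pred) * np.linalg.norm(observed_bold)))

def decode(observed_bold, n_words=6):
    """Beam search: grow candidate sentences word by word, keeping those
    whose predicted brain responses best explain the recording."""
    beams = [[]]
    for _ in range(n_words):
        candidates = [c for b in beams for c in propose_continuations(b)]
        candidates.sort(key=lambda c: match(c, observed_bold), reverse=True)
        beams = candidates[:BEAM_WIDTH]
    return " ".join(beams[0])

observed = rng.normal(size=N_VOXELS)   # pretend fMRI measurement
print(decode(observed))
```

Because the search is guided by a language model's sense of plausible English, the output reads fluently even when the brain data only pins down the gist, which is consistent with the paraphrase-like results described below.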
"For a noninvasive method, this is a real leap forward compared to what's been done before, which is typically single words or short sentences," Huth said. "We're getting the model to decode continuous language for extended periods of time with complicated ideas."
The result is not a word-for-word transcript. Instead, researchers designed it to capture the gist of what is being said or thought, albeit imperfectly. About half the time, when the decoder has been trained to monitor a participant's brain activity, the machine produces text that closely (and sometimes precisely) matches the intended meanings of the original words.
For example, in experiments, a participant listening to a speaker say, "I don't have my driver's license yet" had their thoughts translated as, "She has not even started to learn to drive yet." Listening to the words, "I didn't know whether to scream, cry or run away. Instead, I said, 'Leave me alone!'" was decoded as, "Started to scream and cry, and then she just said, 'I told you to leave me alone.'"
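Because the decoder is judged on meaning rather than exact wording, one way to see the distinction is to compare a decoded sentence with the original in a sentence-embedding space, where paraphrases land close together even when they share few words. The snippet below is only a rough illustration using the example pair quoted above and the off-the-shelf sentence-transformers library (the all-MiniLM-L6-v2 model is an assumption of this sketch); it is not the evaluation used in the study.

```python
# Rough illustration (not the study's evaluation): two sentences that share
# almost no words can still be close in a sentence-embedding space.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

original = "I don't have my driver's license yet"
decoded = "She has not even started to learn to drive yet"

embeddings = model.encode([original, decoded])
gist_similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

# A word-for-word comparison would score this pair near zero, while the
# embedding similarity reflects the shared meaning (neither can drive yet).
print(f"semantic similarity: {gist_similarity:.2f}")
```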
Beginning with an earlier version of the paper that appeared as a preprint online, the researchers addressed questions about potential misuse of the technology. The paper describes how decoding worked only with cooperative participants who had participated willingly in training the decoder. Results for individuals on whom the decoder had not been trained were unintelligible, and if participants on whom the decoder had been trained later put up resistance - for example, by thinking other thoughts - results were similarly unusable.
"We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that," Tang said. "We want to make sure people only use these types of technologies when they want to and that it helps them."
In addition to having participants listen or think about stories, the researchers asked subjects to watch four short, silent videos while in the scanner. The semantic decoder was able to use their brain activity to accurately describe certain events from the videos.
The system currently is not practical for use outside of the laboratory because of its reliance on the time needed on an fMRI machine. But the researchers think this work could transfer to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
"fNIRS measures where there's more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring," Huth said. "So, our exact kind of approach should translate to fNIRS," although, he noted, the resolution with fNIRS would be lower.
Frequently Asked Questions
Could this technology be used on someone without them knowing, say by an authoritarian regime interrogating political prisoners or an employer spying on employees?
No. The system has to be extensively trained on a willing subject in a facility with large, expensive equipment. "A person needs to spend up to 15 hours lying in an MRI scanner, being perfectly still, and paying good attention to stories that they're listening to before this really works well on them," said Huth.
Could training be skipped altogether?
No. The researchers tested the system on people whom it hadn't been trained on and found that the results were unintelligible.
Are there ways someone can defend against having their thoughts decoded?
Yes. The researchers tested whether a person who had previously participated in training could actively resist subsequent attempts at brain decoding. Tactics such as thinking of animals or quietly imagining telling their own story let participants easily and completely prevent the system from recovering the speech they were hearing.
What if technology and related research evolved to one day overcome these obstacles or defenses?
"I think right now, while the technology is in such an early state, it's important to be proactive by enacting policies that protect people and their privacy," Tang said. "Regulating what these devices can be used for is also very important."
About the Study:
This work was supported by the Whitehall Foundation, the Alfred P. Sloan Foundation and the Burroughs Wellcome Fund.
The study's other co-authors are Amanda LeBel, a former research assistant in the Huth lab, and Shailee Jain, a computer science graduate student at UT Austin.
Alexander Huth and Jerry Tang have filed a PCT patent application related to this work.
Attribution/Source(s):
This peer reviewed publication was selected for publishing by the editors of Disabled World due to its significant relevance to the disability community. Originally authored by University of Texas at Austin, and published on 2023/05/01, the content may have been edited for style, clarity, or brevity. For further details or clarifications, University of Texas at Austin can be contacted at utexas.edu. NOTE: Disabled World does not provide any warranties or endorsements related to this article.
Explore Related Topics
1 - New Psychology Research Reveals How Our Brains Segment the Day into Chapters - What determines how the brain divides the day into individual events that we can understand and remember separately.
2 - Unraveling the Neuroscience: How Aging Impacts Memory Organization - Researchers uncover how memory maintenance and deletion shape cognitive decline in aging.
3 - Harnessing Mindfulness and Meditation: Using Inner Focus for Mental Well-being - Tuning into interoception, how someone senses their body’s internal state, is an important component of mindfulness training that could aid in managing mood disorders such as depression.
4 - The Cognitive Neuroscience of Dreaming: Untangling Dreams and Our Waking Lives - Neuroscientists are finding innovative new ways to study dreams and how they influence our cognition.
5 - Power of Illusion Helps to Learn New Movements - Visual aids creating illusion of movement can improve motor performance and early stages of motor learning.
Page Information, Citing and Disclaimer
Disabled World is a comprehensive online resource that provides information and news related to disabilities, assistive technologies, and accessibility issues. Founded in 2004, our website covers a wide range of topics, including disability rights, healthcare, education, employment, and independent living, with the goal of supporting the disability community and their families.
Cite This Page (APA): University of Texas at Austin. (2023, May 1). Noninvasive Brain Decoder Can Transcribe Stories in the Mind. Disabled World. Retrieved October 15, 2024 from www.disabled-world.com/health/neurology/brain/brain-decoder.php
Permalink: Noninvasive Brain Decoder Can Transcribe Stories in the Mind (https://www.disabled-world.com/health/neurology/brain/brain-decoder.php): New artificial intelligence (AI) system can translate brain activity while listening to a story, or silently imagining telling a story, into text.
Disabled World provides general information only. Materials presented are never meant to substitute for qualified medical care. Any 3rd party offering or advertising does not constitute an endorsement.