In the trials, sighted people with no training or instruction were asked to match images to sounds, while the blind subjects were asked to feel textures and match them to sounds. Tactile textures can be related to visual textures (patterns) like a topographic map: bright regions of an image translate to high tactile height relative to the page, while dark regions are flatter. Both groups showed an intuitive ability to identify textures and images from their associated sounds. Surprisingly, the untrained (also called "naive") group's performance was significantly above chance, and not very different from that of the trained group.
The intuitively identified textures used in the experiments exploited the crossmodal mappings already within the vOICe encoding algorithm. "When we reverse the crossmodal mappings in the vOICe auditory-to-visual translation, the naive performance significantly decreased, showing that the mappings are important to the intuitive interpretation of the sound," explains Stiles.
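The vOICe device's published encoding scans an image left to right, mapping vertical position to pitch (higher rows sound higher) and pixel brightness to loudness. The exact algorithm and parameters used in the study are not given here, so the following is only a minimal sketch of that style of encoding; the function name, frequency range, and timing values are illustrative assumptions:

```python
import numpy as np

def voice_like_encode(image, duration=1.0, sample_rate=8000,
                      f_min=500.0, f_max=5000.0):
    """Sketch of a vOICe-style image-to-sound encoding (illustrative only).

    The image is scanned left to right, one column per time slice.
    Each row drives a sine tone: higher rows map to higher frequencies,
    and pixel brightness (0..1) sets that tone's loudness.
    """
    image = np.asarray(image, dtype=float)
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * sample_rate / n_cols)
    # Top row -> highest frequency, bottom row -> lowest.
    freqs = np.linspace(f_max, f_min, n_rows)
    audio = []
    sample_offset = 0
    for col in range(n_cols):
        t = (sample_offset + np.arange(samples_per_col)) / sample_rate
        # Sum one sine per row, weighted by that pixel's brightness.
        tones = sum(image[row, col] * np.sin(2 * np.pi * freqs[row] * t)
                    for row in range(n_rows))
        audio.append(tones)
        sample_offset += samples_per_col
    audio = np.concatenate(audio)
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio
```

Reversing the row-to-frequency mapping (swapping `f_min` and `f_max`) is the kind of manipulation Stiles describes: the sounds carry the same information, but the intuitive crossmodal correspondence is broken.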
"We found that using this device to look at textures - patterns of light and dark - illustrated 'intuitive' neural connections between textures and sounds, implying that there is some preexisting crossmodality," says Shimojo. One common example of crossmodality is a condition called synesthesia, in which the activation of one sense leads to a different involuntary sensory experience, such as seeing a certain color when hearing a specific sound. "Now, we have discovered that crossmodal connections, preexisting in everyone, can be used to make sensory substitution intuitive with no instruction or training."
The researchers do not exactly know yet what each sensory region of the brain is doing when processing these various signals, but they have a rough idea. "Auditory regions are activated upon hearing sound, as are the visual regions, which we think will process the sound for its spatial qualities and elements. The visual part of the brain, when processing images, maps objects to spatial location, fitting them together like a puzzle piece," Stiles says. To learn more about how the crossmodal processing happens in the brain, the group is currently using functional magnetic resonance imaging (fMRI) data to analyze the crossmodal neural network.
These preexisting neural connections provide an important starting point for training visually impaired people to use devices that will help them see. A sighted person simply has to open their eyes, and the brain automatically processes images and information for seamless interaction with the environment. Current devices for the blind and visually impaired are not so automatic or intuitive to use, generally requiring a user's full concentration and attention to interpret information about the environment. The Shimojo lab's new finding on the role of multimodal processing and crossmodal mappings starts to address this issue.
Beyond its practical implications, Shimojo says, the research raises an important philosophical question: What is seeing?
"It seems like such an obvious question, but it gets complicated," says Shimojo. "Is seeing what happens when you open your eyes? No, because opening your eyes is not enough if the retina [the light-sensitive layer of tissue in the eye] is damaged. Is it when your visual cortex is activated? But our research has shown that the visual cortex can be activated by sound, indicating that we don't really need our eyes to see. It's very profound - we're trying to give blind people a visual experience through other senses."
The paper is titled "Auditory Sensory Substitution Is Intuitive and Automatic with Texture Stimuli" and was funded by grants from the National Science Foundation, the Della Martin Fund for Discoveries in Mental Illness, and the Japan Science and Technology Agency's Core Research for Evolutionary Science and Technology (CREST) program.