There are a number of off-the-shelf driving simulators available, but none have the capabilities built into the Vanderbilt VR Adaptive Driving Intervention Architecture (VADIA). Not only is it specifically designed to teach adolescents with ASD the basic rules of the road, but VADIA also gathers information about the unique ways that they react to driving situations. This will allow the system to alter driving scenarios with varying degrees of difficulty to provide users with the training they need while keeping them engaged in the process. Ultimately, it may also help screen individuals whose deficits are too severe to drive safely.
"A number of 'high functioning' individuals with ASD do drive, and studies have shown that, when they are learning, they tend to make certain kinds of mistakes more often than other beginning drivers. So how you train them is very important," said Nilanjan Sarkar, the professor of mechanical engineering and director of the Robotics and Autonomous Systems Lab. He heads the project, which is described in detail in an article published online in the Transactions on Interactive Intelligent Systems.
The research setup consists of an automotive-style bucket seat, steering wheel, brake and gas pedals in front of a large, flat screen display on a height-adjustable table. The black box sitting directly below the screen is an eye-tracker that keeps track of where the driver is looking.
Participants don a headset containing electrodes that read the electrical activity of their brain (EEG) and they are hooked up to an array of physiological sensors that record the electrical activity of the driver's muscles (EMG), electrical activity of the heart (ECG), galvanic skin response, blood pressure, skin temperature and respiration. The elaborate monitoring allows the researchers to determine if the driver is engaged or bored by the simulation.
The simulator portrays a city with four different districts - downtown, residential, industrial and arboreal - that is ringed by a freeway. It is programmed with four basic types of driving scenarios: turning, merging, speed and laws. Speed scenarios are those that require the driver to change speed, such as entering or leaving school zones, passing street maintenance areas and responding to changes in the posted speed limit. Laws scenarios involve obeying traffic signs, such as stop and yield signs.
The software includes a number of factors that can be changed to increase or decrease the degree of difficulty involved. It can vary the speed and aggressiveness of the autonomous vehicles the driver encounters. It can set the weather to sunny, overcast or rainy. It can also alter the responsiveness of the brake pedal, gas pedal and steering wheel to mimic the effect of slippery or dry pavement.
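The adjustable factors described above can be thought of as a single bundle of difficulty settings. The sketch below illustrates that idea; the parameter names, value ranges and preset values are assumptions for illustration, not VADIA's actual configuration.

```python
from dataclasses import dataclass

@dataclass
class DifficultySettings:
    """Hypothetical bundle of the difficulty factors the simulator can vary."""
    traffic_speed: float         # typical speed of autonomous vehicles (mph)
    traffic_aggression: float    # 0.0 (passive) to 1.0 (aggressive)
    weather: str                 # "sunny", "overcast" or "rainy"
    pedal_responsiveness: float  # 1.0 = dry pavement; lower mimics slippery roads

# Two illustrative presets at opposite ends of the difficulty range.
EASY = DifficultySettings(traffic_speed=25.0, traffic_aggression=0.2,
                          weather="sunny", pedal_responsiveness=1.0)
HARD = DifficultySettings(traffic_speed=45.0, traffic_aggression=0.8,
                          weather="rainy", pedal_responsiveness=0.6)
```

Grouping the settings this way would let a trial be described by one object rather than by scattered flags.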
The system is designed to give drivers immediate feedback when they make mistakes. In its basic, performance mode, the simulator reacts when the driver makes a performance error such as exceeding the speed limit or failing to stop at a red light. The simulation stops, and a message explaining the mistake and the corrective steps the driver can take to avoid it is displayed on the screen and repeated audibly.
In its second mode, the simulator not only reacts to performance errors, but also reacts when the driver fails to pay attention to important elements in the scene, such as stop signs, other vehicles and pedestrians. These objects are marked in the computer, and if the eye tracker determines that the driver has not looked at such an object within a period the researchers have determined to be adequate, the simulation stops and issues an explanatory error message.
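The gaze-contingency logic just described can be sketched in a few lines: each marked object carries a deadline by which the driver should have looked at it, and a miss triggers the feedback. The function name, data layout and timing values here are assumptions, not the system's actual implementation.

```python
def check_gaze(marked_objects, gaze_history, now):
    """Return the first marked object the driver has failed to look at
    by its deadline, or None if attention is adequate so far.

    marked_objects: {object_name: deadline_in_seconds}
    gaze_history:   [(timestamp, object_name)] from the eye tracker
    """
    for obj, deadline in marked_objects.items():
        looked = any(target == obj for t, target in gaze_history if t <= now)
        if now >= deadline and not looked:
            return obj  # the simulation would pause and explain this miss
    return None

# Driver looked at the stop sign in time but never at the pedestrian:
history = [(0.5, "stop_sign")]
objects = {"stop_sign": 2.0, "pedestrian": 1.5}
print(check_gaze(objects, history, now=2.0))  # -> pedestrian
```

In practice the eye tracker streams fixations continuously, so a check like this would run every frame rather than once per trial.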
"One of our preliminary results is that the teenagers really like it," said Sarkar.
"This would definitely be a good teaching aide for driving, without a doubt," confirmed 16-year-old Brandon Roberson, an adolescent with Asperger syndrome who has been participating in the studies. He has his learner's permit and would like to drive by himself. "Going out and doing what I want to do is something I have never been able to do because I have not been able to drive."
A preliminary study with 20 adolescents aged 13 to 18 diagnosed with ASD has confirmed Roberson's assessment. The participants were divided into two groups. Half were tested in the performance mode and half were tested in the gaze-contingent mode. After six 45-minute sessions, both groups showed improvements in performance. By the end of the test, they were completing individual driving trials more rapidly and were making fewer errors.
"Of course, we will have to show that these improvements will carry over into real life, but we have good reasons to think that it will," said Sarkar.
In a second study, described in a paper submitted to Research in Autism Spectrum Disorders, the researchers have begun using VADIA to identify critical differences in how teenagers with ASD react to driving situations compared to typically developing teenagers.
Participants included 14 age- and gender-matched adolescents: seven typically developing teenagers and seven diagnosed with ASD. Participants were given a range of tasks designed to challenge them on specific driving skills and to assess where they were looking while performing them. These were divided into three different levels of difficulty using factors such as the number of vehicles on the road, the aggression and speed of the simulated vehicles, and the weather conditions.
"We found that the participants with ASD made more driving errors than the typical teenagers," said research associate Joshua Wade, who conducted the studies. "They had particular problems with tasks related to turning, most of which involved a traffic light." When he investigated further, Wade determined that the drivers with ASD tended to make mistakes when they spent abnormally long times looking at the light.
The researchers also found significant differences in broad patterns of gaze between the two groups. Participants with ASD tended to look slightly higher and slightly more to the right side than the typical participants: a finding consistent with the results of a previous study. They also found that the typical participants spent more time looking at the road just in front of the vehicle and the top of the screen.
"These differences in gaze patterns are similar to the differences other studies have found between novice and experienced drivers. We also found that the drivers with ASD tended to fix their gaze on different objects, like traffic lights, for longer periods of time than the controls. This is also consistent with differences that have been seen between novice and experienced drivers," Wade said.
The engineers have also been testing the battery of biosensors that they place on their subjects. The tests that they have run indicate that they can gauge the degree of engagement that the drivers experience with an accuracy of about 80 percent.
That is accurate enough for the researchers to begin making the training sessions interactive.
They have developed 144 different "trials" with varying degrees of difficulty. They are programming the system so, if it senses that a participant has a high level of engagement, then it will increase the difficulty of the next trial to keep him or her from getting bored.
However, if the participant's level of engagement begins to fall, then it will select an easier trial in order to keep him or her from getting too frustrated. In this fashion, they hope to optimize the experience for each individual. Their preliminary tests indicate that this approach can improve the rate at which the participants learn, but additional testing is required to confirm this conclusion.
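The adaptation rule described in the last two paragraphs amounts to a simple feedback loop over the pool of 144 trials: step difficulty up when engagement is high, step it down when it falls. The sketch below illustrates that loop; the thresholds, the step size and the assumption that trials are ordered 1 to 144 by difficulty are all illustrative, not the researchers' actual algorithm.

```python
def next_trial(current_difficulty, engagement, max_difficulty=144):
    """Pick the next trial's difficulty from an estimated engagement level.

    engagement is a 0-1 score (e.g. from the biosensor classifier);
    HIGH/LOW are assumed thresholds for illustration.
    """
    HIGH, LOW = 0.7, 0.4
    if engagement >= HIGH:
        # Highly engaged: raise difficulty to prevent boredom.
        return min(current_difficulty + 1, max_difficulty)
    if engagement <= LOW:
        # Disengaged or frustrated: ease off.
        return max(current_difficulty - 1, 1)
    return current_difficulty  # comfortable zone: stay put

print(next_trial(10, 0.9))  # -> 11 (engaged: step up)
print(next_trial(10, 0.3))  # -> 9  (disengaged: step down)
```

Clamping at the ends of the range keeps the selection valid even for participants who stay engaged through the hardest trials.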
"If this approach works, it could help large numbers of young people with ASD become independent, productive adults while significantly reducing the nation's healthcare costs," said Sarkar.
The research was supported in part by National Science Foundation grant 967170 and National Institutes of Health grant 1R01MH091102-01A1.