Study Examines Whether Virtual Twin Can Increase Trust in Smart Cars
Human error is estimated to cause more than 90 percent of traffic accidents, a toll that might be drastically reduced by self-driving cars whose smart systems control most aspects of driving.
Although the potential benefits of self-driving cars have been widely touted, their success on the roadways of the near future depends largely on whether drivers are willing to trust these smart systems enough to hand over the wheel.
A new study published in Human Factors: The Journal of the Human Factors and Ergonomics Society evaluated whether a virtual driver programmed to resemble the human driver could increase trust in and acceptance of smart cars.
“We think that the most prominent ‘bump’ in the road to successful implementation of smart cars is not the technology itself but, rather, the acceptance of that technology by the public,” notes Frank Verberne, a behavioral scientist at Eindhoven University of Technology. “Representing such complex automation technology with something that humans are familiar with—namely, a human behind the wheel—may cause it to become less of a ‘black box.’”
In “Trusting a Virtual Driver That Looks, Acts, and Thinks Like You,” Verberne and fellow human factors researchers Jaap Ham and Cees Midden introduced more than 100 participants to a virtual driver named Bob to assess their level of trust in him. The virtual driver’s face, head movements, and driving goals (e.g., comfort, speed) were either similar or dissimilar to those of the participant.
Participants planned driving routes with Bob’s help, allowed him to take the wheel in a simulator, and then indicated whether they trusted him. Results indicated that drivers who perceived Bob as looking, acting, and thinking as they did were much more likely to trust his abilities behind the wheel and expressed less concern about their physical safety.
Source: Human Factors and Ergonomics Society