Human trust model of collision warning through pupillary and electromyography responses
This pioneering work modeled human trust in automation through two physiological measurements of responses to collision avoidance warnings: pupillary and electromyography (EMG) signals, both long regarded as meaningful physiological responses to danger. As advanced driver assistance systems (ADAS) become popular, distraction-related crashes caused by frequent false warnings are likely to erode drivers' trust in ADAS. Trust is one of the most important driver cognitive characteristics determining the willingness to rely on and use an ADAS, so it is important to investigate how drivers' trust changes in response to collision warnings. Previous research was limited to a single physiological response or to survey responses, and focused on measurements of simple physical reactions rather than on human trust in automation. Accordingly, drivers' trust in a collision avoidance warning system under complex driving circumstances was not well studied. This study extended and enhanced past work to multiple physiological responses in order to explore driver trust in collision warnings and the role trust plays in the avoidance of potential hazards and vulnerability. The purpose of this research was to assess drivers' dynamic learned trust in a collision avoidance warning system through physiological responses. In this multi-phase study, a Tobii eye-tracking device and Myo armbands were used to collect pupillary and EMG responses. In the phase 1 study, aftermarket ADAS devices were used to collect drivers' natural responses to collision warnings during open-road, real-world driving. A significant pattern change in the pupil and EMG data existed only when drivers responded to a warning.
The findings of phase 1 demonstrated that pupillary and electromyography responses could be used together as effective indicators of when drivers received valuable information and chose to make a physical response to the warning. The study noted that drivers often responded only to warnings in which they identified a potential hazard in situations characterized by uncertainty and vulnerability. Because the laboratory offers an opportunity for simulated danger, whereas studies in natural environments occur under conditions that are largely safe, the phase 2 study was designed as a laboratory-based experiment with controlled environmental factors to reveal the underlying pupillary and electromyography responses to potential hazards. For model development, time-series features of the pupil dilation and EMG data were extracted as independent variables, while the frustration-based trust level was set as the dependent variable. Fuzzy linear regression models were built as quantitative measures of drivers' trust in the collision warning using the pupillary and EMG data. Classification rates of different fuzzy linear regression models were compared to a traditional linear regression model in both development and validation scenarios. Results indicate that a possibilistic linear regression (PLR) model using the waveform-length time-series feature of the pupil and EMG data as inputs more effectively predicts drivers' trust in the collision warning system. This new understanding of human dynamic learned trust in collision warning systems may provide benefits by improving driving safety and the usability of ADAS. Results from this study could contribute to future software algorithm development in next-generation smart vehicles that can identify not only potential surrounding hazards but also drivers' trust status, in order to provide a safer driving experience.
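The modeling pipeline described above can be sketched in code. This is a minimal illustration, not the dissertation's implementation: it assumes the waveform-length feature is the standard sum of absolute successive sample differences over a sliding window, and that the fuzzy regression follows the common Tanaka-style possibilistic linear regression, fitting symmetric triangular fuzzy coefficients by linear programming. Window sizes, the inclusion level `h`, and all variable names are illustrative assumptions.

```python
# Sketch of the two modeling steps: waveform-length feature extraction and
# a Tanaka-style possibilistic linear regression (PLR). Illustrative only.
import numpy as np
from scipy.optimize import linprog


def waveform_length(signal, win=50, step=25):
    """Sliding-window waveform length: the sum of absolute successive
    differences within each window (a common EMG time-series feature)."""
    sig = np.asarray(signal, dtype=float)
    return np.array([np.sum(np.abs(np.diff(sig[s:s + win])))
                     for s in range(0, len(sig) - win + 1, step)])


def possibilistic_lr(X, y, h=0.0):
    """Fit y ~ (a_j, c_j) x_j with fuzzy coefficients: centers a_j and
    non-negative spreads c_j, chosen so every observation lies inside the
    predicted fuzzy interval at level h, while total spread is minimized.
    Solved as a linear program. Returns (centers, spreads), intercept first."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    y = np.asarray(y, dtype=float)
    m, n = X.shape
    absX = np.abs(X)

    # Objective: minimize total spread over all observations.
    cost = np.concatenate([np.zeros(n), absX.sum(axis=0)])

    # Inclusion constraints (as A_ub @ z <= b_ub, z = [a, c]):
    #   a.x_i + (1-h) c.|x_i| >= y_i   and   a.x_i - (1-h) c.|x_i| <= y_i
    A_ub = np.block([[-X, -(1 - h) * absX],
                     [X, -(1 - h) * absX]])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * n + [(0, None)] * n  # centers free, spreads >= 0

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n], res.x[n:]
```

In use, the waveform-length features of the pupil and EMG streams would form the columns of `X`, and the trust level the vector `y`; the fitted spreads `c` then express the model's possibilistic uncertainty around each trust prediction.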
Additionally, the findings of this study are anticipated to inform the development of collision warning systems, enhancing safety and device-user interaction.
Access to files is restricted to the campuses of the University of Missouri.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License. Copyright held by author.