
A design methodology for trust cue calibration in cognitive agents

Ewart J. de Visser, Marvin Cohen, Amos Freedy, Raja Parasuraman

pp. 251-262

As decision support systems have developed more advanced algorithms to support the human user, it has become increasingly difficult for operators to verify and understand how the automation arrives at its decisions. This paper describes a design methodology to enhance operators' decision making by providing trust cues so that their perceived trustworthiness of a system matches its actual trustworthiness, thus yielding calibrated trust. These trust cues consist of visualizations that help diagnose the actual trustworthiness of the system by showing the risk and uncertainty of the associated information. We present a trust cue design taxonomy that lists all possible information that can influence a trust judgment. We apply this methodology to a scenario with advanced automation that manages missions for multiple unmanned vehicles, and we show specific trust cues for 5 levels of trust evidence. By focusing on both individual operator trust and the transparency of the system, our design approach allows for calibrated trust for optimal decision making, supporting operators during all phases of mission execution.

Publication details

DOI: 10.1007/978-3-319-07458-0_24

Full citation:

de Visser, E. J., Cohen, M., Freedy, A., Parasuraman, R. (2014). A design methodology for trust cue calibration in cognitive agents, in R. Shumaker & S. Lackey (eds.), Virtual, Augmented and Mixed Reality: Designing and Developing Virtual and Augmented Environments, Dordrecht, Springer, pp. 251-262.
