
The Impacts of Explainability and Confidence on User Trust of a Digital Assistant

ESARDA Bulletin - The International Journal of Nuclear Safeguards and Non-Proliferation

Details

Identification
DOI: 10.3011/ESARDA.IJNSNP.2025.2
Publication date
1 October 2025
Author
Joint Research Centre

Description

Volume: 67, December 2025, pages 18-36

Authors: Jamie L. Coram (a), Zoe N. Gastelum (b), Breannan C. Howell (c), Kristin M. Divis (d)

(a) Sandia National Laboratories, All-Source Analytics Department, 1515 Eubank Blvd SE, Albuquerque, NM, USA 87123
(b) Sandia National Laboratories, Proliferation Detection Technologies Department, 1515 Eubank Blvd SE, Albuquerque, NM, USA 87123
(c) Sandia National Laboratories, Applied Cognitive Science Department, 1515 Eubank Blvd SE, Albuquerque, NM, USA 87123
(d) Sandia National Laboratories, Radar ISR Advanced Exploitation and Human-Systems Integration Department, 1515 Eubank Blvd SE, Albuquerque, NM, USA 87123

Abstract: Due to increasing demands on International Atomic Energy Agency (IAEA) safeguards inspectors in the field, applications to assist inspectors are being developed and evaluated across the safeguards research community. Several capabilities are currently being developed for eventual inclusion in a safeguards-specific digital assistant. Despite these impressive advances, the development of many of these tools has focused on the core functionality of the application rather than on the overall performance of the human-machine team. In this paper, we present research focused on improving overall system performance by establishing appropriate levels of user trust in voice-controlled digital assistants (which we call voice user interfaces, or VUIs). We hypothesize that the human-machine team will perform most efficiently when human users trust the machine enough to benefit from its performance-enhancing capabilities, but not so much that they become reliant on its recommendations and complacent in the face of potential errors. We conducted a series of human performance experiments with participants from the general population that measured human trust in a VUI intended for use during IAEA safeguards inspections. Focusing specifically on trust in the VUI, our experiments manipulated two safeguards-relevant trust factors – explanation and confidence – in the context of a seal examination task. We describe the results of our experiments and conclude with recommendations to the community for developing and refining VUIs to maximize overall system performance within the safeguards domain.

Keywords: Trust, Artificial Intelligence, International Safeguards, Explainability, Confidence, Seal Examination

Reference guideline: Jamie L. Coram, Zoe N. Gastelum, Breannan C. Howell, Kristin M. Divis (2025, December). The Impacts of Explainability and Confidence on User Trust of a Digital Assistant, ESARDA Bulletin - The International Journal of Nuclear Safeguards and Non-Proliferation, 67, 18-36. https://doi.org/10.3011/ESARDA.IJNSNP.2025.2
