Problem
The user needs to understand why the AI system did what it did for a particular decision.
Solution
Make an explanation available for one specific action or decision the AI system made.
Use when
- The user wants transparency into the system’s reasoning about a specific action.
- Policy or regulations require the system to make an explanation available.
How
Get information about how the AI system made the decision. See patterns G11-B through G11-G for different explanation styles.
If a local explanation isn’t possible (e.g., when the AI system doesn’t pass the UI information useful for constructing one), consider advocating with your team to pursue known methods for generating such explanations.
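As a concrete illustration, here is a minimal sketch of one such method: computing per-feature contributions for a single decision with a linear model and returning them to the UI. This is one possible approach, not a prescribed one; the model, feature names, and data below are hypothetical placeholders.

```python
# Minimal sketch: per-decision (local) contributions for a linear model.
# The model, feature names, and training data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["amount", "account_age_days", "prior_disputes"]  # hypothetical
X_train = np.array([[120.0, 400, 0], [999.0, 12, 3], [35.0, 980, 0], [450.0, 30, 2]])
y_train = np.array([0, 1, 0, 1])  # 1 = flagged as suspicious (illustrative label)

model = LogisticRegression().fit(X_train, y_train)

def explain_one(x: np.ndarray) -> dict:
    """Contribution of each feature to the linear score of this one decision."""
    contributions = model.coef_[0] * x  # per-feature term in the linear score
    decision = int(model.predict(x.reshape(1, -1))[0])
    return {
        "decision": decision,
        "contributions": dict(zip(feature_names, contributions.round(2).tolist())),
    }

# Payload the backend could attach to this specific decision for the UI.
print(explain_one(np.array([875.0, 20.0, 1.0])))
```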
Ensure that the representation communicates that the explanation is specific to one system decision. For example, use proximity or other principles of grouping to make that association clear to the user.
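For instance, the sketch below (all names and values are hypothetical) shows a payload shape in which each explanation is nested inside the one decision it describes, making it natural for the UI to render the two together rather than implying the explanation holds globally.

```python
# Minimal sketch: an explanation bound to exactly one decision, so the UI can
# group them visually. All names and values here are hypothetical.
from dataclasses import dataclass

@dataclass
class LocalExplanation:
    summary: str                   # short, user-facing reason for this decision
    contributions: dict            # per-feature contributions for this decision only

@dataclass
class Decision:
    decision_id: str               # the single decision this explanation describes
    label: str
    explanation: LocalExplanation  # nested, never shared across decisions

flagged = Decision(
    decision_id="txn-8841",
    label="Flagged as suspicious",
    explanation=LocalExplanation(
        summary="Flagged mainly because the amount is unusually high for this account.",
        contributions={"amount": 1.9, "account_age_days": -0.4, "prior_disputes": 0.6},
    ),
)

# Rendering flagged.explanation inside the same card as the decision keeps the
# explanation visually grouped with the one decision it explains.
print(flagged)
```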
User benefits
- Enables the user to understand specific AI system decisions.
- Enables the user to quickly update their understanding of how the system behaves.
- Enables the user to understand the system’s reasoning, which in turn enables them to predict the system’s behavior and troubleshoot when that behavior is undesirable.
Common pitfalls
- Implying or communicating through your design that the same explanation applies to other decisions. If the explanation does apply to multiple decisions, consider pattern G11-B: Global explanations.
- Giving explanations when the system’s confidence in that decision is low. If confidence is low, consider instead patterns for communicating uncertainty: G2-A: Match the level of precision in UI communication with system performance – Language and G2-B: Match the level of precision in UI communication with system performance – Numbers. See also Guideline 10, Scope services when in doubt.
- Overwhelming the user with too much information in the explanation.
- Portraying the AI system as more capable than it is, or as mysterious and impossible to understand (e.g., as “magic”).