Problem
The user needs an explanation of why the system did what it did, but an explanation of the single instance alone may not be sufficient.
Solution
Provide one or more examples of instances similar to the user’s input, along with their corresponding explanations.
Use when
- It is more efficient for the user to understand the AI system’s behavior from examples.
- A full explanation is too complex to communicate efficiently.
- Often useful for systems dealing with images, audio, or video.
- The user doesn’t understand why the system took a specific action.
- The user wants to understand the system’s logic.
- Policy or regulations require the system to make an explanation available.
How
Collaborate with an AI/ML practitioner to determine how examples similar to the user’s input can be extracted from the system’s past data by:
- Defining criteria for similarity.
- Retrieving top examples similar to the user’s input, based on similarity criteria.
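The two steps above can be sketched as follows. This is a minimal illustration, assuming examples are represented as numeric feature embeddings and using cosine similarity as the similarity criterion; the function name and the toy data are hypothetical, and any domain-appropriate representation and metric could be substituted.

```python
import numpy as np

def top_k_similar(query_vec, example_vecs, k=3):
    """Return indices of the k stored examples most similar to the query,
    using cosine similarity over feature embeddings as the criterion."""
    q = query_vec / np.linalg.norm(query_vec)
    e = example_vecs / np.linalg.norm(example_vecs, axis=1, keepdims=True)
    sims = e @ q                   # cosine similarity of each example to the query
    return np.argsort(-sims)[:k]   # indices of the top-k matches, best first

# Toy usage: four stored examples in a 2-D feature space.
examples = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.5, 0.5]])
query = np.array([1.0, 0.2])
print(top_k_similar(query, examples, k=2))
```

In practice the retrieval step would run over the system’s logged instances, and the similarity criterion should be chosen together with the considerations below.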
When selecting what examples to display, consider:
- Which dimension(s) of similarity between examples and user input to illustrate.
- Including diverse examples to illustrate multiple dimensions of similarity, avoiding redundancy.
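The redundancy concern above can be addressed with a greedy diversity-aware selection, in the spirit of maximal marginal relevance. The sketch below is illustrative, assuming precomputed similarity scores to the user’s input (`sims`) and pairwise similarities between candidate examples (`pairwise`); the function name, the trade-off weight `lam`, and the toy data are hypothetical.

```python
import numpy as np

def select_diverse(sims, pairwise, k=3, lam=0.7):
    """Greedily pick k examples, trading off similarity to the query (sims)
    against redundancy with already-chosen examples (pairwise)."""
    chosen = [int(np.argmax(sims))]            # start with the closest match
    while len(chosen) < k:
        best, best_score = None, -np.inf
        for i in range(len(sims)):
            if i in chosen:
                continue
            redundancy = max(pairwise[i][j] for j in chosen)
            score = lam * sims[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

# Toy usage: candidates 0 and 1 are near-duplicates of each other.
sims = np.array([0.9, 0.85, 0.5, 0.4])
pairwise = np.array([
    [1.0, 0.99, 0.1, 0.2],
    [0.99, 1.0, 0.1, 0.2],
    [0.1, 0.1, 1.0, 0.3],
    [0.2, 0.2, 0.3, 1.0],
])
print(select_diverse(sims, pairwise, k=2))
```

With these inputs the second pick skips the near-duplicate candidate in favor of a less similar but more informative one, illustrating how multiple dimensions of similarity can be surfaced.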
Provide an initial set of examples and enable the user to access additional ones on demand.
User benefits
- Facilitates user understanding by showing rather than telling.
- Enables the user to understand specific AI system decisions.
- Builds user trust in the system.
- Enables the user to quickly understand the system’s reasoning.
- This understanding in turn enables the user to predict the system’s behavior and to troubleshoot when that behavior is undesirable.
Common pitfalls
- Criteria for similarity are not meaningful to users.
- Providing too few or too many examples for the user’s context.
References
- Saleema Amershi, James Fogarty, Ashish Kapoor, and Desney Tan. 2009. Overview based example selection in end user interactive concept learning. In Proceedings of the 22nd annual ACM symposium on User interface software and technology (UIST ’09). Association for Computing Machinery, New York, NY, USA, 247–256. DOI: https://doi.org/10.1145/1622176.1622222
- Carrie J. Cai, Jonas Jongejan, and Jess Holbrook. 2019. The effects of example-based explanations in a machine learning interface. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19). Association for Computing Machinery, New York, NY, USA, 258–262. DOI: https://doi.org/10.1145/3301275.3302289