Problem
The user needs insights into why the system did what it did.
Solution
Provide an explanation that enables the user to infer a connection between input attributes and the system’s decision(s). Often used together with G11-E: Map user behaviors to system outputs.
Use when
- The system’s decision can be (partially) explained by input attributes. If system decisions depend on both input attributes and user behaviors, consider combining with G11-E: Map user behaviors to system outputs.
- It’s desirable for the system to be transparent and to enable the user to understand which attributes the system uses to decide its output.
- The user doesn’t understand why the system took a specific action.
- The user wants to understand the system’s logic.
- Policy or regulations require the system to make an explanation available.
How
Collaborate with an AI/ML practitioner to collect information about which input attributes inform the system’s decisions:
- Identify which input attributes, as well as interactions between attributes, the system uses to determine its decisions.
- Retrieve the most important attributes for the system’s output (one possible approach is sketched below).
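One possible way to retrieve and rank the most important attributes is sketched below, assuming a scikit-learn-style model and a held-out validation set; the dataset, model, and variable names are illustrative placeholders rather than part of any particular system.

```python
# A minimal sketch: rank input attributes by permutation importance,
# assuming a scikit-learn-style model and a held-out validation set.
# The dataset and model here are stand-ins for the real system.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_valid, y_train, y_valid = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance measures how much shuffling each attribute
# degrades the model's performance on held-out data.
result = permutation_importance(
    model, X_valid, y_valid, n_repeats=10, random_state=0
)

# Keep only the top few attributes for the explanation shown to the user.
ranked = sorted(
    zip(data.feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Attribute rankings like this can feed the content of global explanations described below; the exact technique should be chosen with the AI/ML practitioner.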
The explanation might cover a specific system decision (see G11-A: Local explanations) or general system behavior (see G11-B: Global explanations).
The explanation might include, for each displayed attribute, information about its importance in the system’s decision making.
The content of local explanations can also include the set of attributes most influential to that specific output, as in the sketch below.
The content of global explanations can also include the types of attributes the system uses to determine its decisions.
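For the local case, the sketch below (reusing model, X_train, X_valid, and data from the sketch above) estimates per-attribute influence on one specific decision by replacing each attribute with its training mean and measuring how the predicted probability shifts. This hand-rolled perturbation approach is only illustrative; a production system would use whatever explanation method the AI/ML practitioner recommends.

```python
# A minimal local-explanation sketch, reusing model, X_train, X_valid, and
# data from the sketch above. Each attribute is replaced with its training
# mean to estimate how much it influenced this particular prediction.
def local_attribute_contributions(model, x, X_train, feature_names, top_k=5):
    """Rank attributes by how much replacing them shifts this prediction."""
    baseline = model.predict_proba(x.reshape(1, -1))[0, 1]
    contributions = []
    for i, name in enumerate(feature_names):
        perturbed = x.copy()
        perturbed[i] = X_train[:, i].mean()
        shifted = model.predict_proba(perturbed.reshape(1, -1))[0, 1]
        contributions.append((name, baseline - shifted))
    # The largest absolute shifts mark the attributes most influential
    # for this specific decision.
    contributions.sort(key=lambda pair: abs(pair[1]), reverse=True)
    return contributions[:top_k]

# Explain one decision: positive values mean the attribute's actual value
# pushed the prediction up relative to the average; negative, down.
for name, shift in local_attribute_contributions(
    model, X_valid[0], X_train, data.feature_names
):
    print(f"{name}: {shift:+.3f}")
```

Displaying only the top few attributes, along with the direction of their influence, also helps avoid overwhelming the user (see Common pitfalls).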
User benefits
- Enables the user to understand how the system connects input attributes to decisions.
- Gives the user insights into the system’s reasoning.
- Enhances user trust because of system transparency.
Common pitfalls
- The explanation identifies which inputs are important but not why they matter.
- General summaries of attributes are too vague.
- It’s difficult for the user to infer the connection between system inputs and system outputs.
- It’s difficult for the user to interpret the meaning of an attribute’s importance.
- The reasoning behind the system’s decision is invalid, even though the decision itself is relevant to the user.
- Too much information in an explanation can overwhelm the user.