Problem
The user needs to understand how to change their input in order to achieve a specific system output (see G11-A: Local explanations).
Solution
Enable users to simulate and experiment with alternative input values that might change the system’s decision.
Use when
- The user wants decision-making support.
- The user wants to see “what if” kinds of answers.
- The user wants to predict system decisions.
- The user wants to understand the system’s logic.
- The system has the capability to calculate specific relationships among variables.
How
Provide users the ability to simulate different system decisions by changing:
- The initial or default input by increasing or decreasing it.
- Influencing conditions to make them better or worse.
When enabling users to engage in simulations, support changing the following (a minimal sketch follows this list):
- Control – Enable users to change aspects they can realistically control.
- Actions – Enable users to modify actions rather than failures to act.
- Recency – Enable users to modify the most recent event.
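The sketch below illustrates one way such a "what if" simulation could be wired up. It is a minimal example, not a prescribed implementation: the names (`simulate_what_if`, `loan_model`, `applicant_row`, the feature names) are hypothetical, and it assumes a scikit-learn-style classifier with `predict_proba` and a pandas row of feature values.

```python
# Minimal "what if" sketch (all names hypothetical). Assumes a scikit-learn-style
# classifier with predict_proba and a pandas Series holding the user's current input.
import pandas as pd

def simulate_what_if(model, current_input: pd.Series, changes: dict) -> dict:
    """Return the model's score before and after the user's hypothetical changes."""
    baseline = current_input.to_frame().T
    alternative = baseline.copy()
    for feature, new_value in changes.items():
        alternative[feature] = new_value  # e.g., the user increases "income"
    return {
        "original_score": float(model.predict_proba(baseline)[0, 1]),
        "what_if_score": float(model.predict_proba(alternative)[0, 1]),
        "changes": changes,
    }

# Example: "What if my income were 5,000 higher and my credit utilization lower?"
# result = simulate_what_if(loan_model, applicant_row,
#                           {"income": applicant_row["income"] + 5000,
#                            "credit_utilization": 0.2})
```

Exposing only user-controllable features (per the Control, Actions, and Recency points above) keeps the simulation aligned with changes the user could actually make.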
This type of explanation can be made actionable by also recommending specific input values that achieve the user's desired output.
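As a rough sketch of turning "what if" into a recommendation, the example below searches candidate values of one user-controllable feature for the change closest to the current value that pushes the decision past a desired threshold. Again, the names and the candidate search are illustrative assumptions, not the method prescribed by this pattern.

```python
# Hypothetical recommendation sketch: find the smallest change to one controllable
# feature that yields the desired outcome (e.g., loan approval).
import pandas as pd

def recommend_value(model, current_input: pd.Series, feature: str,
                    candidates, threshold: float = 0.5):
    """Return the candidate value closest to the current one that meets the threshold."""
    baseline = current_input.to_frame().T
    viable = []
    for value in candidates:
        alternative = baseline.copy()
        alternative[feature] = value
        if model.predict_proba(alternative)[0, 1] >= threshold:
            viable.append(value)
    if not viable:
        return None  # no achievable recommendation within the candidate range
    return min(viable, key=lambda v: abs(v - current_input[feature]))

# Example: "Raise income to at least X to be approved."
# suggestion = recommend_value(loan_model, applicant_row, "income",
#                              candidates=range(30_000, 120_000, 5_000))
```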
User benefits
Supports decision making by enabling users to understand the consequences of alternative inputs, states, or conditions.
Common pitfalls
- The cause and effect relationships in the simulation are not clear to the user.
- It is hard for the user to make a decision based on the simulation.
- The simulation fails, e.g., the outcome change is unpredictable or inconsistent, or repeated input alterations don't change the output. Collaborate with an AI/ML practitioner to mitigate such failures: for example, establish a mechanism for detecting repeated identical failures (a minimal sketch follows this list), and/or enable the user to provide feedback (see Guideline 15, Encourage granular feedback).
- It is not clear to the user what can be altered.
- It is not clear to the user how to alter a specific state.
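One possible way to detect the repeated-failure case mentioned above is to track the outcomes of successive simulations in a session and surface a feedback prompt when several distinct alterations all yield the same result. The class and function names below are hypothetical.

```python
# Minimal sketch (hypothetical names) for flagging uninformative simulations:
# if several distinct input alterations all produce the same outcome, prompt for
# feedback instead of letting the user keep guessing.
class WhatIfMonitor:
    def __init__(self, max_identical: int = 3):
        self.max_identical = max_identical
        self.outcomes = []

    def record(self, outcome) -> bool:
        """Record one simulated outcome; return True when feedback should be requested."""
        self.outcomes.append(outcome)
        recent = self.outcomes[-self.max_identical:]
        return (len(recent) == self.max_identical
                and all(o == recent[0] for o in recent))

# monitor = WhatIfMonitor()
# if monitor.record(result["what_if_score"]):
#     show_feedback_prompt()  # e.g., per Guideline 15, Encourage granular feedback
```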
References
- Byrne, R. M. J. (2019). Counterfactuals in explainable artificial intelligence (XAI): Evidence from human reasoning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19), pp. 6276–6282.
- Ustun, B., Spangher, A., & Liu, Y. (2019). Actionable recourse in linear classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency.