Make clear why the system did what it did
Enable the user to access an explanation of why the AI system behaved as it did.
Make available an explanation for the AI system’s actions/outputs as appropriate.
Apply this guideline judiciously, keeping in mind that the mere presence of an explanation has been shown to increase user trust. This can lead to over-reliance on the system and over-inflated expectations, which in turn can cause people to trust the AI even when it is wrong (automation bias). For setting expectations, see also Guideline 1 and Guideline 2.
An explanation can be global, describing the behavior of the system as a whole, or local, explaining an individual output. Mix and match explanation patterns as needed, keeping in mind that not all explanations are equally effective in every scenario: studies have shown that an explanation’s content and design significantly affect whether it helps or distracts people from achieving their goals.
Use tools such as InterpretML to improve model explainability.
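As a minimal illustration of global vs. local explanations, the sketch below trains an InterpretML glass-box model (an Explainable Boosting Machine) and surfaces both kinds of explanation. The dataset and model choice are placeholders; adapt them to your own scenario.

```python
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Illustrative public dataset; substitute your own features and labels.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Glass-box model whose structure is directly interpretable.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: which features drive the model overall (G11-B).
show(ebm.explain_global(name="EBM overall behavior"))

# Local explanation: why the model scored these specific examples (G11-A).
show(ebm.explain_local(X_test[:5], y_test[:5], name="EBM per-prediction"))
```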
Use Guideline 11 patterns (mix and match as appropriate) to explain the AI system’s behavior; a sketch illustrating two of these patterns follows the list:
- G11-A: Local explanations
- G11-B: Global explanations
- G11-C: Present properties of system outputs
- G11-D: Map input attributes to system outputs
- G11-E: Map user behaviors to system outputs
- G11-F: Example-based explanations
- G11-G: “What if?” explanations
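As a hedged sketch of how two of these patterns might surface in a user interface, the example below renders G11-D (map input attributes to system outputs) and G11-G (“What if?” explanations) as plain-language messages. The feature names, contribution values, and scoring function are hypothetical placeholders, not part of any particular toolkit.

```python
from typing import Callable, Dict

def explain_local_output(contributions: Dict[str, float], top_k: int = 3) -> str:
    """G11-D: summarize the input attributes that most influenced one output."""
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    parts = [f"{name} ({'+' if w >= 0 else ''}{w:.2f})" for name, w in top]
    return "This result was most influenced by: " + ", ".join(parts)

def what_if(predict: Callable[[Dict[str, float]], float],
            example: Dict[str, float], attribute: str, new_value: float) -> str:
    """G11-G: show how the output would change if one attribute changed."""
    changed = {**example, attribute: new_value}
    return (f"If {attribute} were {new_value}, the score would change "
            f"from {predict(example):.2f} to {predict(changed):.2f}.")

# Usage with a stand-in scoring function and made-up attribute names.
score = lambda row: 0.4 * row["recent_activity"] + 0.2 * row["account_age"]
print(explain_local_output({"recent_activity": 0.35, "account_age": -0.10, "region": 0.02}))
print(what_if(score, {"recent_activity": 0.8, "account_age": 2.0}, "recent_activity", 0.1))
```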