HAX Playbook

Plan for common human-AI interaction failures early.

What is the Playbook?

The HAX Playbook is a tool for proactively and systematically exploring common human-AI interaction failures. The Playbook enumerates failures relevant to your AI product scenario so you can design ways for your end-users to recover efficiently. The Playbook also provides practical guidance and examples on how to inexpensively simulate system behaviors for early user testing.

How do I use the Playbook?

Use the Playbook by answering a few questions about your planned system and interactively exploring the likely human-AI interaction success and failure scenarios it generates. These scenarios can be shared with your team or exported to your project management and tracking tools to ensure potential failures are mitigated or tested before deployment.

The HAX Playbook is currently in preview for natural language scenarios like conversational agents and text prediction. We’d love to learn from you as we refine it. You can also explore our GitHub page to learn how to extend the Playbook to other AI scenarios.

Why the Playbook?

It’s hard to anticipate all the ways an AI system may go wrong when interacting with people in the real world before the system is built. However, some types of human-AI interaction failures are foreseeable because AI models are, by nature, simplifications of the world. Methods for inexpensively simulating AI behaviors to enable early user testing also continue to mature. We created the HAX Playbook to provide a low-cost way for teams to proactively identify, design for, and test human-AI failure scenarios before building fully functional systems.

Learn more about the research behind the Playbook.