Techniques for ML Model Transparency and Debugging

Without good models and the right tools to interpret them, data scientists risk making decisions based on hidden biases, spurious correlations, and false generalizations. This has led to a rallying cry for model interpretability. Yet the concept of interpretability remains nebulous, leaving researchers and tool designers without actionable guidelines for how to incorporate interpretability into models and their accompanying tools. This panel brings together experts in visualization, machine learning, and human-computer interaction to present their views and discuss these complicated issues.

Date:
Speakers:
Gonzalo Ramos (Microsoft Research), Daniel S. Weld (University of Washington), Matthew Kay (University of Michigan), Rich Caruana (Microsoft Research)

Series: Microsoft Research Faculty Summit