July 20–23, 2020

Frontiers in Machine Learning 2020

9:00 AM–12:30 PM Pacific

Location: Virtual

Wednesday, July 22, 2020

Theme: Interpretability and Explanation

Schedule (all times PDT)
9:00 AM–10:30 AM Machine Learning Reliability and Robustness
[Video]

Session Lead: Besmira Nushi, Microsoft

Session Abstract: As machine learning (ML) systems increasingly become part of user-facing applications, their reliability and robustness are key to building and maintaining trust with users and customers, especially in high-stakes domains. While advances in learning continuously improve model performance in expectation, there is an emergent need to identify, understand, and mitigate cases where models may fail in unexpected ways. This session will discuss ML reliability and robustness from both theoretical and empirical perspectives. In particular, it will summarize important ongoing work that focuses on reliability guarantees and on how such guarantees translate (or fail to translate) to real-world applications. Further, the talks and the panel will discuss (1) properties of ML algorithms that make them preferable to others from a reliability and robustness lens, such as interpretability, consistency, and transportability, and (2) the tooling support that ML developers need to check and build for reliable and robust ML. The discussion will be grounded in real-world applications of ML in vision and language tasks, healthcare, and decision making.

Thomas Dietterich, Oregon State University
Anomaly Detection in Machine Learning and Computer Vision

Ece Kamar, Microsoft
AI in the Open World: Discovering Blind Spots of AI

Suchi Saria, Johns Hopkins University
Implementing Safe & Reliable ML: 3 key areas of development

Q&A panel with all three speakers

10:30 AM–11:00 AM BREAK
11:00 AM–12:30 PM Saving Lives with Interpretable ML
[Video]

Session Lead: Rich Caruana, Microsoft

Session Abstract: This session is about saving lives using interpretable machine learning in healthcare. It is critical to ensure that healthcare models are safe to deploy. One challenge is that most patients are already receiving treatment, and that treatment affects the data. For example, a model might learn that high blood pressure is good for you, because patients with high blood pressure receive treatment that lowers their risk relative to healthier patients with lower blood pressure. There are many ways confounding like this can cause models to make dangerously wrong predictions. In the first presentation, Rich Caruana will talk about problems in healthcare data that interpretable machine learning reveals. In the second presentation, Ankur Teredesai from the University of Washington will talk about fairness in machine learning for healthcare. And in the last presentation, Marzyeh Ghassemi from the University of Toronto will talk about how interpretable, explainable, and transparent AI can be dangerous in healthcare. It is an exciting lineup, so please join us!

Rich Caruana, Microsoft
Saving Lives with Interpretable Machine Learning

Ankur Teredesai, University of Washington
Fairness in Healthcare AI

Marzyeh Ghassemi, University of Toronto
Expl-AI-n Yourself: The False Hope of Explainable Machine Learning in Healthcare