About
Miro Dudík’s research focuses on combining theoretical and applied aspects of machine learning, statistics, convex optimization, and algorithms. Most recently, he has worked on contextual bandits, reinforcement learning, and algorithmic fairness.
He received his PhD from Princeton in 2007. He is a co-founder of Fairlearn, an open-source, community-driven project whose goal is to help data scientists improve the fairness of AI systems. He is also a co-creator of the Maxent software for modeling species distributions, which is used by biologists around the world to design national parks, model the impacts of climate change, and discover new species.
Featured content
Fairness-related harms in AI systems: Examples, assessment, and mitigation webinar
In this webinar, Microsoft researchers Hanna Wallach and Miroslav Dudík explain how AI systems can lead to a variety of fairness-related harms. They then dive deeper into assessing and mitigating two specific types: allocation harms and quality-of-service harms. Allocation harms occur when AI systems allocate resources or opportunities in ways that can have significant negative impacts on people’s lives, often in high-stakes domains like education, employment, finance, and healthcare. Quality-of-service harms occur when AI systems, such as speech recognition or face detection systems, fail to provide a similar quality of service to different groups of people.