Differential Privacy (DP) is a key technology for computing statistics and training machine learning models over private data. Microsoft pioneered differential privacy research back in 2006. Since then, DP has established itself as the de facto standard privacy notion, with a vast body of academic literature and a growing number of large-scale deployments across the industry. Among its many strengths, the promise of DP is intuitive to explain: no matter what an adversary knows about the data, the privacy of any single user is protected in the output of the data analysis or the machine learning model.
The broad goal of Project Laplace is to enable privacy-preserving machine learning and data analysis using differential privacy. This has taken our team in two directions: 1) mathematical and algorithmic research on the design of new differentially private algorithms, and 2) supporting engineering teams at Microsoft that deploy DP algorithms in products. Our team currently focuses on algorithms for differentially private machine learning in NLP scenarios, differentially private database query processing, and differentially private telemetry collection.
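To give a flavor of the mechanism the project is named after, here is a minimal sketch of the Laplace mechanism applied to a counting query. The function names and parameters are illustrative, not project code; real deployments use carefully vetted libraries rather than ad-hoc samplers.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing one user's
    record changes the true count by at most 1, so Laplace noise with
    scale 1/epsilon suffices for epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller values of `epsilon` give stronger privacy at the cost of noisier answers; the noise scale grows as `1/epsilon` regardless of how many records are in the data.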
Our project is part of the broader PPML initiative.
Learn more:
Assistive AI Makes Replying Easier
Differential Privacy in Workplace Analytics