A Game-theoretic Framework for Privacy-preserving Federated Learning
- Xiaojin Zhang
- Lixin Fan
- Siwei Wang
- Wenjie Li
- Kai Chen
- Qiang Yang
ACM Transactions on Intelligent Systems and Technology, Vol. 15, pp. 1-35
In federated learning (FL), benign participants aim to collaboratively optimize a global model. However, the risk of privacy leakage cannot be ignored in the presence of semi-honest adversaries. Existing research has focused either on designing protection mechanisms or on devising attack mechanisms. While the battle between defenders and attackers seems never-ending, we are concerned with one critical question: is it possible to prevent potential attacks in advance? To address this, we propose the first game-theoretic framework that models both FL defenders and attackers in terms of their respective payoffs, which comprise computational costs, FL model utilities, and privacy-leakage risks. We name this game the federated learning privacy game (FLPG), in which neither defenders nor attackers are aware of all participants' payoffs. To handle the incomplete information inherent in this situation, we associate the FLPG with an oracle that has two primary responsibilities. First, the oracle provides lower and upper bounds on the players' payoffs. Second, the oracle acts as a correlation device, privately suggesting an action to each player. With this novel framework, we analyze the optimal strategies of defenders and attackers. Furthermore, we derive and demonstrate conditions under which a rational attacker should always follow the oracle's suggestion not to attack.
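The obedience condition sketched in the abstract can be made concrete with a small numerical example. The Python snippet below is a minimal illustrative sketch, not the paper's model: the 2x2 action sets, all payoff numbers, and the correlating distribution are assumptions chosen for exposition. It models a defender-attacker game whose payoffs combine protection or attack costs, model utility, and privacy-leakage terms, and it checks the standard correlated-equilibrium obedience constraint: conditioned on the oracle's private suggestion, does the attacker gain by deviating?

```python
# Hypothetical 2x2 FLPG-style game; all numbers are illustrative assumptions.
# Defender actions: "protect" (pay a protection cost, block leakage) or "idle".
# Attacker actions: "attack" (pay a computation cost, gain from leakage) or "abstain".

DEF_ACTIONS = ("protect", "idle")
ATK_ACTIONS = ("attack", "abstain")

# payoff[(d, a)] = (defender_payoff, attacker_payoff)
payoff = {
    ("protect", "attack"):  (-1.0, -2.0),  # attack fails against protection; attacker eats its cost
    ("protect", "abstain"): (-0.5,  0.0),  # defender still pays the protection cost
    ("idle",    "attack"):  (-5.0,  3.0),  # leakage: large defender loss, attacker gain
    ("idle",    "abstain"): ( 0.0,  0.0),
}

# Correlation device: a distribution over joint action profiles. The oracle draws
# a profile and privately tells each player only its own suggested action.
device = {
    ("protect", "attack"):  0.0,
    ("protect", "abstain"): 0.7,
    ("idle",    "attack"):  0.0,
    ("idle",    "abstain"): 0.3,
}

def attacker_obeys(suggested: str) -> bool:
    """Obedience check: given the private suggestion, is following it a best response?"""
    # Posterior over the defender's action, conditioned on the attacker's suggestion.
    mass = {d: device[(d, suggested)] for d in DEF_ACTIONS}
    total = sum(mass.values())
    if total == 0:
        return True  # this suggestion is never issued; the constraint holds vacuously
    posterior = {d: m / total for d, m in mass.items()}

    def expected(a: str) -> float:
        # Attacker's expected payoff for action a under the posterior.
        return sum(p * payoff[(d, a)][1] for d, p in posterior.items())

    return all(expected(suggested) >= expected(a) for a in ATK_ACTIONS)

for a in ATK_ACTIONS:
    print(f"attacker obeys suggestion '{a}': {attacker_obeys(a)}")
```

In this toy instance the device never suggests "attack", and protection is recommended often enough that an attacker told to abstain expects a strict loss from deviating (an expected payoff of -0.5 versus 0). This mirrors, in miniature, the kind of condition under which a rational attacker should always follow the oracle's suggestion not to attack.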