Task 019: Robustness of Neural Systems
| Event Date | June 9, 2022 |
| --- | --- |
| Time | 11:00 am (ET) / 8:00 am (PT) |
Yangsibo Huang, Princeton University
Gradient Inversion Attacks in Federated Learning: Generalizing From Image to Text
Abstract: The gradient inversion attack (i.e., recovering inputs from gradients) is an emerging threat to the security and privacy of Federated Learning: malicious eavesdroppers, or participants in the Federated Learning protocol itself, can partially recover clients' private data. In this talk, I will first walk you through our systematic evaluation [3] of current gradient inversion attacks on image data [1]. I will then discuss these attacks' limitations in generalizing to text data, and introduce a powerful new attack we recently developed [2]. I will conclude with some practices for improving Federated Learning's robustness against gradient inversion attacks.
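To make the threat model concrete, here is a minimal sketch of gradient inversion on a toy linear model with squared-error loss. This is an illustrative example of the general idea (optimizing a dummy input so its gradient matches the observed one), not code from the talk or from the GradAttack library; the model, values, and variable names are our own assumptions.

```python
import numpy as np

# Toy setup: a client computes the gradient of 0.5*(w.x - y)^2 on its
# private input x and shares it, as in Federated Learning. The attacker
# sees only that gradient plus the shared weights w (and, here, the
# label y), and tries to reconstruct x.
rng = np.random.default_rng(0)
dim = 8
w = rng.normal(size=dim)           # model weights shared in the FL protocol
x_private = rng.normal(size=dim)   # client's private input
y = 1.0                            # label, assumed known to the attacker

def grad_wrt_w(x):
    """Gradient of 0.5*(w.x - y)^2 with respect to w: (w.x - y) * x."""
    return (w @ x - y) * x

g_observed = grad_wrt_w(x_private)  # what the eavesdropper intercepts

# Attack: gradient descent on a dummy input to minimize the L2
# gradient-matching objective 0.5 * ||grad(x_dummy) - g_observed||^2.
x_dummy = rng.normal(size=dim)
initial_gap = np.linalg.norm(grad_wrt_w(x_dummy) - g_observed)
lr = 1e-3
for _ in range(20000):
    r = w @ x_dummy - y
    diff = r * x_dummy - g_observed
    # Chain rule through g(x) = (w.x - y) * x, whose Jacobian is
    # (w.x - y) * I + x w^T, so the objective's gradient in x is:
    grad_x = r * diff + w * (x_dummy @ diff)
    x_dummy -= lr * grad_x

final_gap = np.linalg.norm(grad_wrt_w(x_dummy) - g_observed)
print(f"gradient gap: {initial_gap:.4f} -> {final_gap:.6f}")
print(f"reconstruction error: {np.linalg.norm(x_dummy - x_private):.6f}")
```

Even in this toy case the inversion is not always unique (here a second input produces the identical gradient), which hints at why real attacks on deep networks need image priors or, for text, discrete-token tricks, as the talk discusses.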
References:
[1] Evaluating Gradient Inversion Attacks and Defenses in Federated Learning, Yangsibo Huang, Samyak Gupta, Zhao Song, Kai Li, Sanjeev Arora, in NeurIPS 2021 (Oral)
[2] Recovering Private Text in Federated Learning of Language Models, Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, Danqi Chen, under submission
[3] Our open-source library for gradient inversion evaluation: https://github.com/Princeton-SysML/GradAttack
Bio: Yangsibo Huang is a third-year Ph.D. student at Princeton University. Her research interests lie at the intersection of Systems and Machine Learning, especially the topics of privacy, security, and robustness. Her recent work focuses on identifying privacy risks in Federated Learning and developing potential defenses.