Article: Exploring the Vulnerabilities of Federated Learning: A Deep Dive Into Gradient Inversion Attacks

Title: Exploring the Vulnerabilities of Federated Learning: A Deep Dive Into Gradient Inversion Attacks
Authors: Guo, Pengxin; Wang, Runxi; Zeng, Shuang; Zhu, Jinjing; Jiang, Haoning; Wang, Yanran; Zhou, Yuyin; Wang, Feifei; Xiong, Hui; Qu, Liangqiong
Keywords: Data Privacy; Federated Learning; Gradient Inversion Attacks
Issue Date: 22-Dec-2025
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, p. 1-17
Abstract: Federated Learning (FL) has emerged as a promising privacy-preserving collaborative model training paradigm without sharing raw data. However, recent studies have revealed that private information can still be leaked through shared gradients and reconstructed by Gradient Inversion Attacks (GIA). While many GIA methods have been proposed, a detailed analysis, evaluation, and summary of these methods are still lacking. Although various survey papers summarize existing privacy attacks in FL, few studies have conducted extensive experiments to unveil the effectiveness of GIA and their associated limiting factors in this context. To fill this gap, we first undertake a systematic review of GIA and categorize existing methods into three types, i.e., optimization-based GIA (OP-GIA), generation-based GIA (GEN-GIA), and analytics-based GIA (ANA-GIA). Then, we comprehensively analyze and evaluate the three types of GIA in FL, providing insights into the factors that influence their performance, practicality, and potential threats. Our findings indicate that OP-GIA is the most practical attack setting despite its unsatisfactory performance, while GEN-GIA has many dependencies and ANA-GIA is easily detectable, making them both impractical. Finally, we offer a three-stage defense pipeline to users when designing FL frameworks and protocols for better privacy protection and share some future research directions from the perspectives of attackers and defenders that we believe should be pursued. We hope that our study can help researchers design more robust FL frameworks to defend against these attacks.
Persistent Identifier: http://hdl.handle.net/10722/368404
ISSN: 0162-8828
2023 Impact Factor: 20.8
2023 SCImago Journal Rankings: 6.158
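The gradient leakage that GIA exploits can be illustrated with a minimal toy sketch (our own example, not a method from the paper): for a single sample passing through a fully connected layer, the shared gradients reveal the input exactly, because dL/dW is the outer product of dL/db and the input.

```python
import numpy as np

# Illustrative sketch (not the paper's method): for a fully connected
# layer z = W x + b, we have dL/dW = (dL/dz) x^T and dL/db = dL/dz,
# so any row i with a nonzero bias gradient gives x = (dL/dW)[i] / (dL/db)[i].

rng = np.random.default_rng(0)
d_in, d_out = 5, 3
W = rng.normal(size=(d_out, d_in))
b = rng.normal(size=d_out)
x_private = rng.normal(size=d_in)      # the client's private input

# Forward pass with a simple squared loss against an arbitrary target.
z = W @ x_private + b
target = np.ones(d_out)
dL_dz = 2.0 * (z - target)             # gradient of sum((z - target)^2) w.r.t. z

# Gradients the client would share in FL:
grad_W = np.outer(dL_dz, x_private)    # dL/dW, a rank-one outer product
grad_b = dL_dz                         # dL/db

# Attacker-side reconstruction from the shared gradients alone:
i = int(np.argmax(np.abs(grad_b)))     # pick a row with nonzero bias gradient
x_recovered = grad_W[i] / grad_b[i]

print(np.allclose(x_recovered, x_private))  # True
```

Real attacks such as OP-GIA face batches, deep networks, and unknown labels, so they resort to optimizing dummy inputs whose gradients match the shared ones rather than this closed-form division; the sketch only shows why gradients carry recoverable input information at all.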

 

DC Field / Value
dc.contributor.author: Guo, Pengxin
dc.contributor.author: Wang, Runxi
dc.contributor.author: Zeng, Shuang
dc.contributor.author: Zhu, Jinjing
dc.contributor.author: Jiang, Haoning
dc.contributor.author: Wang, Yanran
dc.contributor.author: Zhou, Yuyin
dc.contributor.author: Wang, Feifei
dc.contributor.author: Xiong, Hui
dc.contributor.author: Qu, Liangqiong
dc.date.accessioned: 2026-01-06T00:35:28Z
dc.date.available: 2026-01-06T00:35:28Z
dc.date.issued: 2025-12-22
dc.identifier.citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, p. 1-17
dc.identifier.issn: 0162-8828
dc.identifier.uri: http://hdl.handle.net/10722/368404
dc.description.abstract: Federated Learning (FL) has emerged as a promising privacy-preserving collaborative model training paradigm without sharing raw data. However, recent studies have revealed that private information can still be leaked through shared gradients and reconstructed by Gradient Inversion Attacks (GIA). While many GIA methods have been proposed, a detailed analysis, evaluation, and summary of these methods are still lacking. Although various survey papers summarize existing privacy attacks in FL, few studies have conducted extensive experiments to unveil the effectiveness of GIA and their associated limiting factors in this context. To fill this gap, we first undertake a systematic review of GIA and categorize existing methods into three types, i.e., optimization-based GIA (OP-GIA), generation-based GIA (GEN-GIA), and analytics-based GIA (ANA-GIA). Then, we comprehensively analyze and evaluate the three types of GIA in FL, providing insights into the factors that influence their performance, practicality, and potential threats. Our findings indicate that OP-GIA is the most practical attack setting despite its unsatisfactory performance, while GEN-GIA has many dependencies and ANA-GIA is easily detectable, making them both impractical. Finally, we offer a three-stage defense pipeline to users when designing FL frameworks and protocols for better privacy protection and share some future research directions from the perspectives of attackers and defenders that we believe should be pursued. We hope that our study can help researchers design more robust FL frameworks to defend against these attacks.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Data Privacy
dc.subject: Federated Learning
dc.subject: Gradient Inversion Attacks
dc.title: Exploring the Vulnerabilities of Federated Learning: A Deep Dive Into Gradient Inversion Attacks
dc.type: Article
dc.identifier.doi: 10.1109/TPAMI.2025.3646639
dc.identifier.scopus: eid_2-s2.0-105025941048
dc.identifier.spage: 1
dc.identifier.epage: 17
dc.identifier.eissn: 1939-3539
dc.identifier.issnl: 0162-8828
