File Download
There are no files associated with this item.
Links for fulltext (may require subscription)
- Publisher Website: 10.1109/TPAMI.2025.3646639
- Scopus: eid_2-s2.0-105025941048
Citations:
- Scopus: 0
Article: Exploring the Vulnerabilities of Federated Learning: A Deep Dive Into Gradient Inversion Attacks
| Title | Exploring the Vulnerabilities of Federated Learning: A Deep Dive Into Gradient Inversion Attacks |
|---|---|
| Authors | Guo, Pengxin; Wang, Runxi; Zeng, Shuang; Zhu, Jinjing; Jiang, Haoning; Wang, Yanran; Zhou, Yuyin; Wang, Feifei; Xiong, Hui; Qu, Liangqiong |
| Keywords | Data Privacy; Federated Learning; Gradient Inversion Attacks |
| Issue Date | 22-Dec-2025 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, p. 1-17 |
| Abstract | Federated Learning (FL) has emerged as a promising paradigm for privacy-preserving collaborative model training without sharing raw data. However, recent studies have revealed that private information can still be leaked through shared gradients and reconstructed by Gradient Inversion Attacks (GIA). While many GIA methods have been proposed, a detailed analysis, evaluation, and summary of these methods is still lacking. Although various survey papers summarize existing privacy attacks in FL, few studies have conducted extensive experiments to unveil the effectiveness of GIA and their limiting factors in this context. To fill this gap, we first undertake a systematic review of GIA and categorize existing methods into three types: optimization-based GIA (OP-GIA), generation-based GIA (GEN-GIA), and analytics-based GIA (ANA-GIA). We then comprehensively analyze and evaluate all three types of GIA in FL, providing insights into the factors that influence their performance, practicality, and potential threats. Our findings indicate that OP-GIA is the most practical attack setting despite its unsatisfactory performance, while GEN-GIA has many dependencies and ANA-GIA is easily detectable, making both impractical. Finally, we offer users a three-stage defense pipeline for designing FL frameworks and protocols with better privacy protection, and we share future research directions, from the perspectives of both attackers and defenders, that we believe should be pursued. We hope that our study can help researchers design more robust FL frameworks to defend against these attacks. |
| Persistent Identifier | http://hdl.handle.net/10722/368404 |
| ISSN | 0162-8828 |
| Journal Metrics | 2023 Impact Factor: 20.8; 2023 SCImago Journal Rankings: 6.158 |
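The abstract above describes how private data can leak through shared gradients and names optimization-based GIA (OP-GIA) as the most practical attack family. To make that mechanism concrete, below is a minimal PyTorch sketch in the spirit of Deep Leakage from Gradients (Zhu et al., 2019): the attacker optimizes a dummy input until its gradient matches the gradient a client shared. The toy model, shapes, assumed-known label, and iteration count are illustrative assumptions, not details from this paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for the shared FL model; real attacks target deeper networks.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()

# Victim client: computes the gradient it would share with the server.
x_true = torch.rand(1, 1, 28, 28)   # private input, unknown to the attacker
y_true = torch.tensor([3])          # private label, assumed inferred here
shared_grads = [
    g.detach()
    for g in torch.autograd.grad(criterion(model(x_true), y_true),
                                 model.parameters())
]

# Attacker: optimizes a dummy input so its gradient matches the shared one.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy])

def closure():
    optimizer.zero_grad()
    dummy_grads = torch.autograd.grad(
        criterion(model(x_dummy), y_true),
        model.parameters(),
        create_graph=True,  # needed to backpropagate through the gradients
    )
    # L2 gradient-matching loss drives the reconstruction.
    loss = sum(((dg - sg) ** 2).sum()
               for dg, sg in zip(dummy_grads, shared_grads))
    loss.backward()
    return loss

for _ in range(30):
    optimizer.step(closure)

print("mean abs reconstruction error:",
      (x_dummy - x_true).abs().mean().item())
```

For this linear toy model the reconstruction converges almost exactly; the paper's finding that OP-GIA underperforms in practice concerns realistic settings (deep models, large batches, aggregated updates) that this sketch deliberately omits.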
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Guo, Pengxin | - |
| dc.contributor.author | Wang, Runxi | - |
| dc.contributor.author | Zeng, Shuang | - |
| dc.contributor.author | Zhu, Jinjing | - |
| dc.contributor.author | Jiang, Haoning | - |
| dc.contributor.author | Wang, Yanran | - |
| dc.contributor.author | Zhou, Yuyin | - |
| dc.contributor.author | Wang, Feifei | - |
| dc.contributor.author | Xiong, Hui | - |
| dc.contributor.author | Qu, Liangqiong | - |
| dc.date.accessioned | 2026-01-06T00:35:28Z | - |
| dc.date.available | 2026-01-06T00:35:28Z | - |
| dc.date.issued | 2025-12-22 | - |
| dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, p. 1-17 | - |
| dc.identifier.issn | 0162-8828 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/368404 | - |
| dc.description.abstract | Federated Learning (FL) has emerged as a promising paradigm for privacy-preserving collaborative model training without sharing raw data. However, recent studies have revealed that private information can still be leaked through shared gradients and reconstructed by Gradient Inversion Attacks (GIA). While many GIA methods have been proposed, a detailed analysis, evaluation, and summary of these methods is still lacking. Although various survey papers summarize existing privacy attacks in FL, few studies have conducted extensive experiments to unveil the effectiveness of GIA and their limiting factors in this context. To fill this gap, we first undertake a systematic review of GIA and categorize existing methods into three types: optimization-based GIA (OP-GIA), generation-based GIA (GEN-GIA), and analytics-based GIA (ANA-GIA). We then comprehensively analyze and evaluate all three types of GIA in FL, providing insights into the factors that influence their performance, practicality, and potential threats. Our findings indicate that OP-GIA is the most practical attack setting despite its unsatisfactory performance, while GEN-GIA has many dependencies and ANA-GIA is easily detectable, making both impractical. Finally, we offer users a three-stage defense pipeline for designing FL frameworks and protocols with better privacy protection, and we share future research directions, from the perspectives of both attackers and defenders, that we believe should be pursued. We hope that our study can help researchers design more robust FL frameworks to defend against these attacks. | - |
| dc.language | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | Data Privacy | - |
| dc.subject | Federated Learning | - |
| dc.subject | Gradient Inversion Attacks | - |
| dc.title | Exploring the Vulnerabilities of Federated Learning: A Deep Dive Into Gradient Inversion Attacks | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/TPAMI.2025.3646639 | - |
| dc.identifier.scopus | eid_2-s2.0-105025941048 | - |
| dc.identifier.spage | 1 | - |
| dc.identifier.epage | 17 | - |
| dc.identifier.eissn | 1939-3539 | - |
| dc.identifier.issnl | 0162-8828 | - |
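For the analytics-based family (ANA-GIA), a well-known closed-form observation from the broader GIA literature (e.g., Phong et al., 2017), not necessarily the specific methods surveyed in this paper, is that a fully connected layer with a bias leaks its input exactly: since dL/dW = (dL/dy) xᵀ and dL/db = dL/dy, dividing any row of the weight gradient by the matching nonzero bias gradient recovers x. A minimal sketch, with illustrative shapes and loss:

```python
import torch
import torch.nn as nn

layer = nn.Linear(8, 4)
x = torch.rand(8)                    # a private input
loss = layer(x).square().sum()       # any scalar loss works
gW, gb = torch.autograd.grad(loss, [layer.weight, layer.bias])

i = gb.abs().argmax()                # pick a row with nonzero bias gradient
x_rec = gW[i] / gb[i]                # closed-form input recovery
print(torch.allclose(x_rec, x, atol=1e-5))  # True
```

The recovery needs no optimization at all, which is why, per the abstract, ANA-GIA variants tend to rely on detectable manipulation of the shared model or aggregation, making them easy to flag in practice.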
