Article: Securing Distributed SGD Against Gradient Leakage Threats

Title: Securing Distributed SGD Against Gradient Leakage Threats
Authors: Wei, Wenqi; Liu, Ling; Zhou, Jingya; Chow, Ka Ho; Wu, Yanzhao
Keywords: distributed system; Federated learning; gradient leakage attack; privacy analysis
Issue Date: 2023
Citation: IEEE Transactions on Parallel and Distributed Systems, 2023, v. 34, n. 7, p. 2040-2054
Abstract: This paper presents a holistic approach to gradient leakage resilient distributed Stochastic Gradient Descent (SGD). First, we analyze two types of strategies for privacy-enhanced federated learning: (i) gradient pruning with random selection or low-rank filtering and (ii) gradient perturbation with additive random noise or differential privacy noise. We analyze the inherent limitations of these approaches and their impact on privacy guarantee, model accuracy, and attack resilience. Next, we present a gradient leakage resilient approach to securing distributed SGD in federated learning, using differential privacy controlled noise as the tool. Unlike conventional methods, which inject noise per client with a fixed noise parameter, our approach tracks the trend of per-example gradient updates and adapts the injected noise accordingly throughout federated model training. Finally, we provide an empirical analysis of the privacy guarantee, model utility, and attack resilience of the proposed approach. Extensive evaluation on five benchmark datasets demonstrates that our approach outperforms state-of-the-art methods, offering competitive accuracy, a strong differential privacy guarantee, and high resilience against gradient leakage attacks.
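The two baseline defense families the abstract surveys, gradient pruning (random selection or low-rank filtering) and gradient perturbation (additive random or differential-privacy noise), can be sketched in a few lines. The NumPy snippet below is illustrative only: the function names and the keep_ratio, clip_norm, and noise_multiplier parameters are assumptions, and it shows the generic fixed-parameter baselines rather than the paper's adaptive, trend-aligned noise injection method.

```python
import numpy as np

def prune_gradient(grad, keep_ratio=0.1, rng=None):
    # Gradient pruning by random selection: keep a random keep_ratio
    # fraction of coordinates and zero the rest before sharing the update.
    rng = rng or np.random.default_rng()
    mask = rng.random(grad.shape) < keep_ratio
    return grad * mask

def dp_perturb_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # Gradient perturbation with Gaussian differential-privacy noise:
    # clip the per-example gradient to bound its L2 sensitivity, then add
    # noise calibrated to that bound (the fixed-noise-parameter baseline
    # the paper contrasts with its adaptive scheme).
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# Toy per-example gradient standing in for a real model update.
g = np.random.default_rng(0).normal(size=(8, 8))
print(prune_gradient(g).nonzero()[0].size, "coordinates survive pruning")
print(np.linalg.norm(dp_perturb_gradient(g)), "norm after clip-and-noise")
```

In federated learning it is the pruned or perturbed update, not the raw gradient, that each client shares, which is what blunts reconstruction by gradient leakage attacks.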
Persistent Identifier: http://hdl.handle.net/10722/343425
ISSN: 1045-9219
2023 Impact Factor: 5.6
2023 SCImago Journal Rankings: 2.340


DC Field | Value | Language
dc.contributor.author | Wei, Wenqi | -
dc.contributor.author | Liu, Ling | -
dc.contributor.author | Zhou, Jingya | -
dc.contributor.author | Chow, Ka Ho | -
dc.contributor.author | Wu, Yanzhao | -
dc.date.accessioned | 2024-05-10T09:08:02Z | -
dc.date.available | 2024-05-10T09:08:02Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | IEEE Transactions on Parallel and Distributed Systems, 2023, v. 34, n. 7, p. 2040-2054 | -
dc.identifier.issn | 1045-9219 | -
dc.identifier.uri | http://hdl.handle.net/10722/343425 | -
dc.description.abstract | This paper presents a holistic approach to gradient leakage resilient distributed Stochastic Gradient Descent (SGD). First, we analyze two types of strategies for privacy-enhanced federated learning: (i) gradient pruning with random selection or low-rank filtering and (ii) gradient perturbation with additive random noise or differential privacy noise. We analyze the inherent limitations of these approaches and their impact on privacy guarantee, model accuracy, and attack resilience. Next, we present a gradient leakage resilient approach to securing distributed SGD in federated learning, using differential privacy controlled noise as the tool. Unlike conventional methods, which inject noise per client with a fixed noise parameter, our approach tracks the trend of per-example gradient updates and adapts the injected noise accordingly throughout federated model training. Finally, we provide an empirical analysis of the privacy guarantee, model utility, and attack resilience of the proposed approach. Extensive evaluation on five benchmark datasets demonstrates that our approach outperforms state-of-the-art methods, offering competitive accuracy, a strong differential privacy guarantee, and high resilience against gradient leakage attacks. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Transactions on Parallel and Distributed Systems | -
dc.subject | distributed system | -
dc.subject | Federated learning | -
dc.subject | gradient leakage attack | -
dc.subject | privacy analysis | -
dc.title | Securing Distributed SGD Against Gradient Leakage Threats | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TPDS.2023.3273490 | -
dc.identifier.scopus | eid_2-s2.0-85159809711 | -
dc.identifier.volume | 34 | -
dc.identifier.issue | 7 | -
dc.identifier.spage | 2040 | -
dc.identifier.epage | 2054 | -
dc.identifier.eissn | 1558-2183 | -

Export: via the OAI-PMH interface in XML formats, or in other non-XML formats.
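The XML export mentioned above goes through the repository's OAI-PMH interface. Below is a minimal harvesting sketch, assuming a standard DSpace-style endpoint and an identifier derived from the handle 10722/343425; both the base URL and the oai: identifier are guesses, not values confirmed by this page.

```python
import urllib.request

# Assumed DSpace OAI-PMH endpoint and identifier scheme (hypothetical;
# check the repository's documentation for the real values).
BASE = "https://hub.hku.hk/oai/request"
IDENT = "oai:hub.hku.hk:10722/343425"

url = f"{BASE}?verb=GetRecord&metadataPrefix=oai_dc&identifier={IDENT}"
with urllib.request.urlopen(url) as resp:
    xml = resp.read().decode("utf-8")
print(xml[:500])  # first 500 characters of the Dublin Core record
```

GetRecord, metadataPrefix=oai_dc, and identifier are standard OAI-PMH protocol arguments; only the endpoint and identifier values here are assumptions.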