Conference Paper: Model Cloaking against Gradient Leakage

Title: Model Cloaking against Gradient Leakage
Authors: Wei, Wenqi; Chow, Ka Ho; Ilhan, Fatih; Wu, Yanzhao; Liu, Ling
Keywords: Federated learning; gradient leakage; privacy analysis
Issue Date: 2023
Citation: Proceedings - IEEE International Conference on Data Mining, ICDM, 2023, p. 1403-1408
Abstract: Gradient leakage attacks are a dominant privacy threat in federated learning, despite the default privacy afforded by training data residing locally at the clients. Differential privacy has been the de facto standard for privacy protection and is deployed in federated learning to mitigate privacy risks. However, much of the existing literature points out that differential privacy fails to defend against gradient leakage. This paper presents ModelCloak, a principled approach based on differential privacy noise, aimed at safe sharing of client local model updates. The paper is organized into three major components. First, we introduce the gradient leakage robustness trade-off, in search of the best balance between accuracy and leakage prevention; the trade-off relation is developed from the behavior of gradient leakage attacks throughout the federated training process. Second, we demonstrate that, under a fixed differential privacy noise setting, a proper amount of noise offers the best accuracy within the privacy requirement. Third, we propose dynamic differential privacy noise and show that the privacy-utility trade-off can be further optimized with dynamic model perturbation, ensuring privacy protection, competitive accuracy, and leakage attack prevention simultaneously.
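To make the perturbation mechanism concrete, the sketch below illustrates the general idea of clipping a client update and adding Gaussian differential-privacy noise whose level varies across federated rounds. This is a minimal sketch, not the paper's actual ModelCloak implementation: the helper names (`dp_perturb`, `dynamic_sigma`), the linear decay schedule, and all parameter values are illustrative assumptions.

```python
import numpy as np

def dp_perturb(update, clip_norm=1.0, sigma=0.01, rng=None):
    """Clip a flattened client model update to L2 norm clip_norm,
    then add Gaussian noise with std sigma * clip_norm (DP-style
    perturbation of the shared update). Hypothetical helper."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, sigma * clip_norm, size=update.shape)
    return clipped + noise

def dynamic_sigma(round_idx, total_rounds, sigma_start=0.05, sigma_end=0.005):
    """Illustrative round-dependent noise schedule (assumed linear decay):
    the noise level changes with the training round rather than staying
    fixed, in the spirit of dynamic model perturbation."""
    frac = round_idx / max(1, total_rounds - 1)
    return sigma_start + frac * (sigma_end - sigma_start)

# Example: perturb a client's update at round 10 of 100 before sharing.
update = np.random.default_rng(0).normal(size=1000)
sigma_t = dynamic_sigma(10, 100)
shared_update = dp_perturb(update, clip_norm=1.0, sigma=sigma_t)
```

The point of the round-dependent schedule is that a single fixed noise level trades off accuracy against leakage prevention globally, whereas varying the noise across rounds lets the perturbation track how attack effectiveness evolves during training; the actual schedule and calibration used by ModelCloak may differ from this linear decay.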
Persistent Identifier: http://hdl.handle.net/10722/343442
ISSN: 1550-4786
2020 SCImago Journal Rankings: 0.545

DC Field: Value

dc.contributor.author: Wei, Wenqi
dc.contributor.author: Chow, Ka Ho
dc.contributor.author: Ilhan, Fatih
dc.contributor.author: Wu, Yanzhao
dc.contributor.author: Liu, Ling
dc.date.accessioned: 2024-05-10T09:08:10Z
dc.date.available: 2024-05-10T09:08:10Z
dc.date.issued: 2023
dc.identifier.citation: Proceedings - IEEE International Conference on Data Mining, ICDM, 2023, p. 1403-1408
dc.identifier.issn: 1550-4786
dc.identifier.uri: http://hdl.handle.net/10722/343442
dc.description.abstract: Gradient leakage attacks are a dominant privacy threat in federated learning, despite the default privacy afforded by training data residing locally at the clients. Differential privacy has been the de facto standard for privacy protection and is deployed in federated learning to mitigate privacy risks. However, much of the existing literature points out that differential privacy fails to defend against gradient leakage. This paper presents ModelCloak, a principled approach based on differential privacy noise, aimed at safe sharing of client local model updates. The paper is organized into three major components. First, we introduce the gradient leakage robustness trade-off, in search of the best balance between accuracy and leakage prevention; the trade-off relation is developed from the behavior of gradient leakage attacks throughout the federated training process. Second, we demonstrate that, under a fixed differential privacy noise setting, a proper amount of noise offers the best accuracy within the privacy requirement. Third, we propose dynamic differential privacy noise and show that the privacy-utility trade-off can be further optimized with dynamic model perturbation, ensuring privacy protection, competitive accuracy, and leakage attack prevention simultaneously.
dc.language: eng
dc.relation.ispartof: Proceedings - IEEE International Conference on Data Mining, ICDM
dc.subject: Federated learning
dc.subject: gradient leakage
dc.subject: privacy analysis
dc.title: Model Cloaking against Gradient Leakage
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICDM58522.2023.00182
dc.identifier.scopus: eid_2-s2.0-85177752010
dc.identifier.spage: 1403
dc.identifier.epage: 1408
