Article: Mitigating Unfairness in Differentially-Private Federated Learning

Title: Mitigating Unfairness in Differentially-Private Federated Learning
Authors: Du, Bingqian; Xiang, Liyao; Wu, Chuan
Keywords: differential privacy; fairness; federated learning
Issue Date: 31-May-2025
Publisher: Association for Computing Machinery (ACM)
Citation: ACM Transactions on Modeling and Performance Evaluation of Computing Systems, 2025, v. 10, n. 2
Abstract: Federated learning is a new learning paradigm that utilizes crowdsourced data stored on dispersed user devices (a.k.a. clients) to learn a global model. Studies have shown that even though data are kept on local devices, an adversary can still infer client information during the training process or from the learned model. Differential privacy has recently been introduced into deep learning model training to protect clients' data privacy. Nonetheless, it exacerbates unfairness of the learned model across participating clients due to its uniform clipping and noise addition, even when the training loss function explicitly accounts for unfairness. To validate the impact of the differential privacy mechanism in federated learning, we carefully approximate the correlation between fairness performance across clients and the fundamental operations within the differential privacy mechanism, and quantify the influence of differential privacy mechanisms on model performance across various clients. Subsequently, leveraging our theoretical findings regarding the effect of the differential privacy mechanism, we formulate the unfairness mitigation problem and propose an algorithm based on the modified method of differential multipliers. Extensive evaluation shows that our method outperforms a state-of-the-art differentially private federated learning algorithm by about 30% for non-i.i.d. data distributions in terms of the variance of model performance across clients.
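The uniform clipping and noise addition the abstract attributes unfairness to can be illustrated with a minimal sketch of the standard Gaussian mechanism applied to federated averaging. This is not the paper's algorithm; the function name `dp_aggregate` and its parameters are hypothetical, chosen only to show how one clipping threshold is imposed on every client regardless of the magnitude of its update.

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_mult=0.5, rng=None):
    """Sketch of differentially private aggregation: clip each client's
    update to a uniform L2 norm, sum, then add Gaussian noise calibrated
    to the clipping norm (the per-client sensitivity)."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Uniform clipping: every client is scaled by the same threshold,
        # so a client with a large (informative) update is shrunk while a
        # small update passes through unchanged.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise whose scale depends only on clip_norm, not on any
    # individual client's data.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Two clients with updates of very different magnitude (norms 5.0 and 0.5).
updates = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
avg = dp_aggregate(updates, clip_norm=1.0, noise_mult=0.0)
# With noise disabled, the large update is rescaled to norm 1 while the
# small one is untouched, so clipping distorts clients unevenly.
```

With `noise_mult=0.0` the aggregation is deterministic, which makes the asymmetric effect of the shared clipping threshold easy to see; in an actual deployment the noise term is nonzero and adds a further, client-independent distortion on top.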
Persistent Identifier: http://hdl.handle.net/10722/361931
ISSN: 2376-3639
2023 Impact Factor: 0.7
2023 SCImago Journal Rankings: 0.525

 

Dublin Core Metadata (DC Field: Value)
dc.contributor.author: Du, Bingqian
dc.contributor.author: Xiang, Liyao
dc.contributor.author: Wu, Chuan
dc.date.accessioned: 2025-09-17T00:32:08Z
dc.date.available: 2025-09-17T00:32:08Z
dc.date.issued: 2025-05-31
dc.identifier.citation: ACM Transactions on Modeling and Performance Evaluation of Computing Systems, 2025, v. 10, n. 2
dc.identifier.issn: 2376-3639
dc.identifier.uri: http://hdl.handle.net/10722/361931
dc.language: eng
dc.publisher: Association for Computing Machinery (ACM)
dc.relation.ispartof: ACM Transactions on Modeling and Performance Evaluation of Computing Systems
dc.subject: differential privacy
dc.subject: fairness
dc.subject: Federated learning
dc.title: Mitigating Unfairness in Differentially-Private Federated Learning
dc.type: Article
dc.identifier.doi: 10.1145/3725847
dc.identifier.scopus: eid_2-s2.0-105007099480
dc.identifier.volume: 10
dc.identifier.issue: 2
dc.identifier.eissn: 2376-3647
dc.identifier.issnl: 2376-3639
