Article: Mitigating Unfairness in Differentially-Private Federated Learning
| Title | Mitigating Unfairness in Differentially-Private Federated Learning |
|---|---|
| Authors | Du, Bingqian; Xiang, Liyao; Wu, Chuan |
| Keywords | differential privacy; fairness; Federated learning |
| Issue Date | 31-May-2025 |
| Publisher | Association for Computing Machinery (ACM) |
| Citation | ACM Transactions on Modeling and Performance Evaluation of Computing Systems, 2025, v. 10, n. 2 |
| Abstract | Federated learning is a new learning paradigm that utilizes crowdsourced data stored on dispersed user devices (aka clients) to learn a global model. Studies have shown that, even though data are kept on local devices, an adversary is still able to infer client information during the training process or from the learned model. Differential privacy has recently been introduced into deep learning model training to protect the data privacy of clients. Nonetheless, it exacerbates unfairness of the learned model among participating clients due to its uniform clipping and noise addition, even when the training loss function explicitly considers unfairness. To validate the impact of the differential privacy mechanism in federated learning, we carefully approximate the correlation between fairness performance across clients and the fundamental operations within the differential privacy mechanism, and quantify the influence of differential privacy mechanisms on model performance across various clients. Subsequently, leveraging our theoretical findings regarding the effect of the differential privacy mechanism, we formulate the unfairness mitigation problem and propose an algorithm based on the modified method of differential multipliers. Extensive evaluation shows that our method outperforms the state-of-the-art differentially private federated learning algorithm by about 30% for non-i.i.d. data distributions, in terms of the variance of model performance across clients. |
| Persistent Identifier | http://hdl.handle.net/10722/361931 |
| ISSN | 2376-3639 (2023 Impact Factor: 0.7; 2023 SCImago Journal Rankings: 0.525) |
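
The abstract above attributes the fairness degradation to the uniform clipping and noise addition performed by the differential privacy mechanism. The record does not include the paper's implementation, so the following is only a minimal sketch of such a server-side clip-and-noise aggregation step; the function name, clip bound, and noise multiplier are illustrative assumptions.

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Illustrative DP aggregation (not the paper's code): each client update is
    clipped to a common L2 norm bound, the clipped updates are summed, and
    Gaussian noise scaled to that bound is added before averaging."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in client_updates:
        norm = np.linalg.norm(g)
        # Uniform clipping: every client is shrunk to the same norm bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise scale is tied to the shared clip bound, not to individual clients.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Example: two clients with very different update magnitudes.
updates = [np.array([0.2, -0.1]), np.array([5.0, 3.0])]
print(dp_aggregate(updates, clip_norm=1.0, noise_multiplier=0.5))
```

Because every client's update is shrunk to the same norm bound and perturbed at the same noise scale, clients whose updates are naturally larger lose proportionally more signal, which is the kind of per-client disparity the paper quantifies.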
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Du, Bingqian | - |
| dc.contributor.author | Xiang, Liyao | - |
| dc.contributor.author | Wu, Chuan | - |
| dc.date.accessioned | 2025-09-17T00:32:08Z | - |
| dc.date.available | 2025-09-17T00:32:08Z | - |
| dc.date.issued | 2025-05-31 | - |
| dc.identifier.citation | ACM Transactions on Modeling and Performance Evaluation of Computing Systems, 2025, v. 10, n. 2 | - |
| dc.identifier.issn | 2376-3639 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/361931 | - |
| dc.description.abstract | Federated learning is a new learning paradigm that utilizes crowdsourced data stored on dispersed user devices (aka clients) to learn a global model. Studies have shown that, even though data are kept on local devices, an adversary is still able to infer client information during the training process or from the learned model. Differential privacy has recently been introduced into deep learning model training to protect the data privacy of clients. Nonetheless, it exacerbates unfairness of the learned model among participating clients due to its uniform clipping and noise addition, even when the training loss function explicitly considers unfairness. To validate the impact of the differential privacy mechanism in federated learning, we carefully approximate the correlation between fairness performance across clients and the fundamental operations within the differential privacy mechanism, and quantify the influence of differential privacy mechanisms on model performance across various clients. Subsequently, leveraging our theoretical findings regarding the effect of the differential privacy mechanism, we formulate the unfairness mitigation problem and propose an algorithm based on the modified method of differential multipliers. Extensive evaluation shows that our method outperforms the state-of-the-art differentially private federated learning algorithm by about 30% for non-i.i.d. data distributions, in terms of the variance of model performance across clients. | - |
| dc.language | eng | - |
| dc.publisher | Association for Computing Machinery (ACM) | - |
| dc.relation.ispartof | ACM Transactions on Modeling and Performance Evaluation of Computing Systems | - |
| dc.subject | differential privacy | - |
| dc.subject | fairness | - |
| dc.subject | Federated learning | - |
| dc.title | Mitigating Unfairness in Differentially-Private Federated Learning | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1145/3725847 | - |
| dc.identifier.scopus | eid_2-s2.0-105007099480 | - |
| dc.identifier.volume | 10 | - |
| dc.identifier.issue | 2 | - |
| dc.identifier.eissn | 2376-3647 | - |
| dc.identifier.issnl | 2376-3639 | - |
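
The mitigation algorithm described in the abstract is based on the modified method of differential multipliers. The record does not give the paper's formulation, so the following is only a toy sketch of the underlying constrained-optimization scheme (gradient descent on the variables, gradient ascent on the multiplier, with a damping term on the constraint); the function names, learning rates, damping value, and toy problem are assumptions, not the paper's algorithm.

```python
import numpy as np

def mmdm_minimize(f_grad, g, g_grad, x0, lr=0.05, lr_lmbda=0.05,
                  damping=1.0, steps=2000):
    """Sketch of a modified method of differential multipliers:
    minimize f(x) subject to g(x) = 0 by descending the augmented objective
    f(x) + lmbda * g(x) + damping/2 * g(x)**2 in x while ascending in lmbda."""
    x, lmbda = np.asarray(x0, dtype=float), 0.0
    for _ in range(steps):
        c = g(x)
        # Descent step on x: ordinary gradient plus multiplier and damping terms.
        x = x - lr * (f_grad(x) + (lmbda + damping * c) * g_grad(x))
        # Ascent step on the multiplier, driven by the constraint violation.
        lmbda = lmbda + lr_lmbda * c
    return x, lmbda

# Toy check: minimize x^2 + y^2 subject to x + y - 1 = 0 (optimum at (0.5, 0.5)).
x, lmbda = mmdm_minimize(
    f_grad=lambda v: 2 * v,
    g=lambda v: v[0] + v[1] - 1.0,
    g_grad=lambda v: np.array([1.0, 1.0]),
    x0=[0.0, 0.0],
)
print(x, lmbda)
```

In the unfairness-mitigation setting, the objective would play the role of the federated training loss and the constraint would bound a fairness measure such as the variance of per-client performance, with the multiplier update run alongside the differentially private training rounds.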
