Links for fulltext (may require subscription):
- Publisher Website: 10.1109/TNSM.2024.3497962
- Scopus: eid_2-s2.0-85209891128
- WOS: WOS:001473161100044
Article: Dynamic and Fast Convergence for Federated Learning via Optimized Hyperparameters
| Title | Dynamic and Fast Convergence for Federated Learning via Optimized Hyperparameters |
|---|---|
| Authors | Yu, Xinlei; Lin, Yijing; Gao, Zhipeng; Du, Hongyang; Niyato, Dusit |
| Keywords | Deep Reinforcement Learning; Federated learning; Quantization; Sparsification |
| Issue Date | 2024 |
| Citation | IEEE Transactions on Network and Service Management, 2024 |
| Abstract | Federated Learning (FL) is a privacy-preserving computing paradigm that enables participants to collaboratively train a global model without exchanging their raw personal data. Due to frequent communication and the data heterogeneity of devices with unique local data distributions, FL suffers from slow convergence. To achieve fast convergence, existing methods adjust hyperparameters in FL to reduce the volume of model updates, the number of participating devices, and the number of local iterations. However, most focus on only a subset of these hyperparameters and rely primarily on analytical optimization; a more integrated and dynamic coordination of all hyperparameters is needed. To address this issue, we first propose an efficient FL framework enabled by rand-m sparsification and stochastic quantization. For this framework, we conduct a rigorous theoretical analysis of the trade-offs among quantization level, sparsification level, device participation, and local iterations. To further improve convergence speed, we design a Deep Reinforcement Learning (DRL)-based strategy that dynamically coordinates these hyperparameters. Experimental results show that our method improves convergence speed by at least 8% compared to existing approaches. |
| Persistent Identifier | http://hdl.handle.net/10722/353233 |
| ISI Accession Number ID | WOS:001473161100044 |
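The abstract names two standard update-compression primitives: rand-m sparsification (keep m randomly chosen coordinates and rescale so the estimate stays unbiased) and stochastic quantization (QSGD-style randomized rounding to a fixed number of levels). Below is a minimal NumPy sketch of these generic techniques as commonly defined in the FL compression literature, not the paper's actual implementation; the function names and the unbiased-scaling choices are assumptions for illustration.

```python
import numpy as np

def rand_m_sparsify(grad, m, rng):
    """Rand-m sparsification: keep m uniformly chosen coordinates, zero the rest.

    Scaling by d/m makes the compressed vector an unbiased estimate of grad.
    """
    d = grad.size
    idx = rng.choice(d, size=m, replace=False)
    out = np.zeros_like(grad)
    out[idx] = grad[idx] * (d / m)  # E[out] = grad
    return out

def stochastic_quantize(grad, s, rng):
    """QSGD-style stochastic quantization with s levels per coordinate.

    Each coordinate is randomly rounded to an adjacent level of |grad|/norm * s,
    with probabilities chosen so the result is unbiased.
    """
    norm = np.linalg.norm(grad)
    if norm == 0:
        return grad.copy()
    level = np.abs(grad) / norm * s      # position in [0, s]
    lower = np.floor(level)
    prob = level - lower                 # round up with this probability
    quantized = lower + (rng.random(grad.shape) < prob)
    return np.sign(grad) * quantized * norm / s
```

A client could apply both operators to its local update before upload; the quantization level s and sparsity budget m are exactly the kind of hyperparameters the paper's DRL strategy is said to coordinate, alongside device participation and local iteration counts.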
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Yu, Xinlei | - |
| dc.contributor.author | Lin, Yijing | - |
| dc.contributor.author | Gao, Zhipeng | - |
| dc.contributor.author | Du, Hongyang | - |
| dc.contributor.author | Niyato, Dusit | - |
| dc.date.accessioned | 2025-01-13T03:02:46Z | - |
| dc.date.available | 2025-01-13T03:02:46Z | - |
| dc.date.issued | 2024 | - |
| dc.identifier.citation | IEEE Transactions on Network and Service Management, 2024 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/353233 | - |
| dc.description.abstract | Federated Learning (FL) is a privacy-preserving computing paradigm that enables participants to collaboratively train a global model without exchanging their raw personal data. Due to frequent communication and the data heterogeneity of devices with unique local data distributions, FL suffers from slow convergence. To achieve fast convergence, existing methods adjust hyperparameters in FL to reduce the volume of model updates, the number of participating devices, and the number of local iterations. However, most focus on only a subset of these hyperparameters and rely primarily on analytical optimization; a more integrated and dynamic coordination of all hyperparameters is needed. To address this issue, we first propose an efficient FL framework enabled by rand-m sparsification and stochastic quantization. For this framework, we conduct a rigorous theoretical analysis of the trade-offs among quantization level, sparsification level, device participation, and local iterations. To further improve convergence speed, we design a Deep Reinforcement Learning (DRL)-based strategy that dynamically coordinates these hyperparameters. Experimental results show that our method improves convergence speed by at least 8% compared to existing approaches. | - |
| dc.language | eng | - |
| dc.relation.ispartof | IEEE Transactions on Network and Service Management | - |
| dc.subject | Deep Reinforcement Learning | - |
| dc.subject | Federated learning | - |
| dc.subject | Quantization | - |
| dc.subject | Sparsification | - |
| dc.title | Dynamic and Fast Convergence for Federated Learning via Optimized Hyperparameters | - |
| dc.type | Article | - |
| dc.description.nature | published_or_final_version | - |
| dc.identifier.doi | 10.1109/TNSM.2024.3497962 | - |
| dc.identifier.scopus | eid_2-s2.0-85209891128 | - |
| dc.identifier.eissn | 1932-4537 | - |
| dc.identifier.isi | WOS:001473161100044 | - |
