
Article: Dynamic and Fast Convergence for Federated Learning via Optimized Hyperparameters

Title: Dynamic and Fast Convergence for Federated Learning via Optimized Hyperparameters
Authors: Yu, Xinlei; Lin, Yijing; Gao, Zhipeng; Du, Hongyang; Niyato, Dusit
Keywords: Deep Reinforcement Learning; Federated learning; Quantization; Sparsification
Issue Date: 2024
Citation: IEEE Transactions on Network and Service Management, 2024
Abstract: Federated Learning (FL) is a privacy-preserving computing paradigm that enables participants to collaboratively train a global model without exchanging their raw personal data. Due to frequent communication and the data heterogeneity of devices with unique local data distributions, FL faces a significant issue of slow convergence. To achieve fast convergence, existing methods adjust hyperparameters in FL to reduce the volume of model updates, the number of participating devices, and local iterations. However, most focus on only a subset of these hyperparameters and primarily rely on analytical optimization. A more integrated and dynamic coordination of all hyperparameters is needed. To address this issue, we first propose an efficient FL framework enabled by rand-m sparsification and stochastic quantization methods. For this framework, we conduct a rigorous theoretical analysis to explore the trade-offs among quantization level, sparsification level, device participation, and local iteration. To improve convergence speed, we also design a Deep Reinforcement Learning (DRL)-based strategy to dynamically coordinate these hyperparameters. Experimental results show that our method can improve convergence speed by at least 8% compared to existing approaches.
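The two compression operators named in the abstract admit standard unbiased constructions: rand-m sparsification keeps m randomly chosen coordinates (rescaled by d/m), and stochastic quantization rounds each normalized magnitude to an adjacent grid point with probabilities chosen so the estimate is unbiased. The sketch below is illustrative only; the function names and this NumPy implementation are not taken from the paper:

```python
import numpy as np

def rand_m_sparsify(x, m, rng):
    """Rand-m sparsification: keep m uniformly chosen coordinates of x,
    scaled by d/m so that E[output] = x (unbiased)."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=m, replace=False)
    out[idx] = x[idx] * (d / m)
    return out

def stochastic_quantize(x, s, rng):
    """Stochastic quantization with s levels: |x_i| / ||x|| is mapped onto
    an s-level grid and rounded up with probability equal to its fractional
    part, which makes the quantizer unbiased."""
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return x.copy()
    level = np.abs(x) / norm * s        # position on the s-level grid
    lower = np.floor(level)
    prob = level - lower                # probability of rounding up
    rounded = lower + (rng.random(x.shape) < prob)
    return np.sign(x) * rounded * norm / s
```

In an FL round, a device would apply both operators to its model update before upload, e.g. `stochastic_quantize(rand_m_sparsify(update, m, rng), s, rng)`; the paper's contribution is dynamically choosing m, s, device participation, and local iterations, which this sketch does not cover.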
Persistent Identifier: http://hdl.handle.net/10722/353233
ISI Accession Number: WOS:001473161100044

 

DC Field: Value
dc.contributor.author: Yu, Xinlei
dc.contributor.author: Lin, Yijing
dc.contributor.author: Gao, Zhipeng
dc.contributor.author: Du, Hongyang
dc.contributor.author: Niyato, Dusit
dc.date.accessioned: 2025-01-13T03:02:46Z
dc.date.available: 2025-01-13T03:02:46Z
dc.date.issued: 2024
dc.identifier.citation: IEEE Transactions on Network and Service Management, 2024
dc.identifier.uri: http://hdl.handle.net/10722/353233
dc.description.abstract: Federated Learning (FL) is a privacy-preserving computing paradigm that enables participants to collaboratively train a global model without exchanging their raw personal data. Due to frequent communication and the data heterogeneity of devices with unique local data distributions, FL faces a significant issue of slow convergence. To achieve fast convergence, existing methods adjust hyperparameters in FL to reduce the volume of model updates, the number of participating devices, and local iterations. However, most focus on only a subset of these hyperparameters and primarily rely on analytical optimization. A more integrated and dynamic coordination of all hyperparameters is needed. To address this issue, we first propose an efficient FL framework enabled by rand-m sparsification and stochastic quantization methods. For this framework, we conduct a rigorous theoretical analysis to explore the trade-offs among quantization level, sparsification level, device participation, and local iteration. To improve convergence speed, we also design a Deep Reinforcement Learning (DRL)-based strategy to dynamically coordinate these hyperparameters. Experimental results show that our method can improve convergence speed by at least 8% compared to existing approaches.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Network and Service Management
dc.subject: Deep Reinforcement Learning
dc.subject: Federated learning
dc.subject: Quantization
dc.subject: Sparsification
dc.title: Dynamic and Fast Convergence for Federated Learning via Optimized Hyperparameters
dc.type: Article
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.1109/TNSM.2024.3497962
dc.identifier.scopus: eid_2-s2.0-85209891128
dc.identifier.eissn: 1932-4537
dc.identifier.isi: WOS:001473161100044
