Conference Paper: Communication-Efficient Federated Learning with Heterogeneous Devices

Title: Communication-Efficient Federated Learning with Heterogeneous Devices
Authors: Chen, Zhixiong; Yi, Wenqiang; Liu, Yuanwei; Nallanathan, Arumugam
Keywords: Device scheduling; Federated learning; Knowledge aggregation
Issue Date: 2023
Citation: IEEE International Conference on Communications, 2023, v. 2023-May, p. 3602-3607
Abstract: Conventional model-aggregation-based federated learning (FL) approaches require all local models to share the same architecture, and therefore fail to support practical scenarios with heterogeneous local models. Moreover, frequent model exchange is costly for resource-limited wireless networks, since modern deep neural networks often have millions of parameters. To tackle these challenges, we first propose a novel knowledge-aided FL (KFL) framework that aggregates lightweight high-level data features, termed knowledge, in each learning round. KFL allows devices to design their machine learning models independently and reduces communication overhead during training. We then show experimentally that different temporal device scheduling patterns lead to considerably different learning performance. With this insight, we formulate a stochastic optimization problem for joint device scheduling and bandwidth allocation under devices' limited energy budgets, and develop an efficient online algorithm that achieves an energy-learning trade-off in the learning process. Experimental results on the CIFAR-10 dataset show that the proposed KFL reduces communication overhead by over 87% while achieving better learning performance than the baselines. In addition, the proposed device scheduling algorithm converges faster than benchmark scheduling schemes.
Persistent Identifier: http://hdl.handle.net/10722/349896
ISSN: 1550-3607
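
The core idea in the abstract — devices exchanging compact high-level feature summaries ("knowledge") instead of full model weights — can be sketched as follows. This is an illustrative approximation, not the paper's exact KFL algorithm: the function names, the choice of class-wise mean embeddings as the "knowledge", and the count-weighted server averaging are all assumptions made for the sketch.

```python
import numpy as np

def local_knowledge(features, labels, num_classes):
    """One device's 'knowledge': per-class mean feature vectors.
    features: (n, d) array of embeddings (e.g. penultimate-layer outputs);
    labels: (n,) integer class labels. Returns (num_classes, d) means
    and (num_classes,) sample counts."""
    dim = features.shape[1]
    means = np.zeros((num_classes, dim))
    counts = np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        counts[c] = mask.sum()
        if counts[c] > 0:
            means[c] = features[mask].mean(axis=0)
    return means, counts

def aggregate_knowledge(all_means, all_counts):
    """Server side: count-weighted average of per-class knowledge
    across devices. all_means: (K, C, d); all_counts: (K, C)."""
    total = all_counts.sum(axis=0)                      # (C,) samples per class
    weighted = np.einsum('kc,kcd->cd', all_counts, all_means)
    return weighted / np.maximum(total, 1)[:, None]     # avoid divide-by-zero
```

Because each device uploads only C x d floats per round (C classes, d-dimensional embeddings) rather than millions of model parameters, the devices' local architectures never need to match — which is where the heterogeneity support and the communication savings reported in the abstract come from.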

 

DC Field: Value
dc.contributor.author: Chen, Zhixiong
dc.contributor.author: Yi, Wenqiang
dc.contributor.author: Liu, Yuanwei
dc.contributor.author: Nallanathan, Arumugam
dc.date.accessioned: 2024-10-17T07:01:42Z
dc.date.available: 2024-10-17T07:01:42Z
dc.date.issued: 2023
dc.identifier.citation: IEEE International Conference on Communications, 2023, v. 2023-May, p. 3602-3607
dc.identifier.issn: 1550-3607
dc.identifier.uri: http://hdl.handle.net/10722/349896
dc.language: eng
dc.relation.ispartof: IEEE International Conference on Communications
dc.subject: Device scheduling
dc.subject: federated Learning
dc.subject: knowledge aggregation
dc.title: Communication-Efficient Federated Learning with Heterogeneous Devices
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICC45041.2023.10279442
dc.identifier.scopus: eid_2-s2.0-85152950801
dc.identifier.volume: 2023-May
dc.identifier.spage: 3602
dc.identifier.epage: 3607
