Links for fulltext (May Require Subscription)
- Publisher Website: 10.1109/TCAD.2020.3046665
- Scopus: eid_2-s2.0-85098752168
- WOS: WOS:000709074300005
Article: Efficient Federated Learning for Cloud-Based AIoT Applications
Title | Efficient Federated Learning for Cloud-Based AIoT Applications |
---|---|
Authors | Zhang, Xinqian; Hu, Ming; Xia, Jun; Wei, Tongquan; Chen, Mingsong; Hu, Shiyan |
Keywords | Artificial intelligence Internet of Things (AIoT); branchyNet; cloud computing; deep neural network (DNN); federated learning (FL); inference accuracy |
Issue Date | 2021 |
Citation | IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2021, v. 40, n. 11, p. 2211-2223 |
Abstract | As a promising method for training a central model on decentralized device data without compromising user privacy, federated learning (FL) is becoming increasingly popular in Internet-of-Things (IoT) design. However, because the limited computing and memory resources of devices restrict the capabilities of the deep learning models they host, existing FL approaches for artificial intelligence IoT (AIoT) applications suffer from inaccurate prediction results. To address this problem, this article presents a collaborative Big.Little branch architecture that enables efficient FL for AIoT applications. Inspired by the architecture of BranchyNet, which has multiple prediction branches, our approach deploys deep neural network (DNN) models across both the cloud and AIoT devices. Our Big.Little branch model has two kinds of branches: the big branch is deployed on the cloud for strengthened prediction accuracy, while the little branches are tailored to fit on AIoT devices. When AIoT devices cannot make a prediction with high confidence using their local little branches, they resort to the big branch for further inference. To increase both the prediction accuracy and the early-exit rate of the Big.Little branch model, we propose a two-stage training and co-inference scheme that takes the local characteristics of AIoT scenarios into account. Comprehensive experimental results obtained from a real AIoT environment demonstrate the efficiency and effectiveness of our approach in terms of prediction accuracy and average inference time. |
Persistent Identifier | http://hdl.handle.net/10722/336260 |
ISSN | 0278-0070 (2023 Impact Factor: 2.7; 2023 SCImago Journal Rankings: 0.957) |
ISI Accession Number ID | WOS:000709074300005 |
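The abstract describes a confidence-based early-exit co-inference scheme: an on-device little branch makes a local prediction and only defers to the cloud-hosted big branch when that prediction is not confident enough. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' implementation; the entropy threshold, the split into shared_backbone and little_branch, and the send_to_cloud() helper are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

ENTROPY_THRESHOLD = 0.5  # assumed confidence cutoff; would be tuned per deployment


def entropy(probs: torch.Tensor) -> float:
    # Shannon entropy of a softmax distribution; lower values mean higher
    # confidence, the usual early-exit criterion in BranchyNet-style models.
    return float(-(probs * torch.log(probs + 1e-12)).sum())


@torch.no_grad()
def co_infer(x, shared_backbone, little_branch, send_to_cloud):
    """Device-side co-inference for a single input sample: try the local
    little branch first, and only forward the intermediate features to the
    cloud-hosted big branch when the local prediction is not confident."""
    features = shared_backbone(x)                        # layers shared by both branches
    probs = F.softmax(little_branch(features), dim=-1)   # lightweight on-device exit

    if entropy(probs) <= ENTROPY_THRESHOLD:
        # Confident enough: exit early on the device.
        return probs.argmax(dim=-1), "early_exit"

    # Otherwise defer to the big branch; send_to_cloud() stands in for the
    # upload/RPC step and is assumed to return the big branch's softmax output.
    cloud_probs = send_to_cloud(features)
    return cloud_probs.argmax(dim=-1), "cloud_exit"
```

In a setup like this, the threshold trades early-exit rate against accuracy; the two-stage training and co-inference scheme mentioned in the abstract is aimed at improving both at once.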
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Xinqian | - |
dc.contributor.author | Hu, Ming | - |
dc.contributor.author | Xia, Jun | - |
dc.contributor.author | Wei, Tongquan | - |
dc.contributor.author | Chen, Mingsong | - |
dc.contributor.author | Hu, Shiyan | - |
dc.date.accessioned | 2024-01-15T08:24:59Z | - |
dc.date.available | 2024-01-15T08:24:59Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2021, v. 40, n. 11, p. 2211-2223 | - |
dc.identifier.issn | 0278-0070 | - |
dc.identifier.uri | http://hdl.handle.net/10722/336260 | - |
dc.description.abstract | As a promising method for training a central model on decentralized device data without compromising user privacy, federated learning (FL) is becoming increasingly popular in Internet-of-Things (IoT) design. However, because the limited computing and memory resources of devices restrict the capabilities of the deep learning models they host, existing FL approaches for artificial intelligence IoT (AIoT) applications suffer from inaccurate prediction results. To address this problem, this article presents a collaborative Big.Little branch architecture that enables efficient FL for AIoT applications. Inspired by the architecture of BranchyNet, which has multiple prediction branches, our approach deploys deep neural network (DNN) models across both the cloud and AIoT devices. Our Big.Little branch model has two kinds of branches: the big branch is deployed on the cloud for strengthened prediction accuracy, while the little branches are tailored to fit on AIoT devices. When AIoT devices cannot make a prediction with high confidence using their local little branches, they resort to the big branch for further inference. To increase both the prediction accuracy and the early-exit rate of the Big.Little branch model, we propose a two-stage training and co-inference scheme that takes the local characteristics of AIoT scenarios into account. Comprehensive experimental results obtained from a real AIoT environment demonstrate the efficiency and effectiveness of our approach in terms of prediction accuracy and average inference time. | -
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | - |
dc.subject | Artificial intelligence Internet of Things (AIoT) | - |
dc.subject | branchyNet | - |
dc.subject | cloud computing | - |
dc.subject | deep neural network (DNN) | - |
dc.subject | federated learning (FL) | - |
dc.subject | inference accuracy | - |
dc.title | Efficient Federated Learning for Cloud-Based AIoT Applications | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TCAD.2020.3046665 | - |
dc.identifier.scopus | eid_2-s2.0-85098752168 | - |
dc.identifier.volume | 40 | - |
dc.identifier.issue | 11 | - |
dc.identifier.spage | 2211 | - |
dc.identifier.epage | 2223 | - |
dc.identifier.eissn | 1937-4151 | - |
dc.identifier.isi | WOS:000709074300005 | - |