Article: An Offline-Transfer-Online Framework for Cloud-Edge Collaborative Distributed Reinforcement Learning

Title: An Offline-Transfer-Online Framework for Cloud-Edge Collaborative Distributed Reinforcement Learning
Authors: Zeng, Tianyu; Zhang, Xiaoxi; Duan, Jingpu; Yu, Chao; Wu, Chuan; Chen, Xu
Keywords: cloud-edge collaborative networks; deep reinforcement learning; distributed training; offline-transfer-online
Issue Date: 1-May-2024
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Parallel and Distributed Systems, 2024, v. 35, n. 5, p. 720-731
Abstract

In this paper, we design a novel cloud-edge collaborative DRL training framework, named Offline-Transfer-Online, which speeds up the convergence of online DRL agents at the edge by letting them interact with offline agents in the cloud, with minimal data interchanged and without relying on high-quality offline datasets. Therein, we propose a novel algorithm-independent knowledge distillation algorithm for online RL agents that leverages pre-trained models and the interface between agents and the environment to transfer distilled knowledge among multiple heterogeneous agents efficiently. Extensive experiments show that our algorithm can accelerate the convergence of various online agents by a factor of two to ten, with comparable reward achieved in different environments.
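To make the distillation idea in the abstract concrete: the sketch below is a minimal, hypothetical Python/PyTorch illustration, not the authors' implementation. The names (Policy, distill_step) and the simple MLP are assumptions for illustration only. It shows the generic mechanism the abstract describes, matching an online edge "student" agent's action distribution to an offline cloud "teacher" agent's via a KL-divergence loss, so that only action distributions (rather than full models or datasets) cross the cloud-edge boundary, and the update is independent of the student's underlying RL algorithm.

# Minimal, hypothetical sketch of algorithm-independent policy distillation
# between a cloud "teacher" agent and an edge "student" agent.
# Names and architecture are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    """A tiny MLP policy mapping states to action logits."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def distill_step(teacher: Policy, student: Policy,
                 states: torch.Tensor, opt: torch.optim.Optimizer,
                 temperature: float = 2.0) -> float:
    """One distillation update: pull the student's action distribution toward
    the teacher's on states observed at the edge. Only per-state action
    distributions need to be exchanged, keeping cloud-edge traffic small."""
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(states) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(states) / temperature, dim=-1)
    # KL(teacher || student); F.kl_div expects log-probs as input, probs as target.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage: distill from a (pretend pre-trained) offline teacher into an online student.
teacher = Policy(state_dim=4, n_actions=2)
student = Policy(state_dim=4, n_actions=2)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
states = torch.randn(32, 4)  # stand-in for states collected by the edge agent
print(distill_step(teacher, student, states, opt))

Because the loss depends only on the student's action distribution at the agent-environment interface, a step like this can wrap around heterogeneous online learners (e.g., value-based or policy-gradient agents) without touching their internal update rules, which is the sense in which the distillation is algorithm-independent.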


Persistent Identifier: http://hdl.handle.net/10722/347242
ISSN: 1045-9219 (print); 1558-2183 (electronic)
2023 Impact Factor: 5.6
2023 SCImago Journal Rankings: 2.340

 

DC Field: Value
dc.contributor.author: Zeng, Tianyu
dc.contributor.author: Zhang, Xiaoxi
dc.contributor.author: Duan, Jingpu
dc.contributor.author: Yu, Chao
dc.contributor.author: Wu, Chuan
dc.contributor.author: Chen, Xu
dc.date.accessioned: 2024-09-20T00:30:53Z
dc.date.available: 2024-09-20T00:30:53Z
dc.date.issued: 2024-05-01
dc.identifier.citation: IEEE Transactions on Parallel and Distributed Systems, 2024, v. 35, n. 5, p. 720-731
dc.identifier.issn: 1045-9219
dc.identifier.uri: http://hdl.handle.net/10722/347242
dc.description.abstract: In this paper, we design a novel cloud-edge collaborative DRL training framework, named Offline-Transfer-Online, which speeds up the convergence of online DRL agents at the edge by letting them interact with offline agents in the cloud, with minimal data interchanged and without relying on high-quality offline datasets. Therein, we propose a novel algorithm-independent knowledge distillation algorithm for online RL agents that leverages pre-trained models and the interface between agents and the environment to transfer distilled knowledge among multiple heterogeneous agents efficiently. Extensive experiments show that our algorithm can accelerate the convergence of various online agents by a factor of two to ten, with comparable reward achieved in different environments.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Parallel and Distributed Systems
dc.subject: cloud-edge collaborative networks
dc.subject: deep reinforcement learning
dc.subject: Distributed training
dc.subject: offline-transfer-online
dc.title: An Offline-Transfer-Online Framework for Cloud-Edge Collaborative Distributed Reinforcement Learning
dc.type: Article
dc.identifier.doi: 10.1109/TPDS.2024.3360438
dc.identifier.scopus: eid_2-s2.0-85184333067
dc.identifier.volume: 35
dc.identifier.issue: 5
dc.identifier.spage: 720
dc.identifier.epage: 731
dc.identifier.eissn: 1558-2183
dc.identifier.issnl: 1045-9219
