Article: An Offline-Transfer-Online Framework for Cloud-Edge Collaborative Distributed Reinforcement Learning
Field | Value
---|---
Title | An Offline-Transfer-Online Framework for Cloud-Edge Collaborative Distributed Reinforcement Learning
Authors | Zeng, Tianyu; Zhang, Xiaoxi; Duan, Jingpu; Yu, Chao; Wu, Chuan; Chen, Xu
Keywords | cloud-edge collaborative networks; deep reinforcement learning; distributed training; offline-transfer-online
Issue Date | 1-May-2024
Publisher | Institute of Electrical and Electronics Engineers
Citation | IEEE Transactions on Parallel and Distributed Systems, 2024, v. 35, n. 5, p. 720-731
Abstract | In this paper, we design a novel cloud-edge collaborative DRL training framework, named Offline-Transfer-Online, which speeds up the convergence of online DRL agents at the edge by having them interact with offline agents in the cloud, with minimal data interchanged and without relying on high-quality offline datasets. Therein, we propose a novel algorithm-independent knowledge distillation algorithm for online RL agents that leverages pre-trained models and the interface between agents and the environment to transfer distilled knowledge efficiently among multiple heterogeneous agents. Extensive experiments show that our algorithm accelerates the convergence of various online agents by two to ten times, with comparable reward achieved in different environments.
Persistent Identifier | http://hdl.handle.net/10722/347242
ISSN | 1045-9219 (2023 Impact Factor: 5.6; 2023 SCImago Journal Rank: 2.340)
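The offline-to-online knowledge transfer the abstract describes can be illustrated with a toy policy-distillation step. This is a hypothetical sketch, not the paper's algorithm: the online (student) agent's action distribution is nudged toward an offline (teacher) policy on a shared batch of states, using the cross-entropy gradient in logit space. All names and shapes here are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_gradient(student_logits, teacher_probs):
    """Gradient of cross-entropy H(teacher, student) w.r.t. student logits."""
    return softmax(student_logits) - teacher_probs

# One round of distillation on a small batch of shared states
# (4 states, 3 discrete actions; all values are synthetic).
rng = np.random.default_rng(0)
student_logits = rng.normal(size=(4, 3))           # online student's policy logits
teacher_probs = softmax(rng.normal(size=(4, 3)))   # offline teacher's action probabilities

lr = 0.5
for _ in range(500):
    student_logits -= lr * distill_gradient(student_logits, teacher_probs)

# After distillation, the student's policy is close to the teacher's on these states.
gap = np.abs(softmax(student_logits) - teacher_probs).max()
```

In a cloud-edge setting of the kind the abstract sketches, only the teacher's action distributions on shared states would need to cross the network, not model weights or raw trajectories, which is one way the "minimal data interchanged" property could be realized.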
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zeng, Tianyu | - |
dc.contributor.author | Zhang, Xiaoxi | - |
dc.contributor.author | Duan, Jingpu | - |
dc.contributor.author | Yu, Chao | - |
dc.contributor.author | Wu, Chuan | - |
dc.contributor.author | Chen, Xu | - |
dc.date.accessioned | 2024-09-20T00:30:53Z | - |
dc.date.available | 2024-09-20T00:30:53Z | - |
dc.date.issued | 2024-05-01 | - |
dc.identifier.citation | IEEE Transactions on Parallel and Distributed Systems, 2024, v. 35, n. 5, p. 720-731 | - |
dc.identifier.issn | 1045-9219 | - |
dc.identifier.uri | http://hdl.handle.net/10722/347242 | - |
dc.description.abstract | <p>In this paper, we design a novel cloud-edge collaborative DRL training framework, named Offline-Transfer-Online, which speeds up the convergence of online DRL agents at the edge by having them interact with offline agents in the cloud, with minimal data interchanged and without relying on high-quality offline datasets. Therein, we propose a novel algorithm-independent knowledge distillation algorithm for online RL agents that leverages pre-trained models and the interface between agents and the environment to transfer distilled knowledge efficiently among multiple heterogeneous agents. Extensive experiments show that our algorithm accelerates the convergence of various online agents by two to ten times, with comparable reward achieved in different environments.</p> | -
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers | - |
dc.relation.ispartof | IEEE Transactions on Parallel and Distributed Systems | - |
dc.subject | cloud-edge collaborative networks | - |
dc.subject | deep reinforcement learning | - |
dc.subject | Distributed training | - |
dc.subject | offline-transfer-online | - |
dc.title | An Offline-Transfer-Online Framework for Cloud-Edge Collaborative Distributed Reinforcement Learning | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/TPDS.2024.3360438 | - |
dc.identifier.scopus | eid_2-s2.0-85184333067 | - |
dc.identifier.volume | 35 | - |
dc.identifier.issue | 5 | - |
dc.identifier.spage | 720 | - |
dc.identifier.epage | 731 | - |
dc.identifier.eissn | 1558-2183 | - |
dc.identifier.issnl | 1045-9219 | - |