Conference Paper: Deep Reinforcement Learning in Cache-Aided MEC Networks

Title: Deep Reinforcement Learning in Cache-Aided MEC Networks
Authors: Yang, Zhong; Liu, Yuanwei; Chen, Yue; Tyson, Gareth
Issue Date: 2019
Citation: IEEE International Conference on Communications, 2019, v. 2019-May, article no. 8761349
Abstract: A novel resource allocation scheme for cache-aided mobile-edge computing (MEC) is proposed, to efficiently offer communication, storage, and computing services for computation-intensive and latency-sensitive tasks. In this paper, the considered resource allocation problem is formulated as a mixed-integer non-linear program (MINLP) that jointly optimizes the task offloading decision, cache allocation, computation allocation, and dynamic power distribution. To tackle this non-trivial problem, a Markov decision process (MDP) is invoked, enabling mobile users and the access point (AP) to learn the optimal offloading and resource allocation policy from historical experience and to automatically improve allocation efficiency. In particular, to break the curse of dimensionality in the MDP state space, a deep reinforcement learning (DRL) algorithm is proposed to solve this optimization problem with low complexity. Moreover, extensive simulations demonstrate that the proposed algorithm achieves quasi-optimal performance under various system setups and significantly outperforms the other representative benchmark methods considered. The effectiveness of the proposed algorithm is further confirmed by comparison with the results of the optimal solution.
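The MDP framing described in the abstract can be illustrated with a toy sketch. This is not the paper's algorithm (the authors use deep RL over a much larger joint state space covering caching, computation, and power allocation); it is a minimal tabular Q-learning example for a single user's binary offloading decision, and every state, reward, and parameter choice here is an illustrative assumption.

```python
import random

# Toy illustration (NOT the paper's algorithm): tabular Q-learning for a
# single user's binary offloading decision. All names and values here are
# hypothetical assumptions made for the sketch.

random.seed(0)

STATES = range(4)        # hypothetical channel-quality levels (3 = best)
ACTIONS = (0, 1)         # 0 = compute locally, 1 = offload to the MEC server

def reward(state, action):
    # Hypothetical latency-based reward: offloading pays off only when the
    # channel is good enough; local computing yields a fixed modest reward.
    return state - 1.0 if action == 1 else 0.5

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

state = random.choice(list(STATES))
for _ in range(5000):
    # epsilon-greedy action selection over the learned Q-values
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    r = reward(state, action)
    next_state = random.choice(list(STATES))  # i.i.d. channel for simplicity
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    # standard Q-learning temporal-difference update
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    state = next_state

# Greedy policy: offload (1) only in the good-channel states
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

The DRL algorithm in the paper replaces the Q-table with a neural network precisely because the joint state space (offloading, caching, computation, power) is far too large to enumerate this way.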
Persistent Identifier: http://hdl.handle.net/10722/349339
ISSN: 1550-3607


DC Field: Value
dc.contributor.author: Yang, Zhong
dc.contributor.author: Liu, Yuanwei
dc.contributor.author: Chen, Yue
dc.contributor.author: Tyson, Gareth
dc.date.accessioned: 2024-10-17T06:57:52Z
dc.date.available: 2024-10-17T06:57:52Z
dc.date.issued: 2019
dc.identifier.citation: IEEE International Conference on Communications, 2019, v. 2019-May, article no. 8761349
dc.identifier.issn: 1550-3607
dc.identifier.uri: http://hdl.handle.net/10722/349339
dc.description.abstract: A novel resource allocation scheme for cache-aided mobile-edge computing (MEC) is proposed, to efficiently offer communication, storage, and computing services for computation-intensive and latency-sensitive tasks. In this paper, the considered resource allocation problem is formulated as a mixed-integer non-linear program (MINLP) that jointly optimizes the task offloading decision, cache allocation, computation allocation, and dynamic power distribution. To tackle this non-trivial problem, a Markov decision process (MDP) is invoked, enabling mobile users and the access point (AP) to learn the optimal offloading and resource allocation policy from historical experience and to automatically improve allocation efficiency. In particular, to break the curse of dimensionality in the MDP state space, a deep reinforcement learning (DRL) algorithm is proposed to solve this optimization problem with low complexity. Moreover, extensive simulations demonstrate that the proposed algorithm achieves quasi-optimal performance under various system setups and significantly outperforms the other representative benchmark methods considered. The effectiveness of the proposed algorithm is further confirmed by comparison with the results of the optimal solution.
dc.language: eng
dc.relation.ispartof: IEEE International Conference on Communications
dc.title: Deep Reinforcement Learning in Cache-Aided MEC Networks
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICC.2019.8761349
dc.identifier.scopus: eid_2-s2.0-85070237774
dc.identifier.volume: 2019-May
dc.identifier.spage: article no. 8761349
dc.identifier.epage: article no. 8761349
