Conference Paper: Distributed reinforcement learning for NOMA-enabled mobile edge computing

Title: Distributed reinforcement learning for NOMA-enabled mobile edge computing
Authors: Yang, Zhong; Liu, Yuanwei; Chen, Yue
Issue Date: 2020
Citation: 2020 IEEE International Conference on Communications Workshops, ICC Workshops 2020 - Proceedings, 2020, article no. 9145457
Abstract: A novel non-orthogonal multiple access (NOMA) enabled cache-aided mobile edge computing (MEC) framework is proposed for minimizing the sum energy consumption. The NOMA strategy enables mobile users to offload computation tasks to the access point (AP) simultaneously, which improves the spectrum efficiency. In this article, the considered resource allocation problem is formulated as a long-term reward maximization problem that involves a joint optimization of the task offloading decision, computation resource allocation, and caching decision. To tackle this nontrivial problem, a single-agent Q-learning (SAQ-learning) algorithm is invoked to learn a long-term resource allocation strategy from historical experience. Moreover, a Bayesian learning automata (BLA) based multi-agent Q-learning (MAQ-learning) algorithm is proposed for task offloading decisions. More specifically, a BLA based action selection scheme is proposed for the agents in MAQ-learning to select the optimal action in every state. The proposed BLA based action selection scheme is instantaneously self-correcting; consequently, whenever the success probabilities of the two computing modes (i.e., local computing and offloading) are not equal, the optimal action is eventually identified. Extensive simulations demonstrate that: 1) the proposed cache-aided NOMA MEC framework significantly outperforms other representative benchmark schemes under various network setups; and 2) the effectiveness of the proposed BLA-MAQ-learning algorithm is confirmed by comparison with conventional reinforcement learning algorithms.
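The BLA based action selection summarized above can be illustrated with a short sketch. The following Python example is not the paper's implementation; it assumes the standard two-action Bayesian learning automaton, which keeps a Beta posterior over each action's success probability (here labeled local computing vs. offloading, per the abstract) and picks the action with the larger posterior sample, so each reward or penalty immediately reshapes future selection.

```python
import random


class BayesianLearningAutomaton:
    """Two-action Bayesian learning automaton (illustrative sketch).

    Each action keeps a Beta(alpha, beta) posterior over its success
    probability; an action is chosen by drawing one sample from each
    posterior and taking the larger draw, so the automaton is
    self-correcting: misleading feedback shifts the posteriors only
    slightly and is washed out by later observations.
    """

    ACTIONS = ("local computing", "offloading")

    def __init__(self):
        # Beta(1, 1) = uniform prior for both actions.
        self.alpha = [1, 1]
        self.beta = [1, 1]

    def select_action(self):
        # Thompson-style selection: sample each posterior, pick the max.
        samples = [
            random.betavariate(self.alpha[i], self.beta[i]) for i in range(2)
        ]
        return 0 if samples[0] >= samples[1] else 1

    def update(self, action, reward):
        # Binary feedback: reward=1 (success) or reward=0 (failure).
        if reward:
            self.alpha[action] += 1
        else:
            self.beta[action] += 1


if __name__ == "__main__":
    # Hypothetical environment: offloading (action 1) succeeds with
    # probability 0.8, local computing (action 0) with probability 0.3.
    random.seed(1)
    bla = BayesianLearningAutomaton()
    for _ in range(2000):
        a = bla.select_action()
        p = 0.3 if a == 0 else 0.8
        bla.update(a, 1 if random.random() < p else 0)
    preferred = max((0, 1), key=lambda i: bla.alpha[i] + bla.beta[i])
    print("preferred action:", BayesianLearningAutomaton.ACTIONS[preferred])
```

Because the two success probabilities differ (0.3 vs. 0.8 in this toy environment), the posterior for the better action concentrates and it is selected almost exclusively after enough rounds, mirroring the "optimal action is eventually identified" property claimed in the abstract.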
Persistent Identifier: http://hdl.handle.net/10722/349464

 

DC Field: Value
dc.contributor.author: Yang, Zhong
dc.contributor.author: Liu, Yuanwei
dc.contributor.author: Chen, Yue
dc.date.accessioned: 2024-10-17T06:58:42Z
dc.date.available: 2024-10-17T06:58:42Z
dc.date.issued: 2020
dc.identifier.citation: 2020 IEEE International Conference on Communications Workshops, ICC Workshops 2020 - Proceedings, 2020, article no. 9145457
dc.identifier.uri: http://hdl.handle.net/10722/349464
dc.description.abstract: A novel non-orthogonal multiple access (NOMA) enabled cache-aided mobile edge computing (MEC) framework is proposed for minimizing the sum energy consumption. The NOMA strategy enables mobile users to offload computation tasks to the access point (AP) simultaneously, which improves the spectrum efficiency. In this article, the considered resource allocation problem is formulated as a long-term reward maximization problem that involves a joint optimization of the task offloading decision, computation resource allocation, and caching decision. To tackle this nontrivial problem, a single-agent Q-learning (SAQ-learning) algorithm is invoked to learn a long-term resource allocation strategy from historical experience. Moreover, a Bayesian learning automata (BLA) based multi-agent Q-learning (MAQ-learning) algorithm is proposed for task offloading decisions. More specifically, a BLA based action selection scheme is proposed for the agents in MAQ-learning to select the optimal action in every state. The proposed BLA based action selection scheme is instantaneously self-correcting; consequently, whenever the success probabilities of the two computing modes (i.e., local computing and offloading) are not equal, the optimal action is eventually identified. Extensive simulations demonstrate that: 1) the proposed cache-aided NOMA MEC framework significantly outperforms other representative benchmark schemes under various network setups; and 2) the effectiveness of the proposed BLA-MAQ-learning algorithm is confirmed by comparison with conventional reinforcement learning algorithms.
dc.language: eng
dc.relation.ispartof: 2020 IEEE International Conference on Communications Workshops, ICC Workshops 2020 - Proceedings
dc.title: Distributed reinforcement learning for NOMA-enabled mobile edge computing
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICCWorkshops49005.2020.9145457
dc.identifier.scopus: eid_2-s2.0-85090280958
dc.identifier.spage: article no. 9145457
dc.identifier.epage: article no. 9145457
