Conference Paper: Cache-Aided NOMA Mobile Edge Computing: A Reinforcement Learning Approach

Title: Cache-Aided NOMA Mobile Edge Computing: A Reinforcement Learning Approach
Authors: Yang, Zhong; Liu, Yuanwei; Chen, Yue; Al-Dhahir, Naofal
Keywords: Bayesian learning automata (BLA); mobile edge computing (MEC); multi-agent Q-learning (MAQ-learning); non-orthogonal multiple access (NOMA)
Issue Date: 2020
Citation: IEEE Transactions on Wireless Communications, 2020, v. 19, n. 10, p. 6899-6915
Abstract: A novel non-orthogonal multiple access (NOMA) based cache-aided mobile edge computing (MEC) framework is proposed. To efficiently allocate communication and computation resources to users' computation task requests, we propose a long short-term memory (LSTM) network to predict task popularity. Based on the predicted task popularity, a long-term reward maximization problem is formulated that jointly optimizes the task offloading decisions, computation resource allocation, and caching decisions. To tackle this challenging problem, a single-agent Q-learning (SAQ-learning) algorithm is invoked to learn a long-term resource allocation strategy. Furthermore, a Bayesian learning automata (BLA) based multi-agent Q-learning (MAQ-learning) algorithm is proposed for the task offloading decisions. More specifically, a BLA-based action selection scheme is proposed for the agents in MAQ-learning to select the optimal action in every state. We prove that the BLA-based action selection scheme is instantaneously self-correcting and that the selected action is optimal for each state. Extensive simulation results demonstrate that: 1) the prediction error of the proposed LSTM-based task popularity prediction decreases with an increasing learning rate; 2) the proposed framework significantly outperforms benchmarks such as all-local computing, all-offloading computing, and non-cache computing; 3) the proposed BLA-based MAQ-learning achieves improved performance compared to the conventional MAQ-learning algorithm.
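The BLA-based action selection the abstract describes can be pictured as sampling from a Bayesian posterior over each action's reward probability and acting greedily on the draws. The sketch below is only an illustrative assumption of that idea (the class name, the two-action setting, and the toy reward probabilities are all invented here, not taken from the paper):

```python
import random

random.seed(0)  # make the toy run repeatable

class BayesianLearningAutomaton:
    """Keeps a Beta posterior over each action's success probability and
    selects the action whose sampled value is largest."""

    def __init__(self, n_actions=2):
        # Beta(1, 1) = uniform prior for every action
        self.alpha = [1] * n_actions
        self.beta = [1] * n_actions

    def select_action(self):
        # Draw one sample per action from its posterior, pick the best draw
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, action, success):
        # Binary feedback sharpens the chosen action's posterior
        if success:
            self.alpha[action] += 1
        else:
            self.beta[action] += 1

# Toy environment (an assumption for illustration): action 1 succeeds
# 80% of the time, action 0 only 30% of the time.
bla = BayesianLearningAutomaton(n_actions=2)
true_probs = [0.3, 0.8]
for _ in range(2000):
    a = bla.select_action()
    bla.update(a, random.random() < true_probs[a])
```

Because the posteriors concentrate as feedback accumulates, the automaton increasingly selects the better action on its own, which is the self-correcting behavior the abstract claims for the BLA-based scheme.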
Persistent Identifier: http://hdl.handle.net/10722/349475
ISSN: 1536-1276
2023 Impact Factor: 8.9
2023 SCImago Journal Rankings: 5.371


DC Field | Value | Language
dc.contributor.author | Yang, Zhong | -
dc.contributor.author | Liu, Yuanwei | -
dc.contributor.author | Chen, Yue | -
dc.contributor.author | Al-Dhahir, Naofal | -
dc.date.accessioned | 2024-10-17T06:58:46Z | -
dc.date.available | 2024-10-17T06:58:46Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | IEEE Transactions on Wireless Communications, 2020, v. 19, n. 10, p. 6899-6915 | -
dc.identifier.issn | 1536-1276 | -
dc.identifier.uri | http://hdl.handle.net/10722/349475 | -
dc.description.abstract | A novel non-orthogonal multiple access (NOMA) based cache-aided mobile edge computing (MEC) framework is proposed. To efficiently allocate communication and computation resources to users' computation task requests, we propose a long short-term memory (LSTM) network to predict task popularity. Based on the predicted task popularity, a long-term reward maximization problem is formulated that jointly optimizes the task offloading decisions, computation resource allocation, and caching decisions. To tackle this challenging problem, a single-agent Q-learning (SAQ-learning) algorithm is invoked to learn a long-term resource allocation strategy. Furthermore, a Bayesian learning automata (BLA) based multi-agent Q-learning (MAQ-learning) algorithm is proposed for the task offloading decisions. More specifically, a BLA-based action selection scheme is proposed for the agents in MAQ-learning to select the optimal action in every state. We prove that the BLA-based action selection scheme is instantaneously self-correcting and that the selected action is optimal for each state. Extensive simulation results demonstrate that: 1) the prediction error of the proposed LSTM-based task popularity prediction decreases with an increasing learning rate; 2) the proposed framework significantly outperforms benchmarks such as all-local computing, all-offloading computing, and non-cache computing; 3) the proposed BLA-based MAQ-learning achieves improved performance compared to the conventional MAQ-learning algorithm. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Transactions on Wireless Communications | -
dc.subject | Bayesian learning automata (BLA) | -
dc.subject | mobile edge computing (MEC) | -
dc.subject | multi-agent Q-learning (MAQ-learning) | -
dc.subject | non-orthogonal multiple access (NOMA) | -
dc.title | Cache-Aided NOMA Mobile Edge Computing: A Reinforcement Learning Approach | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TWC.2020.3006922 | -
dc.identifier.scopus | eid_2-s2.0-85092763769 | -
dc.identifier.volume | 19 | -
dc.identifier.issue | 10 | -
dc.identifier.spage | 6899 | -
dc.identifier.epage | 6915 | -
dc.identifier.eissn | 1558-2248 | -
