Conference Paper: Reliable Reinforcement Learning Based NOMA Schemes for URLLC

Title: Reliable Reinforcement Learning Based NOMA Schemes for URLLC
Authors: Ahsan, Waleed; Yi, Wenqiang; Liu, Yuanwei; Nallanathan, Arumugam
Issue Date: 2021
Citation: Proceedings - IEEE Global Communications Conference, GLOBECOM, 2021
Abstract: In this paper, we propose a deep state-action-reward-state-action (SARSA) learning approach for optimising the uplink resource allocation in non-orthogonal multiple access (NOMA) aided ultra-reliable low-latency communication (URLLC). To reduce the mean decoding error probability in time-varying network environments, this work designs a reliable learning algorithm that provides long-term resource allocation, where the reward feedback is based on the instantaneous network performance. With the aid of the proposed algorithm, this paper addresses three main challenges of reliable resource sharing in NOMA-URLLC networks: 1) dynamic user clustering; 2) an instantaneous feedback system; and 3) optimal resource allocation. All of these designs interact with the considered communication environment. The simulation outcomes show that: 1) compared with the traditional Q-learning algorithm, the proposed solution converges faster and achieves better performance; 2) NOMA-assisted URLLC outperforms traditional OMA systems in terms of decoding error probabilities; and 3) the dynamic feedback system is efficient for the long-term learning process.
Persistent Identifier: http://hdl.handle.net/10722/350032
ISSN: 2334-0983

DC Field: Value
dc.contributor.author: Ahsan, Waleed
dc.contributor.author: Yi, Wenqiang
dc.contributor.author: Liu, Yuanwei
dc.contributor.author: Nallanathan, Arumugam
dc.date.accessioned: 2024-10-17T07:02:36Z
dc.date.available: 2024-10-17T07:02:36Z
dc.date.issued: 2021
dc.identifier.citation: Proceedings - IEEE Global Communications Conference, GLOBECOM, 2021
dc.identifier.issn: 2334-0983
dc.identifier.uri: http://hdl.handle.net/10722/350032
dc.language: eng
dc.relation.ispartof: Proceedings - IEEE Global Communications Conference, GLOBECOM
dc.title: Reliable Reinforcement Learning Based NOMA Schemes for URLLC
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/GLOBECOM46510.2021.9685621
dc.identifier.scopus: eid_2-s2.0-85184366714
dc.identifier.eissn: 2576-6813
