Conference Paper: Reliable Reinforcement Learning Based NOMA Schemes for URLLC
Title | Reliable Reinforcement Learning Based NOMA Schemes for URLLC |
---|---|
Authors | Ahsan, Waleed; Yi, Wenqiang; Liu, Yuanwei; Nallanathan, Arumugam |
Issue Date | 2021 |
Citation | Proceedings - IEEE Global Communications Conference, GLOBECOM, 2021 |
Abstract | In this paper, we propose a deep state-action-reward-state-action (SARSA)-λ learning approach for optimising the uplink resource allocation in non-orthogonal multiple access (NOMA) aided ultra-reliable low-latency communication (URLLC). To reduce the mean decoding error probability in time-varying network environments, this work designs a reliable learning algorithm that provides long-term resource allocation, where the reward feedback is based on instantaneous network performance. With the aid of the proposed algorithm, this paper addresses three main challenges of reliable resource sharing in NOMA-URLLC networks: 1) dynamic user clustering; 2) an instantaneous feedback system; and 3) optimal resource allocation. All of these designs interact with the considered communication environment. The simulation outcomes show that: 1) compared with the traditional Q-learning algorithm, the proposed solution converges faster and obtains better performance; 2) NOMA-assisted URLLC outperforms traditional OMA systems in terms of decoding error probabilities; and 3) the dynamic feedback system is efficient for the long-term learning process. |
Persistent Identifier | http://hdl.handle.net/10722/350032 |
ISSN | 2334-0983 |
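The learning approach described in the abstract builds on the classic on-policy SARSA update rule. As a point of reference only, here is a minimal tabular SARSA sketch; it is generic, not the paper's deep-network implementation, and the state/action encodings and hyperparameter values are hypothetical:

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """One on-policy temporal-difference (SARSA) update of a tabular Q-function.

    Unlike Q-learning, the bootstrap target uses the action a_next that the
    policy actually selects in s_next, not the greedy maximum.
    """
    q = Q.get((s, a), 0.0)                 # current estimate Q(s, a)
    q_next = Q.get((s_next, a_next), 0.0)  # estimate for the next (state, action) pair
    Q[(s, a)] = q + alpha * (r + gamma * q_next - q)
    return Q[(s, a)]

# One step from an empty table with reward 1.0: Q(s, a) moves to alpha * r = 0.1
Q = {}
sarsa_update(Q, s=0, a=1, r=1.0, s_next=1, a_next=0)
```

In a resource-allocation setting such as the one the abstract describes, the reward would be derived from instantaneous network performance (e.g. decoding error probability), but that mapping is specific to the paper and is not reproduced here.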
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ahsan, Waleed | - |
dc.contributor.author | Yi, Wenqiang | - |
dc.contributor.author | Liu, Yuanwei | - |
dc.contributor.author | Nallanathan, Arumugam | - |
dc.date.accessioned | 2024-10-17T07:02:36Z | - |
dc.date.available | 2024-10-17T07:02:36Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Proceedings - IEEE Global Communications Conference, GLOBECOM, 2021 | - |
dc.identifier.issn | 2334-0983 | - |
dc.identifier.uri | http://hdl.handle.net/10722/350032 | - |
dc.description.abstract | In this paper, we propose a deep state-action-reward-state-action (SARSA)-λ learning approach for optimising the uplink resource allocation in non-orthogonal multiple access (NOMA) aided ultra-reliable low-latency communication (URLLC). To reduce the mean decoding error probability in time-varying network environments, this work designs a reliable learning algorithm that provides long-term resource allocation, where the reward feedback is based on instantaneous network performance. With the aid of the proposed algorithm, this paper addresses three main challenges of reliable resource sharing in NOMA-URLLC networks: 1) dynamic user clustering; 2) an instantaneous feedback system; and 3) optimal resource allocation. All of these designs interact with the considered communication environment. The simulation outcomes show that: 1) compared with the traditional Q-learning algorithm, the proposed solution converges faster and obtains better performance; 2) NOMA-assisted URLLC outperforms traditional OMA systems in terms of decoding error probabilities; and 3) the dynamic feedback system is efficient for the long-term learning process. | -
dc.language | eng | - |
dc.relation.ispartof | Proceedings - IEEE Global Communications Conference, GLOBECOM | - |
dc.title | Reliable Reinforcement Learning Based NOMA Schemes for URLLC | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/GLOBECOM46510.2021.9685621 | - |
dc.identifier.scopus | eid_2-s2.0-85184366714 | - |
dc.identifier.eissn | 2576-6813 | - |