Article: Transmit power pool design for grant-free NOMA-IoT networks via deep reinforcement learning

Title: Transmit power pool design for grant-free NOMA-IoT networks via deep reinforcement learning
Authors: Fayaz, Muhammad; Yi, Wenqiang; Liu, Yuanwei; Nallanathan, Arumugam
Keywords: Double Q learning; grant-free NOMA; Internet of Things; multi-agent deep reinforcement learning; resource allocation
Issue Date: 2021
Citation: IEEE Transactions on Wireless Communications, 2021, v. 20, n. 11, p. 7626-7641
Abstract: Grant-free non-orthogonal multiple access (GF-NOMA) is a potential multiple access framework for short-packet Internet-of-Things (IoT) networks to enhance connectivity. However, the resource allocation problem in GF-NOMA is challenging due to the absence of closed-loop power control. We design a prototype transmit power pool (PP) to provide open-loop power control: IoT users acquire their transmit power in advance from this prototype PP solely according to their communication distances. First, a multi-agent deep Q-network (DQN) aided GF-NOMA algorithm is proposed to determine the optimal transmit power levels for the prototype PP. More specifically, each IoT user acts as an agent and learns a policy, by interacting with the wireless environment, that guides it to select optimal actions. Second, to prevent the overestimation problem of Q-learning, a double DQN (DDQN) based GF-NOMA algorithm is proposed. Numerical results confirm that the DDQN based algorithm finds the optimal transmit power levels that form the PP. Compared with the conventional online learning approach, the proposed algorithm with the prototype PP converges faster under changing environments because it limits the action space based on previous learning. In terms of throughput, the considered GF-NOMA system outperforms both networks with fixed transmission power, i.e. where all users have the same transmit power, and traditional GF with orthogonal multiple access techniques.
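The abstract motivates DDQN as a remedy for Q-learning's overestimation of action values. A minimal sketch of the distinction, not the paper's implementation (the function names, Q-value arrays, and discount factor below are illustrative only): plain DQN both selects and evaluates the next action with the same value estimates, while DDQN selects the action with the online network but evaluates it with the target network.

```python
# Illustrative sketch of the DQN vs. double-DQN bootstrap target.
# All names and values are hypothetical, not taken from the paper.

def dqn_target(reward, next_q_target, gamma=0.9, done=False):
    """Vanilla DQN target: max over one set of (possibly noisy) Q-estimates.
    Because max() both picks and scores the action, noise inflates the target."""
    if done:
        return reward
    return reward + gamma * max(next_q_target)

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.9, done=False):
    """Double-DQN target: the online network's estimates choose the action,
    the target network's estimates score it, decoupling selection from
    evaluation and curbing overestimation."""
    if done:
        return reward
    # action index with the highest online-network Q-value
    a_star = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[a_star]
```

With disagreeing estimates, e.g. `next_q_online = [1.0, 2.0, 0.5]` and `next_q_target = [0.8, 1.5, 3.0]`, the DDQN target evaluates the online network's choice (index 1) under the target network, giving a smaller bootstrap value than the plain DQN max over the target estimates.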
Persistent Identifier: http://hdl.handle.net/10722/349602
ISSN: 1536-1276
2023 Impact Factor: 8.9
2023 SCImago Journal Rankings: 5.371
ISI Accession Number ID: WOS:000716698500047


DC Field: Value
dc.contributor.author: Fayaz, Muhammad
dc.contributor.author: Yi, Wenqiang
dc.contributor.author: Liu, Yuanwei
dc.contributor.author: Nallanathan, Arumugam
dc.date.accessioned: 2024-10-17T06:59:38Z
dc.date.available: 2024-10-17T06:59:38Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Transactions on Wireless Communications, 2021, v. 20, n. 11, p. 7626-7641
dc.identifier.issn: 1536-1276
dc.identifier.uri: http://hdl.handle.net/10722/349602
dc.description.abstract: Grant-free non-orthogonal multiple access (GF-NOMA) is a potential multiple access framework for short-packet Internet-of-Things (IoT) networks to enhance connectivity. However, the resource allocation problem in GF-NOMA is challenging due to the absence of closed-loop power control. We design a prototype transmit power pool (PP) to provide open-loop power control: IoT users acquire their transmit power in advance from this prototype PP solely according to their communication distances. First, a multi-agent deep Q-network (DQN) aided GF-NOMA algorithm is proposed to determine the optimal transmit power levels for the prototype PP. More specifically, each IoT user acts as an agent and learns a policy, by interacting with the wireless environment, that guides it to select optimal actions. Second, to prevent the overestimation problem of Q-learning, a double DQN (DDQN) based GF-NOMA algorithm is proposed. Numerical results confirm that the DDQN based algorithm finds the optimal transmit power levels that form the PP. Compared with the conventional online learning approach, the proposed algorithm with the prototype PP converges faster under changing environments because it limits the action space based on previous learning. In terms of throughput, the considered GF-NOMA system outperforms both networks with fixed transmission power, i.e. where all users have the same transmit power, and traditional GF with orthogonal multiple access techniques.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Wireless Communications
dc.subject: Double Q learning
dc.subject: grant-free NOMA
dc.subject: Internet of Things
dc.subject: multi-agent deep reinforcement learning
dc.subject: resource allocation
dc.title: Transmit power pool design for grant-free NOMA-IoT networks via deep reinforcement learning
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TWC.2021.3086762
dc.identifier.scopus: eid_2-s2.0-85113701873
dc.identifier.volume: 20
dc.identifier.issue: 11
dc.identifier.spage: 7626
dc.identifier.epage: 7641
dc.identifier.eissn: 1558-2248
dc.identifier.isi: WOS:000716698500047
