Article: AI Empowered RIS-Assisted NOMA Networks: Deep Learning or Reinforcement Learning?

Title: AI Empowered RIS-Assisted NOMA Networks: Deep Learning or Reinforcement Learning?
Authors: Zhong, Ruikang; Liu, Yuanwei; Mu, Xidong; Chen, Yue; Song, Lingyang
Keywords: Deep learning (DL); non-orthogonal multiple access (NOMA); reconfigurable intelligent surfaces (RIS); reinforcement learning (RL)
Issue Date: 2022
Citation: IEEE Journal on Selected Areas in Communications, 2022, v. 40, n. 1, p. 182-196
Abstract: A reconfigurable intelligent surface (RIS)-assisted multi-user downlink communication system over fading channels is investigated, where both non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA) schemes are employed. In particular, the time overhead for configuring the RIS reflective elements at the beginning of each fading channel is considered. The optimization goal is maximizing the effective throughput of the entire transmission period by jointly optimizing the phase shift of the RIS and the power allocation of the AP for each channel block. In an effort to solve the formulated problem and fill the research vacancy of the performance comparison between different machine learning tools in wireless networks, a deep learning (DL) approach and a reinforcement learning (RL) approach are proposed and their representative superiority and inferiority are investigated. The DL approach can locate the optimal phase shifts with the deep neural network fitting as well as the corresponding power allocation for each user. From the perspective of long-term reward, the phase shift control with configuration overhead can be regarded as a Markov decision process and the RL algorithm is proficient in solving such problems with the assistance of the Bellman equation. The numerical results indicate that: 1) From the perspective of the wireless network, NOMA can achieve a throughput gain of about 42% compared with OMA; 2) The well-trained RL and DL agents are able to achieve the same performance in Rician channel, while RL is superior in the Rayleigh channel; 3) The DL approach has lower complexity and faster convergence, while the RL approach has preferable strategy flexibility.
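
To make the reinforcement-learning formulation described in the abstract concrete: modelling the per-block RIS configuration as a Markov decision process means the optimal action-value function satisfies the Bellman optimality equation Q*(s, a) = E[r(s, a) + gamma * max_a' Q*(s', a')], where the reward r(s, a) can be taken as the effective throughput of a channel block after the RIS configuration overhead is deducted. The Python sketch below runs a plain tabular Q-learning update over a toy, hypothetical action space of discrete RIS phase-shift configurations; the state space, rate table, overhead model and all parameters are illustrative assumptions, not the system model or algorithm of the paper.

# Illustrative sketch only (not the authors' implementation): tabular Q-learning over a
# toy set of discrete RIS phase-shift configurations. Every quantity below is a
# hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 4        # quantized channel states (toy state space)
N_CONFIGS = 8       # candidate RIS phase-shift configurations (toy action space)
GAMMA = 0.9         # discount factor in the Bellman backup
ALPHA = 0.1         # learning rate
EPSILON = 0.1       # epsilon-greedy exploration probability
T_BLOCK = 1.0       # duration of one fading block
TAU_CONFIG = 0.2    # time spent reconfiguring the RIS (configuration overhead)

# Toy per-(state, configuration) achievable-rate table standing in for the channel model.
RATE = rng.uniform(1.0, 5.0, size=(N_STATES, N_CONFIGS))

def step(state, action, prev_action):
    """Return (reward, next_state); changing the RIS configuration costs overhead."""
    overhead = TAU_CONFIG if action != prev_action else 0.0
    # Effective throughput of the block: rate scaled by the time left after configuration.
    reward = RATE[state, action] * (T_BLOCK - overhead) / T_BLOCK
    next_state = int(rng.integers(N_STATES))  # i.i.d. toy channel evolution
    return reward, next_state

Q = np.zeros((N_STATES, N_CONFIGS))
state, prev_action = int(rng.integers(N_STATES)), 0
for _ in range(20000):
    action = int(rng.integers(N_CONFIGS)) if rng.random() < EPSILON else int(Q[state].argmax())
    reward, next_state = step(state, action, prev_action)
    # Q-learning (Bellman) update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state, prev_action = next_state, action

print("Greedy RIS configuration per channel state:", Q.argmax(axis=1))

Because changing the configuration between blocks costs the overhead TAU_CONFIG, the learned greedy policy trades instantaneous rate against reconfiguration time, which is the long-term-reward aspect the abstract attributes to the RL approach.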
Persistent Identifier: http://hdl.handle.net/10722/349648
ISSN: 0733-8716
2023 Impact Factor: 13.8
2023 SCImago Journal Rankings: 8.707

 

DC Field: Value
dc.contributor.author: Zhong, Ruikang
dc.contributor.author: Liu, Yuanwei
dc.contributor.author: Mu, Xidong
dc.contributor.author: Chen, Yue
dc.contributor.author: Song, Lingyang
dc.date.accessioned: 2024-10-17T06:59:56Z
dc.date.available: 2024-10-17T06:59:56Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Journal on Selected Areas in Communications, 2022, v. 40, n. 1, p. 182-196
dc.identifier.issn: 0733-8716
dc.identifier.uri: http://hdl.handle.net/10722/349648
dc.description.abstract: A reconfigurable intelligent surface (RIS)-assisted multi-user downlink communication system over fading channels is investigated, where both non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA) schemes are employed. In particular, the time overhead for configuring the RIS reflective elements at the beginning of each fading channel is considered. The optimization goal is maximizing the effective throughput of the entire transmission period by jointly optimizing the phase shift of the RIS and the power allocation of the AP for each channel block. In an effort to solve the formulated problem and fill the research vacancy of the performance comparison between different machine learning tools in wireless networks, a deep learning (DL) approach and a reinforcement learning (RL) approach are proposed and their representative superiority and inferiority are investigated. The DL approach can locate the optimal phase shifts with the deep neural network fitting as well as the corresponding power allocation for each user. From the perspective of long-term reward, the phase shift control with configuration overhead can be regarded as a Markov decision process and the RL algorithm is proficient in solving such problems with the assistance of the Bellman equation. The numerical results indicate that: 1) From the perspective of the wireless network, NOMA can achieve a throughput gain of about 42% compared with OMA; 2) The well-trained RL and DL agents are able to achieve the same performance in Rician channel, while RL is superior in the Rayleigh channel; 3) The DL approach has lower complexity and faster convergence, while the RL approach has preferable strategy flexibility.
dc.language: eng
dc.relation.ispartof: IEEE Journal on Selected Areas in Communications
dc.subject: Deep learning (DL)
dc.subject: non-orthogonal multiple access (NOMA)
dc.subject: reconfigurable intelligent surfaces (RIS)
dc.subject: reinforcement learning (RL)
dc.title: AI Empowered RIS-Assisted NOMA Networks: Deep Learning or Reinforcement Learning?
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/JSAC.2021.3126068
dc.identifier.scopus: eid_2-s2.0-85120574033
dc.identifier.volume: 40
dc.identifier.issue: 1
dc.identifier.spage: 182
dc.identifier.epage: 196
dc.identifier.eissn: 1558-0008
