Article: Path Design and Resource Management for NOMA Enhanced Indoor Intelligent Robots

Title: Path Design and Resource Management for NOMA Enhanced Indoor Intelligent Robots
Authors: Zhong, Ruikang; Liu, Xiao; Liu, Yuanwei; Chen, Yue; Wang, Xianbin
Keywords: Indoor path design; intelligent robot; non-orthogonal multiple access; radio map; reinforcement learning
Issue Date: 2022
Citation: IEEE Transactions on Wireless Communications, 2022, v. 21, n. 10, p. 8007-8021
Abstract: A communication-enabled indoor intelligent robot (IR) service framework is proposed, in which the non-orthogonal multiple access (NOMA) technique is adopted to enable highly reliable communications. In cooperation with the up-to-date indoor channel model recently proposed by the International Telecommunication Union (ITU), a Lego modeling method is proposed that deterministically describes the indoor layout and channel state in order to construct the radio map. The investigated radio map is invoked as a virtual environment to train the reinforcement learning agent, which saves training time and hardware costs. Building on the proposed communication model, the motions of IRs that need to reach designated mission destinations and the corresponding downlink power allocation policy are jointly optimized to maximize the mission efficiency and communication reliability of the IRs. To solve this optimization problem, a novel reinforcement learning approach named the deep transfer deterministic policy gradient (DT-DPG) algorithm is proposed. Our simulation results demonstrate the following: 1) with the aid of the NOMA technique, the communication reliability of IRs is effectively improved; 2) the radio map is qualified to serve as a virtual training environment, and its statistical channel state information improves training efficiency by about 30%; 3) the proposed DT-DPG algorithm is superior to the conventional deep deterministic policy gradient (DDPG) algorithm in terms of optimization performance, training time, and the ability to escape local optima.
Persistent Identifier: http://hdl.handle.net/10722/349708
ISSN: 1536-1276
2023 Impact Factor: 8.9
2023 SCImago Journal Rankings: 5.371
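As a rough illustration of the transfer idea described in the abstract above (this code is not from the paper), the sketch below assumes an actor-critic pair has already been pretrained on a radio-map-based virtual environment and reuses those weights to initialise training for the target indoor scenario, followed by a single deterministic policy gradient update. The state/action dimensions, network sizes, and hyperparameters are hypothetical placeholders, and PyTorch is assumed.

# Minimal sketch (not the authors' implementation) of a DDPG-style update with
# weight transfer, loosely mirroring the DT-DPG idea: pretrain in a radio-map
# virtual environment, then reuse the learned weights for the target scenario.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 6, 3  # hypothetical: robot/channel state, motion + power action

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh(),  # bounded continuous actions
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

# 1) Actor/critic assumed pretrained on the radio-map virtual environment.
source_actor, source_critic = Actor(), Critic()

# 2) Transfer: initialise the target-task networks from the pretrained ones
#    instead of from scratch, which is where the training-time saving comes from.
target_actor, target_critic = Actor(), Critic()
target_actor.load_state_dict(source_actor.state_dict())
target_critic.load_state_dict(source_critic.state_dict())

# 3) One deterministic policy gradient step on a dummy batch of states:
#    the actor is updated to maximise the critic's Q-value estimate.
actor_opt = torch.optim.Adam(target_actor.parameters(), lr=1e-3)
states = torch.randn(32, STATE_DIM)
actor_loss = -target_critic(states, target_actor(states)).mean()
actor_opt.zero_grad()
actor_loss.backward()
actor_opt.step()

In a full DDPG/DT-DPG training loop this step would be interleaved with critic updates from a replay buffer and soft target-network updates; the snippet only isolates the transfer-then-update idea.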

 

DC Field: Value
dc.contributor.author: Zhong, Ruikang
dc.contributor.author: Liu, Xiao
dc.contributor.author: Liu, Yuanwei
dc.contributor.author: Chen, Yue
dc.contributor.author: Wang, Xianbin
dc.date.accessioned: 2024-10-17T07:00:17Z
dc.date.available: 2024-10-17T07:00:17Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Transactions on Wireless Communications, 2022, v. 21, n. 10, p. 8007-8021
dc.identifier.issn: 1536-1276
dc.identifier.uri: http://hdl.handle.net/10722/349708
dc.description.abstract: A communication-enabled indoor intelligent robot (IR) service framework is proposed, in which the non-orthogonal multiple access (NOMA) technique is adopted to enable highly reliable communications. In cooperation with the up-to-date indoor channel model recently proposed by the International Telecommunication Union (ITU), a Lego modeling method is proposed that deterministically describes the indoor layout and channel state in order to construct the radio map. The investigated radio map is invoked as a virtual environment to train the reinforcement learning agent, which saves training time and hardware costs. Building on the proposed communication model, the motions of IRs that need to reach designated mission destinations and the corresponding downlink power allocation policy are jointly optimized to maximize the mission efficiency and communication reliability of the IRs. To solve this optimization problem, a novel reinforcement learning approach named the deep transfer deterministic policy gradient (DT-DPG) algorithm is proposed. Our simulation results demonstrate the following: 1) with the aid of the NOMA technique, the communication reliability of IRs is effectively improved; 2) the radio map is qualified to serve as a virtual training environment, and its statistical channel state information improves training efficiency by about 30%; 3) the proposed DT-DPG algorithm is superior to the conventional deep deterministic policy gradient (DDPG) algorithm in terms of optimization performance, training time, and the ability to escape local optima.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Wireless Communications
dc.subject: Indoor path design
dc.subject: intelligent robot
dc.subject: non-orthogonal multiple access
dc.subject: radio map
dc.subject: reinforcement learning
dc.title: Path Design and Resource Management for NOMA Enhanced Indoor Intelligent Robots
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TWC.2022.3163422
dc.identifier.scopus: eid_2-s2.0-85127732388
dc.identifier.volume: 21
dc.identifier.issue: 10
dc.identifier.spage: 8007
dc.identifier.epage: 8021
dc.identifier.eissn: 1558-2248
