Article: Trajectory optimization for UAV emergency communication with limited user equipment energy: A Safe-DQN approach

Title: Trajectory optimization for UAV emergency communication with limited user equipment energy: A Safe-DQN approach
Authors: Zhang, Tiankui; Lei, Jiayi; Liu, Yuanwei; Feng, Chunyan; Nallanathan, Arumugam
Keywords: Constrained Markov decision-making process; deep reinforcement learning; emergency communication; trajectory design
Issue Date: 2021
Citation: IEEE Transactions on Green Communications and Networking, 2021, v. 5, n. 3, p. 1236-1247
Abstract: In post-disaster scenarios, it is challenging to provide reliable and flexible emergency communications, especially when the mobile infrastructure is seriously damaged. This article investigates unmanned aerial vehicle (UAV)-based emergency communication networks, in which a UAV is used as a mobile aerial base station for collecting information from ground users in affected areas. Due to the breakdown of the ground power system after disasters, the available energy of affected user equipment (UE) is limited. Meanwhile, owing to the complex geographical conditions after disasters, there are obstacles affecting the flight of the UAV. Aiming at maximizing the uplink throughput of UAV networks during the flying time, we formulate the UAV trajectory optimization problem considering the UE energy limitation and the locations of obstacles. Since the constraint on UE energy is dynamic and long-term cumulative, it is hard to solve directly. We transform the problem into a constrained Markov decision-making process (CMDP) with the UAV as the agent. To tackle the CMDP, we propose a safe-deep-Q-network (safe-DQN)-based UAV trajectory design algorithm, where the UAV learns to select the optimal action from reasonable policy sets. Simulation results reveal that: 1) the uplink throughput of the proposed algorithm converges within multiple iterations and 2) compared with the benchmark algorithms, the proposed algorithm performs better in terms of uplink throughput and UE energy efficiency, achieving a good trade-off between UE energy consumption and uplink throughput.
Persistent Identifier: http://hdl.handle.net/10722/349548
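The abstract's core idea, having the agent pick the best action only from a "safe" set that respects the long-term UE energy constraint, can be sketched as follows. This is an illustrative sketch only, not the paper's safe-DQN algorithm: the arrays `q_reward` and `q_cost` stand in for hypothetical learned Q-value estimates of uplink throughput and cumulative UE energy cost per action.

```python
def safe_greedy_action(q_reward, q_cost, cost_budget):
    """Pick the reward-maximizing action among those whose estimated
    cumulative cost stays within the budget (illustrative sketch;
    q_reward / q_cost are hypothetical learned Q-value estimates).
    Falls back to the minimum-cost action if no action is safe."""
    safe = [a for a, c in enumerate(q_cost) if c <= cost_budget]
    if not safe:
        # No action satisfies the constraint: minimize the cost instead.
        return min(range(len(q_cost)), key=lambda a: q_cost[a])
    return max(safe, key=lambda a: q_reward[a])

# Action 1 has the highest reward but violates the energy budget,
# so the safe set is {0, 2} and the best safe action is 2.
print(safe_greedy_action([1.0, 3.0, 2.0], [0.5, 2.0, 0.8], cost_budget=1.0))  # 2
```

In a full safe-DQN, both estimators would be learned networks updated during training; the point of the sketch is only the constrained greedy selection step.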

 

DC Field: Value
dc.contributor.author: Zhang, Tiankui
dc.contributor.author: Lei, Jiayi
dc.contributor.author: Liu, Yuanwei
dc.contributor.author: Feng, Chunyan
dc.contributor.author: Nallanathan, Arumugam
dc.date.accessioned: 2024-10-17T06:59:16Z
dc.date.available: 2024-10-17T06:59:16Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Transactions on Green Communications and Networking, 2021, v. 5, n. 3, p. 1236-1247
dc.identifier.uri: http://hdl.handle.net/10722/349548
dc.description.abstract: In post-disaster scenarios, it is challenging to provide reliable and flexible emergency communications, especially when the mobile infrastructure is seriously damaged. This article investigates unmanned aerial vehicle (UAV)-based emergency communication networks, in which a UAV is used as a mobile aerial base station for collecting information from ground users in affected areas. Due to the breakdown of the ground power system after disasters, the available energy of affected user equipment (UE) is limited. Meanwhile, owing to the complex geographical conditions after disasters, there are obstacles affecting the flight of the UAV. Aiming at maximizing the uplink throughput of UAV networks during the flying time, we formulate the UAV trajectory optimization problem considering the UE energy limitation and the locations of obstacles. Since the constraint on UE energy is dynamic and long-term cumulative, it is hard to solve directly. We transform the problem into a constrained Markov decision-making process (CMDP) with the UAV as the agent. To tackle the CMDP, we propose a safe-deep-Q-network (safe-DQN)-based UAV trajectory design algorithm, where the UAV learns to select the optimal action from reasonable policy sets. Simulation results reveal that: 1) the uplink throughput of the proposed algorithm converges within multiple iterations and 2) compared with the benchmark algorithms, the proposed algorithm performs better in terms of uplink throughput and UE energy efficiency, achieving a good trade-off between UE energy consumption and uplink throughput.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Green Communications and Networking
dc.subject: Constrained Markov decision-making process
dc.subject: deep reinforcement learning
dc.subject: emergency communication
dc.subject: trajectory design
dc.title: Trajectory optimization for UAV emergency communication with limited user equipment energy: A Safe-DQN approach
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TGCN.2021.3068333
dc.identifier.scopus: eid_2-s2.0-85103295702
dc.identifier.volume: 5
dc.identifier.issue: 3
dc.identifier.spage: 1236
dc.identifier.epage: 1247
dc.identifier.eissn: 2473-2400
