Article: Multi-UAV Dynamic Wireless Networking with Deep Reinforcement Learning

Title: Multi-UAV Dynamic Wireless Networking with Deep Reinforcement Learning
Authors: Wang, Qiang; Zhang, Wenqi; Liu, Yuanwei; Liu, Ying
Keywords: Capacity; deep reinforcement learning; movement; unmanned aerial vehicles
Issue Date: 2019
Citation: IEEE Communications Letters, 2019, v. 23, n. 12, p. 2243-2246
Abstract: This letter investigates a novel unmanned aerial vehicle (UAV)-enabled wireless communication system, where multiple UAVs transmit information to multiple ground terminals (GTs). We study how the UAVs can optimally employ their mobility to maximize the real-time downlink capacity while covering all GTs. The system capacity is characterized by optimizing the UAV locations subject to the coverage constraint. We formulate the UAV movement problem as a Constrained Markov Decision Process (CMDP) and employ Q-learning to solve it. Since the state space of the UAV movement problem is high-dimensional, we propose a Dueling Deep Q-Network (DDQN) algorithm, which introduces neural networks and a dueling structure into Q-learning. Simulation results demonstrate that the proposed movement algorithm is able to track the movement of the GTs and obtain the real-time optimal capacity, subject to the coverage constraint.
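The dueling structure mentioned in the abstract separates a state-value estimate V(s) from per-action advantages A(s, a) and recombines them into Q-values. A minimal sketch of that aggregation step, with plain numbers standing in for the two network heads (the function name and example values are illustrative, not from the paper):

```python
def dueling_q_values(state_value, advantages):
    """Combine V(s) and per-action advantages A(s, a) into Q-values
    using the standard dueling aggregation:
    Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    mean_adv = sum(advantages) / len(advantages)
    return [state_value + a - mean_adv for a in advantages]

# Example: one state value and advantages for three candidate UAV moves.
q = dueling_q_values(1.0, [0.0, 2.0, 4.0])
print(q)  # [-1.0, 1.0, 3.0]
```

Subtracting the mean advantage makes the V/A decomposition identifiable; in a full DDQN both quantities would come from separate heads of the same neural network.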
Persistent Identifier: http://hdl.handle.net/10722/349378
ISSN: 1089-7798
2023 Impact Factor: 3.7
2023 SCImago Journal Rankings: 1.887
ISI Accession Number ID: WOS:000502784300022

 

DC Field: Value
dc.contributor.author: Wang, Qiang
dc.contributor.author: Zhang, Wenqi
dc.contributor.author: Liu, Yuanwei
dc.contributor.author: Liu, Ying
dc.date.accessioned: 2024-10-17T06:58:08Z
dc.date.available: 2024-10-17T06:58:08Z
dc.date.issued: 2019
dc.identifier.citation: IEEE Communications Letters, 2019, v. 23, n. 12, p. 2243-2246
dc.identifier.issn: 1089-7798
dc.identifier.uri: http://hdl.handle.net/10722/349378
dc.description.abstract: This letter investigates a novel unmanned aerial vehicle (UAV)-enabled wireless communication system, where multiple UAVs transmit information to multiple ground terminals (GTs). We study how the UAVs can optimally employ their mobility to maximize the real-time downlink capacity while covering all GTs. The system capacity is characterized by optimizing the UAV locations subject to the coverage constraint. We formulate the UAV movement problem as a Constrained Markov Decision Process (CMDP) and employ Q-learning to solve it. Since the state space of the UAV movement problem is high-dimensional, we propose a Dueling Deep Q-Network (DDQN) algorithm, which introduces neural networks and a dueling structure into Q-learning. Simulation results demonstrate that the proposed movement algorithm is able to track the movement of the GTs and obtain the real-time optimal capacity, subject to the coverage constraint.
dc.language: eng
dc.relation.ispartof: IEEE Communications Letters
dc.subject: Capacity
dc.subject: deep reinforcement learning
dc.subject: movement
dc.subject: unmanned aerial vehicles
dc.title: Multi-UAV Dynamic Wireless Networking with Deep Reinforcement Learning
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/LCOMM.2019.2940191
dc.identifier.scopus: eid_2-s2.0-85076677339
dc.identifier.volume: 23
dc.identifier.issue: 12
dc.identifier.spage: 2243
dc.identifier.epage: 2246
dc.identifier.eissn: 1558-2558
dc.identifier.isi: WOS:000502784300022
