Conference Paper: Towards optimally decentralized multi-robot collision avoidance via deep reinforcement learning

Title: Towards optimally decentralized multi-robot collision avoidance via deep reinforcement learning
Authors: Long, Pinxin; Fan, Tingxiang; Liao, Xinyi; Liu, Wenxi; Zhang, Hao; Pan, Jia
Issue Date: 2018
Citation: 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21-25 May 2018. In Conference Proceedings, 2018, p. 6252-6259
Abstract: Developing a safe and efficient collision avoidance policy for multiple robots is challenging in decentralized scenarios, where each robot generates its paths without observing other robots' states and intents. While other distributed multi-robot collision avoidance systems exist, they often require extracting agent-level features to plan a local collision-free action, which can be computationally prohibitive and not robust. More importantly, in practice the performance of these methods is much lower than that of their centralized counterparts. We present a decentralized sensor-level collision avoidance policy for multi-robot systems, which directly maps raw sensor measurements to an agent's steering commands in terms of movement velocity. As a first step toward reducing the performance gap between decentralized and centralized methods, we present a multi-scenario multi-stage training framework to learn an optimal policy. The policy is trained over a large number of robots in rich, complex environments simultaneously, using a policy-gradient-based reinforcement learning algorithm. We validate the learned sensor-level collision avoidance policy in a variety of simulated scenarios with thorough performance evaluations, and show that the final learned policy is able to find time-efficient, collision-free paths for a large-scale robot system. We also demonstrate that the learned policy generalizes well to new scenarios that do not appear at any point during training, including navigating a heterogeneous group of robots and a large-scale scenario with 100 robots. Videos are available at https://sites.google.com/view/drlmaca.
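The core idea in the abstract is a sensor-level policy: a single function that maps a robot's raw laser scan, relative goal position, and current velocity directly to a (linear, angular) velocity command, with no agent-level feature extraction in between. The following is a minimal NumPy sketch of that input-output mapping only. The layer sizes, tanh squashing, and random initialization are illustrative assumptions, not the authors' actual network architecture or trained weights, and no reinforcement learning is performed here.

```python
import numpy as np

def init_policy(num_beams=512, hidden=64, seed=0):
    """Randomly initialize a tiny two-layer policy.

    Input: raw laser scan + relative goal (x, y) + current velocity (v, w).
    Output: a (linear, angular) velocity command.
    All sizes here are illustrative, not the paper's architecture.
    """
    rng = np.random.default_rng(seed)
    in_dim = num_beams + 2 + 2          # scan + goal + velocity
    w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
    w2 = rng.normal(0.0, 0.1, (hidden, 2))
    return w1, w2

def policy_forward(params, scan, goal, vel, v_max=1.0, w_max=1.0):
    """Map raw sensor input directly to a steering command
    (sensor-level: no intermediate agent-level features)."""
    w1, w2 = params
    x = np.concatenate([scan, goal, vel])   # one flat observation vector
    h = np.tanh(x @ w1)                     # hidden features
    out = np.tanh(h @ w2)                   # squash outputs to [-1, 1]
    # Scale to the robot's velocity limits: (linear, angular).
    return np.array([v_max * out[0], w_max * out[1]])

params = init_policy()
scan = np.full(512, 3.5)                    # flat, obstacle-free scan (metres)
cmd = policy_forward(params, scan,
                     goal=np.array([2.0, 0.0]),   # goal 2 m ahead
                     vel=np.array([0.0, 0.0]))    # currently at rest
```

In the paper's framework, the parameters of such a mapping would be optimized with a policy-gradient algorithm over many robots and scenarios simultaneously; the sketch above only shows the shape of the decentralized observation-to-command interface.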
Persistent Identifier: http://hdl.handle.net/10722/308782
ISSN: 1050-4729
2023 SCImago Journal Rankings: 1.620
ISI Accession Number ID: WOS:000446394504106

 

DC Field: Value
dc.contributor.author: Long, Pinxin
dc.contributor.author: Fan, Tingxiang
dc.contributor.author: Liao, Xinyi
dc.contributor.author: Liu, Wenxi
dc.contributor.author: Zhang, Hao
dc.contributor.author: Pan, Jia
dc.date.accessioned: 2021-12-08T07:50:07Z
dc.date.available: 2021-12-08T07:50:07Z
dc.date.issued: 2018
dc.identifier.citation: 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21-25 May 2018. In Conference Proceedings, 2018, p. 6252-6259
dc.identifier.issn: 1050-4729
dc.identifier.uri: http://hdl.handle.net/10722/308782
dc.description.abstract: Developing a safe and efficient collision avoidance policy for multiple robots is challenging in decentralized scenarios, where each robot generates its paths without observing other robots' states and intents. While other distributed multi-robot collision avoidance systems exist, they often require extracting agent-level features to plan a local collision-free action, which can be computationally prohibitive and not robust. More importantly, in practice the performance of these methods is much lower than that of their centralized counterparts. We present a decentralized sensor-level collision avoidance policy for multi-robot systems, which directly maps raw sensor measurements to an agent's steering commands in terms of movement velocity. As a first step toward reducing the performance gap between decentralized and centralized methods, we present a multi-scenario multi-stage training framework to learn an optimal policy. The policy is trained over a large number of robots in rich, complex environments simultaneously, using a policy-gradient-based reinforcement learning algorithm. We validate the learned sensor-level collision avoidance policy in a variety of simulated scenarios with thorough performance evaluations, and show that the final learned policy is able to find time-efficient, collision-free paths for a large-scale robot system. We also demonstrate that the learned policy generalizes well to new scenarios that do not appear at any point during training, including navigating a heterogeneous group of robots and a large-scale scenario with 100 robots. Videos are available at https://sites.google.com/view/drlmaca.
dc.language: eng
dc.relation.ispartof: 2018 IEEE International Conference on Robotics and Automation (ICRA)
dc.title: Towards optimally decentralized multi-robot collision avoidance via deep reinforcement learning
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICRA.2018.8461113
dc.identifier.scopus: eid_2-s2.0-85063146938
dc.identifier.spage: 6252
dc.identifier.epage: 6259
dc.identifier.isi: WOS:000446394504106
