Conference Paper: Don’t touch what matters: Task-aware Lipschitz data augmentation for visual reinforcement learning

Title: Don’t touch what matters: Task-aware Lipschitz data augmentation for visual reinforcement learning
Authors: Yuan, Z; Ma, G; Mu, Y; Xia, B; Yuan, B; Wang, X; Luo, P; Xu, H
Keywords: Deep reinforcement learning; Reinforcement learning; Learning in robotics
Issue Date: 2022
Publisher: International Joint Conferences on Artificial Intelligence
Citation: The 31st International Joint Conference on Artificial Intelligence (IJCAI), Vienna, Austria, July 23-29, 2022. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, Vienna, 23-29 July 2022
Abstract: One of the key challenges in visual Reinforcement Learning (RL) is to learn policies that can generalize to unseen environments. Recently, data augmentation techniques aimed at enhancing data diversity have proven effective at improving the generalization ability of learned policies. However, due to the sensitivity of RL training, naively applying data augmentation, which transforms every pixel in a task-agnostic manner, can cause training instability and hurt sample efficiency, further degrading generalization performance. At the heart of this phenomenon are the diverging action distributions and high-variance value estimates produced for augmented images. To alleviate this issue, we propose Task-aware Lipschitz Data Augmentation (TLDA) for visual RL, which explicitly identifies the task-correlated pixels with large Lipschitz constants and augments only the task-irrelevant pixels for stability. We verify the effectiveness of our approach on the DeepMind Control Suite, CARLA, and DeepMind Manipulation tasks. Extensive empirical results show that TLDA improves both sample efficiency and generalization, outperforming previous state-of-the-art methods across three different visual control benchmarks.
Description: Sponsored by International Joint Conferences on Artificial Intelligence (IJCAI); Oral
Persistent Identifier: http://hdl.handle.net/10722/315554

 

DC Field | Value | Language
dc.contributor.author | Yuan, Z | -
dc.contributor.author | Ma, G | -
dc.contributor.author | Mu, Y | -
dc.contributor.author | Xia, B | -
dc.contributor.author | Yuan, B | -
dc.contributor.author | Wang, X | -
dc.contributor.author | Luo, P | -
dc.contributor.author | Xu, H | -
dc.date.accessioned | 2022-08-19T09:00:03Z | -
dc.date.available | 2022-08-19T09:00:03Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | The 31st International Joint Conference on Artificial Intelligence (IJCAI), Vienna, Austria, July 23-29, 2022. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, Vienna, 23-29 July 2022 | -
dc.identifier.uri | http://hdl.handle.net/10722/315554 | -
dc.description | Sponsored by International Joint Conferences on Artificial Intelligence (IJCAI); Oral | -
dc.description.abstract | One of the key challenges in visual Reinforcement Learning (RL) is to learn policies that can generalize to unseen environments. Recently, data augmentation techniques aimed at enhancing data diversity have proven effective at improving the generalization ability of learned policies. However, due to the sensitivity of RL training, naively applying data augmentation, which transforms every pixel in a task-agnostic manner, can cause training instability and hurt sample efficiency, further degrading generalization performance. At the heart of this phenomenon are the diverging action distributions and high-variance value estimates produced for augmented images. To alleviate this issue, we propose Task-aware Lipschitz Data Augmentation (TLDA) for visual RL, which explicitly identifies the task-correlated pixels with large Lipschitz constants and augments only the task-irrelevant pixels for stability. We verify the effectiveness of our approach on the DeepMind Control Suite, CARLA, and DeepMind Manipulation tasks. Extensive empirical results show that TLDA improves both sample efficiency and generalization, outperforming previous state-of-the-art methods across three different visual control benchmarks. | -
dc.language | eng | -
dc.publisher | International Joint Conferences on Artificial Intelligence | -
dc.relation.ispartof | Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, Vienna, 23-29 July 2022 | -
dc.subject | Deep reinforcement learning | -
dc.subject | Reinforcement learning | -
dc.subject | Learning in robotics | -
dc.title | Don’t touch what matters: Task-aware Lipschitz data augmentation for visual reinforcement learning | -
dc.type | Conference_Paper | -
dc.identifier.email | Luo, P: pluo@hku.hk | -
dc.identifier.authority | Luo, P=rp02575 | -
dc.identifier.doi | 10.24963/ijcai.2022/514 | -
dc.identifier.hkuros | 335586 | -
dc.publisher.place | Austria | -
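
The abstract above describes TLDA as identifying task-correlated pixels via large Lipschitz constants and augmenting only the remaining, task-irrelevant pixels. As a rough illustration only, the sketch below approximates per-patch Lipschitz constants by perturbing small image patches and measuring the change in the policy output, then applies an augmentation everywhere except the most sensitive pixels. The function names (estimate_pixel_sensitivity, tlda_augment), the patch-based perturbation estimate, and the quantile threshold are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def estimate_pixel_sensitivity(policy, obs, eps=0.05, n_samples=8, patch=8, rng=None):
    """Approximate a per-patch Lipschitz-like constant: the ratio of the change
    in the policy output to the size of a small perturbation of an image patch."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = obs.shape[:2]
    base = policy(obs)
    sens = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            ratios = []
            for _ in range(n_samples):
                perturbed = obs.copy()
                noise = eps * rng.standard_normal(perturbed[y:y + patch, x:x + patch].shape)
                perturbed[y:y + patch, x:x + patch] += noise
                # output change / input change ~ local Lipschitz estimate
                ratios.append(np.linalg.norm(policy(perturbed) - base)
                              / (np.linalg.norm(noise) + 1e-8))
            sens[y:y + patch, x:x + patch] = np.mean(ratios)
    return sens

def tlda_augment(obs, sens, augment_fn, keep_quantile=0.8):
    """Augment only the low-sensitivity (task-irrelevant) pixels; leave the
    high-sensitivity (task-correlated) pixels untouched."""
    thresh = np.quantile(sens, keep_quantile)
    task_mask = (sens >= thresh)[..., None]  # broadcast over the channel axis
    return np.where(task_mask, obs, augment_fn(obs))
```

In a realistic setup, `policy` would be a learned policy network mapping an HxWxC float observation to action logits or values, and `augment_fn` an off-the-shelf pixel-level augmentation such as random convolution or image overlay; both are placeholders here.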
