File Download
There are no files associated with this item.

Links for fulltext (may require subscription): Supplementary
Article: Deep Generative Models for Offline Policy Learning: Tutorial, Survey, and Perspectives on Future Directions

Title: Deep Generative Models for Offline Policy Learning: Tutorial, Survey, and Perspectives on Future Directions
Authors: Chen, Jiayu; Ganguly, Bhargav; Xu, Yang; Mei, Yongsheng; Lan, Tian; Aggarwal, Vaneet
Issue Date: 2024
Citation: Transactions on Machine Learning Research, 2024, v. 2024
Abstract: Deep generative models (DGMs) have demonstrated great success across various domains, particularly in generating texts and images with models trained on offline data. Similarly, data-driven decision-making requires learning a generator function from offline data to serve as the policy. Applying DGMs to offline policy learning shows great potential, and numerous studies have explored this direction. However, the field still lacks a comprehensive review, so the developments of its different branches have remained relatively independent. In this paper, we provide the first systematic review of the applications of DGMs to offline policy learning. We cover five mainstream DGMs, namely Variational Auto-Encoders, Generative Adversarial Networks, Normalizing Flows, Transformers, and Diffusion Models, and their applications in both offline reinforcement learning (offline RL) and imitation learning (IL). Offline RL and IL are the two main branches of offline policy learning and are widely adopted techniques for sequential decision-making. For each type of DGM-based offline policy learning, we distill its fundamental scheme, categorize related works by how the DGM is used, and trace the development of algorithms in that field. In addition, we provide in-depth discussions of DGMs and offline policy learning as a summary, based on which we present our perspectives on future research directions. This work offers a hands-on reference for the research progress in DGMs for offline policy learning and aims to inspire improved DGM-based offline RL or IL algorithms. For convenience, we maintain a paper list on.
Persistent Identifier: http://hdl.handle.net/10722/360890
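
The abstract's central idea, using a deep generative model learned from offline data as the policy itself, can be illustrated with a minimal sketch. The following Python example is an assumption for illustration only and is not code from the paper: the architecture, hyperparameters, and toy dataset are invented here. It trains a conditional VAE that generates actions conditioned on states from a fixed offline dataset, one of the VAE-based offline policy learning schemes the survey discusses.

```python
# Illustrative sketch (not from the paper): behavior cloning with a conditional VAE,
# i.e., learning a generator p(action | state) purely from an offline dataset.
import torch
import torch.nn as nn

class ConditionalVAEPolicy(nn.Module):
    def __init__(self, state_dim, action_dim, latent_dim=8, hidden=128):
        super().__init__()
        # Encoder q(z | state, action)
        self.encoder = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),  # outputs mean and log-variance
        )
        # Decoder p(action | state, z): the generator that serves as the policy
        self.decoder = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )
        self.latent_dim = latent_dim

    def forward(self, state, action):
        mu, log_var = self.encoder(torch.cat([state, action], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization trick
        recon = self.decoder(torch.cat([state, z], dim=-1))
        kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(-1).mean()
        return recon, kl

    @torch.no_grad()
    def act(self, state):
        # At deployment, sample z from the prior and decode an action for the state.
        z = torch.randn(state.shape[0], self.latent_dim)
        return self.decoder(torch.cat([state, z], dim=-1))

# Toy offline dataset of (state, action) pairs; a real benchmark dataset would replace this.
states, actions = torch.randn(1024, 4), torch.randn(1024, 2)
policy = ConditionalVAEPolicy(state_dim=4, action_dim=2)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(200):
    recon, kl = policy(states, actions)
    loss = ((recon - actions) ** 2).mean() + 0.1 * kl  # reconstruction + weighted KL term
    opt.zero_grad(); loss.backward(); opt.step()
```

Note that only offline data is used for training; at deployment the decoder alone acts as the policy by sampling a latent from the prior and decoding an action for the current state.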

 

DC Field: Value
dc.contributor.author: Chen, Jiayu
dc.contributor.author: Ganguly, Bhargav
dc.contributor.author: Xu, Yang
dc.contributor.author: Mei, Yongsheng
dc.contributor.author: Lan, Tian
dc.contributor.author: Aggarwal, Vaneet
dc.date.accessioned: 2025-09-16T04:13:14Z
dc.date.available: 2025-09-16T04:13:14Z
dc.date.issued: 2024
dc.identifier.citation: Transactions on Machine Learning Research, 2024, v. 2024
dc.identifier.uri: http://hdl.handle.net/10722/360890
dc.language: eng
dc.relation.ispartof: Transactions on Machine Learning Research
dc.title: Deep Generative Models for Offline Policy Learning: Tutorial, Survey, and Perspectives on Future Directions
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85206103213
dc.identifier.volume: 2024
dc.identifier.eissn: 2835-8856
