
Article: DreamComposer++: Empowering Diffusion Models with Multi-View Conditions for 3D Content Generation

Title: DreamComposer++: Empowering Diffusion Models with Multi-View Conditions for 3D Content Generation
Authors: Yang, Yunhan; Chen, Shuo; Huang, Yukun; Wu, Xiaoyang; Guo, Yuan Chen; Lam, Edmund Y.; Zhao, Hengshuang; He, Tong; Liu, Xihui
Keywords: 3D Controllable Generation; Diffusion Model; Novel View Synthesis
Issue Date: 1-Jan-2025
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025
Abstract

Recent advancements in leveraging pre-trained 2D diffusion models achieve the generation of high-quality novel views from a single in-the-wild image. However, existing works face challenges in producing controllable novel views due to the lack of information from multiple views. In this paper, we present DreamComposer++, a flexible and scalable framework designed to improve current view-aware diffusion models by incorporating multi-view conditions. Specifically, DreamComposer++ utilizes a view-aware 3D lifting module to extract 3D representations of an object from various views. These representations are then aggregated and rendered into the latent features of the target view through the multi-view feature fusion module. Finally, the obtained features of the target view are integrated into pre-trained image or video diffusion models for novel view synthesis. Experimental results demonstrate that DreamComposer++ seamlessly integrates with cutting-edge view-aware diffusion models and enhances their ability to generate controllable novel views from multi-view conditions. This advancement facilitates controllable 3D object reconstruction and enables a wide range of applications.
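The multi-view feature fusion step described in the abstract can be illustrated with a minimal toy sketch: per-view feature maps are weighted by each source view's angular proximity to the target view and averaged. The function name, the cosine-proximity weighting, and all shapes below are illustrative assumptions for exposition, not the paper's actual fusion module.

```python
import numpy as np

def fuse_multiview_features(view_feats, view_azimuths, target_azimuth):
    """Fuse per-view latent features into a target-view feature map.

    Each source view is weighted by its angular proximity to the target
    view (closer views contribute more), then a weighted average is taken.
    The cosine-proximity weighting is an illustrative assumption, not the
    learned fusion rule used by DreamComposer++.
    """
    diffs = np.deg2rad(np.asarray(view_azimuths) - target_azimuth)
    # Proximity in [0, 1]: equals 1 when a source view matches the target azimuth.
    weights = (1.0 + np.cos(diffs)) / 2.0
    weights = weights / weights.sum()
    feats = np.stack(view_feats)                 # (num_views, C, H, W)
    return np.tensordot(weights, feats, axes=1)  # (C, H, W)

# Two source views at 0 and 90 degrees; a target at 30 degrees leans
# toward the nearer (0-degree) view.
f0 = np.zeros((4, 8, 8))
f90 = np.ones((4, 8, 8))
fused = fuse_multiview_features([f0, f90], [0.0, 90.0], 30.0)
print(fused.shape, round(float(fused.mean()), 3))  # (4, 8, 8) 0.446
```

The fused mean lies closer to the 0-degree view's features (zeros) than a plain average would give, reflecting the angular weighting.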


Persistent Identifier: http://hdl.handle.net/10722/359177
ISSN: 0162-8828
2023 Impact Factor: 20.8
2023 SCImago Journal Rankings: 6.158

 

DC Field: Value
dc.contributor.author: Yang, Yunhan
dc.contributor.author: Chen, Shuo
dc.contributor.author: Huang, Yukun
dc.contributor.author: Wu, Xiaoyang
dc.contributor.author: Guo, Yuan Chen
dc.contributor.author: Lam, Edmund Y.
dc.contributor.author: Zhao, Hengshuang
dc.contributor.author: He, Tong
dc.contributor.author: Liu, Xihui
dc.date.accessioned: 2025-08-23T00:30:26Z
dc.date.available: 2025-08-23T00:30:26Z
dc.date.issued: 2025-01-01
dc.identifier.citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025
dc.identifier.issn: 0162-8828
dc.identifier.uri: http://hdl.handle.net/10722/359177
dc.description.abstract: Recent advancements in leveraging pre-trained 2D diffusion models achieve the generation of high-quality novel views from a single in-the-wild image. However, existing works face challenges in producing controllable novel views due to the lack of information from multiple views. In this paper, we present DreamComposer++, a flexible and scalable framework designed to improve current view-aware diffusion models by incorporating multi-view conditions. Specifically, DreamComposer++ utilizes a view-aware 3D lifting module to extract 3D representations of an object from various views. These representations are then aggregated and rendered into the latent features of the target view through the multi-view feature fusion module. Finally, the obtained features of the target view are integrated into pre-trained image or video diffusion models for novel view synthesis. Experimental results demonstrate that DreamComposer++ seamlessly integrates with cutting-edge view-aware diffusion models and enhances their ability to generate controllable novel views from multi-view conditions. This advancement facilitates controllable 3D object reconstruction and enables a wide range of applications.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: 3D Controllable Generation
dc.subject: Diffusion Model
dc.subject: Novel View Synthesis
dc.title: DreamComposer++: Empowering Diffusion Models with Multi-View Conditions for 3D Content Generation
dc.type: Article
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.1109/TPAMI.2025.3568190
dc.identifier.scopus: eid_2-s2.0-105004906552
dc.identifier.eissn: 1939-3539
dc.identifier.issnl: 0162-8828
