Article: SLIDE: A Unified Mesh and Texture Generation Framework with Enhanced Geometric Control and Multi-view Consistency

Title: SLIDE: A Unified Mesh and Texture Generation Framework with Enhanced Geometric Control and Multi-view Consistency
Authors: Wang, Jinyi; Lyu, Zhaoyang; Fei, Ben; Yao, Jiangchao; Zhang, Ya; Dai, Bo; Lin, Dahua; He, Ying; Wang, Yanfeng
Keywords: Controllable generation; Latent diffusion model; Mesh generation; Texture generation
Issue Date: 1-Jun-2025
Publisher: Springer
Citation: International Journal of Computer Vision, 2025, v. 133, p. 3105-3128
Abstract

The generation of textured mesh is crucial for computer graphics and virtual content creation. However, current generative models often struggle with challenges such as irregular mesh structures and inconsistencies in multi-view textures. In this study, we present a unified framework for both geometry generation and texture generation, utilizing a novel sparse latent point diffusion model that specifically addresses the geometric aspects of models. Our approach employs point clouds as an efficient intermediate representation, encoding them into sparse latent points with semantically meaningful features for precise geometric control. While the sparse latent points facilitate a high-level control over the geometry, shaping the overall structure and fine details of the meshes, this control does not extend to textures. To address this, we propose a separate texture generation process that integrates multi-view priors post-geometry generation, effectively resolving the issue of multi-view texture inconsistency. This process ensures the production of coherent and high-quality textures that complement the precisely generated meshes, thereby creating visually appealing and detailed models. Our framework distinctively separates the control mechanisms for geometry and texture, leading to significant improvements in the generation of complex, textured 3D content. Evaluations on the ShapeNet dataset for geometry and the Objaverse dataset for textures demonstrate that our model surpasses existing methods in terms of geometric quality, control, and the generation of coherent, high-quality textures.
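The abstract describes encoding dense point clouds into a small set of sparse latent points that anchor high-level geometric control. As an illustration only (not the authors' code; the function name is hypothetical, and the record does not state the sampling scheme), a common way to pick such well-spread anchor points is farthest point sampling:

```python
import math
import random

def farthest_point_sampling(points, k, seed=0):
    """Greedily pick k well-spread anchor points from a dense point cloud.

    Each new anchor is the point farthest from all anchors chosen so far,
    so the k anchors roughly cover the shape's overall structure.
    """
    rng = random.Random(seed)
    chosen = [rng.randrange(len(points))]
    # dist[i] = distance from point i to its nearest chosen anchor so far
    dist = [math.inf] * len(points)
    for _ in range(k - 1):
        last = points[chosen[-1]]
        for i, p in enumerate(points):
            dist[i] = min(dist[i], math.dist(p, last))
        chosen.append(max(range(len(points)), key=dist.__getitem__))
    return [points[i] for i in chosen]

# Toy usage: 500 random 3-D samples reduced to 8 sparse anchor points.
rng = random.Random(1)
cloud = [(rng.random(), rng.random(), rng.random()) for _ in range(500)]
anchors = farthest_point_sampling(cloud, 8)
```

In the framework as described, features attached to such sparse points would then be generated by a latent diffusion model and decoded back into a dense shape; texture generation happens in a separate post-geometry stage.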


Persistent Identifier: http://hdl.handle.net/10722/358386
ISSN: 0920-5691
2023 Impact Factor: 11.6
2023 SCImago Journal Rankings: 6.668


DC Field: Value
dc.contributor.author: Wang, Jinyi
dc.contributor.author: Lyu, Zhaoyang
dc.contributor.author: Fei, Ben
dc.contributor.author: Yao, Jiangchao
dc.contributor.author: Zhang, Ya
dc.contributor.author: Dai, Bo
dc.contributor.author: Lin, Dahua
dc.contributor.author: He, Ying
dc.contributor.author: Wang, Yanfeng
dc.date.accessioned: 2025-08-07T00:31:55Z
dc.date.available: 2025-08-07T00:31:55Z
dc.date.issued: 2025-06-01
dc.identifier.citation: International Journal of Computer Vision, 2025, v. 133, p. 3105-3128
dc.identifier.issn: 0920-5691
dc.identifier.uri: http://hdl.handle.net/10722/358386
dc.language: eng
dc.publisher: Springer
dc.relation.ispartof: International Journal of Computer Vision
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Controllable generation
dc.subject: Latent diffusion model
dc.subject: Mesh generation
dc.subject: Texture generation
dc.title: SLIDE: A Unified Mesh and Texture Generation Framework with Enhanced Geometric Control and Multi-view Consistency
dc.type: Article
dc.identifier.doi: 10.1007/s11263-024-02326-x
dc.identifier.scopus: eid_2-s2.0-85212835634
dc.identifier.volume: 133
dc.identifier.spage: 3105
dc.identifier.epage: 3128
dc.identifier.eissn: 1573-1405
dc.identifier.issnl: 0920-5691
