Article: SLIDE: A Unified Mesh and Texture Generation Framework with Enhanced Geometric Control and Multi-view Consistency
| Title | SLIDE: A Unified Mesh and Texture Generation Framework with Enhanced Geometric Control and Multi-view Consistency |
|---|---|
| Authors | Wang, Jinyi; Lyu, Zhaoyang; Fei, Ben; Yao, Jiangchao; Zhang, Ya; Dai, Bo; Lin, Dahua; He, Ying; Wang, Yanfeng |
| Keywords | Controllable generation; Latent diffusion model; Mesh generation; Texture generation |
| Issue Date | 1-Jun-2025 |
| Publisher | Springer |
| Citation | International Journal of Computer Vision, 2025, v. 133, p. 3105-3128 |
| Abstract | The generation of textured mesh is crucial for computer graphics and virtual content creation. However, current generative models often struggle with challenges such as irregular mesh structures and inconsistencies in multi-view textures. In this study, we present a unified framework for both geometry generation and texture generation, utilizing a novel sparse latent point diffusion model that specifically addresses the geometric aspects of models. Our approach employs point clouds as an efficient intermediate representation, encoding them into sparse latent points with semantically meaningful features for precise geometric control. While the sparse latent points facilitate a high-level control over the geometry, shaping the overall structure and fine details of the meshes, this control does not extend to textures. To address this, we propose a separate texture generation process that integrates multi-view priors post-geometry generation, effectively resolving the issue of multi-view texture inconsistency. This process ensures the production of coherent and high-quality textures that complement the precisely generated meshes, thereby creating visually appealing and detailed models. Our framework distinctively separates the control mechanisms for geometry and texture, leading to significant improvements in the generation of complex, textured 3D content. Evaluations on the ShapeNet dataset for geometry and the Objaverse dataset for textures demonstrate that our model surpasses existing methods in terms of geometric quality, control, and the generation of coherent, high-quality textures. |
| Persistent Identifier | http://hdl.handle.net/10722/358386 |
| ISSN | 0920-5691 (2023 Impact Factor: 11.6; 2023 SCImago Journal Rankings: 6.668) |
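The abstract describes compressing dense point clouds into a small set of sparse latent points that carry semantically meaningful features for geometric control. A common way to select such well-spread anchor points from a cloud is farthest point sampling; the sketch below is a minimal NumPy illustration of that subsampling step only, not the paper's actual implementation — the function name, parameters, and point counts here are hypothetical.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Pick k well-spread points from an (N, 3) point cloud.

    Greedy strategy: start from a random point, then repeatedly add the
    point farthest from everything chosen so far. Sparse-latent-point
    pipelines typically begin with a subsampling step of this kind.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = [int(rng.integers(n))]
    # distance from every point to the nearest chosen point so far
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))  # farthest remaining point
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return points[chosen]

# Toy usage: reduce a 2048-point cloud to 16 sparse anchor positions.
cloud = np.random.default_rng(1).normal(size=(2048, 3))
latent = farthest_point_sampling(cloud, 16)
print(latent.shape)  # (16, 3)
```

In the framework described by the abstract, each such sparse point would additionally carry a learned feature vector, and a diffusion model would operate on those sparse latents rather than on the full cloud.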
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Wang, Jinyi | - |
| dc.contributor.author | Lyu, Zhaoyang | - |
| dc.contributor.author | Fei, Ben | - |
| dc.contributor.author | Yao, Jiangchao | - |
| dc.contributor.author | Zhang, Ya | - |
| dc.contributor.author | Dai, Bo | - |
| dc.contributor.author | Lin, Dahua | - |
| dc.contributor.author | He, Ying | - |
| dc.contributor.author | Wang, Yanfeng | - |
| dc.date.accessioned | 2025-08-07T00:31:55Z | - |
| dc.date.available | 2025-08-07T00:31:55Z | - |
| dc.date.issued | 2025-06-01 | - |
| dc.identifier.citation | International Journal of Computer Vision, 2025, v. 133, p. 3105-3128 | - |
| dc.identifier.issn | 0920-5691 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/358386 | - |
| dc.description.abstract | <p>The generation of textured mesh is crucial for computer graphics and virtual content creation. However, current generative models often struggle with challenges such as irregular mesh structures and inconsistencies in multi-view textures. In this study, we present a unified framework for both geometry generation and texture generation, utilizing a novel sparse latent point diffusion model that specifically addresses the geometric aspects of models. Our approach employs point clouds as an efficient intermediate representation, encoding them into sparse latent points with semantically meaningful features for precise geometric control. While the sparse latent points facilitate a high-level control over the geometry, shaping the overall structure and fine details of the meshes, this control does not extend to textures. To address this, we propose a separate texture generation process that integrates multi-view priors post-geometry generation, effectively resolving the issue of multi-view texture inconsistency. This process ensures the production of coherent and high-quality textures that complement the precisely generated meshes, thereby creating visually appealing and detailed models. Our framework distinctively separates the control mechanisms for geometry and texture, leading to significant improvements in the generation of complex, textured 3D content. Evaluations on the ShapeNet dataset for geometry and the Objaverse dataset for textures demonstrate that our model surpasses existing methods in terms of geometric quality, control, and the generation of coherent, high-quality textures.</p> | - |
| dc.language | eng | - |
| dc.publisher | Springer | - |
| dc.relation.ispartof | International Journal of Computer Vision | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | Controllable generation | - |
| dc.subject | Latent diffusion model | - |
| dc.subject | Mesh generation | - |
| dc.subject | Texture generation | - |
| dc.title | SLIDE: A Unified Mesh and Texture Generation Framework with Enhanced Geometric Control and Multi-view Consistency | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1007/s11263-024-02326-x | - |
| dc.identifier.scopus | 2-s2.0-85212835634 | - |
| dc.identifier.volume | 133 | - |
| dc.identifier.spage | 3105 | - |
| dc.identifier.epage | 3128 | - |
| dc.identifier.eissn | 1573-1405 | - |
| dc.identifier.issnl | 0920-5691 | - |
