Conference Paper: EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion

Title: EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion
Authors: Huang, Zehuan; Wen, Hao; Dong, Junting; Wang, Yaohui; Li, Yangguang; Chen, Xinyuan; Cao, Yan Pei; Liang, Ding; Qiao, Yu; Dai, Bo; Sheng, Lu
Keywords: 3D generation; Image-to-3D; Multiview generation
Issue Date: 2024
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2024, p. 9784-9794
Abstract: Generating multiview images from a single view facilitates the rapid generation of a 3D mesh conditioned on a single image. Recent methods [31] that introduce 3D global representation into diffusion models have shown the potential to generate consistent multiviews, but they have reduced generation speed and face challenges in maintaining generalizability and quality. To address this issue, we propose EpiDiff, a localized interactive multiview diffusion model. At the core of the proposed approach is to insert a lightweight epipolar attention block into the frozen diffusion model, leveraging epipolar constraints to enable cross-view interaction among feature maps of neighboring views. The newly initialized 3D modeling module preserves the original feature distribution of the diffusion model, exhibiting compatibility with a variety of base diffusion models. Experiments show that EpiDiff generates 16 multiview images in just 12 seconds, and it surpasses previous methods in quality evaluation metrics, including PSNR, SSIM and LPIPS. Additionally, EpiDiff can generate a more diverse distribution of views, improving the reconstruction quality from generated multiviews. Please see the project page at huanngzh.github.io/EpiDiff/.
Persistent Identifier: http://hdl.handle.net/10722/352428
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331
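
To make the mechanism described in the abstract concrete, the following is a minimal sketch (PyTorch) of a localized epipolar attention block in the spirit of the paper, not the authors' implementation: each pixel of a view attends only to features sampled along its epipolar lines in neighboring views, and a zero-initialized output projection makes the residual block start as an identity, consistent with the abstract's claim that the new module preserves the frozen base model's feature distribution. The class name, tensor layout, and the assumption that epipolar features are precomputed are all illustrative assumptions.

    # Illustrative sketch only; "EpipolarAttention" and its tensor layout
    # are hypothetical, and epipolar sampling is assumed precomputed.
    import torch
    import torch.nn as nn

    class EpipolarAttention(nn.Module):
        """Each pixel of one view attends to M features sampled along its
        epipolar lines in neighboring views (localized cross-view attention)."""
        def __init__(self, dim: int, num_heads: int = 8):
            super().__init__()
            self.norm = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            # Zero-initialized output projection: the residual block starts as
            # an identity, preserving the frozen model's feature distribution.
            self.out = nn.Linear(dim, dim)
            nn.init.zeros_(self.out.weight)
            nn.init.zeros_(self.out.bias)

        def forward(self, feat: torch.Tensor, epi_feats: torch.Tensor) -> torch.Tensor:
            # feat:      (B, N, C)    feature tokens of one view (N = H*W)
            # epi_feats: (B, N, M, C) features sampled on the epipolar lines
            #                         in neighboring views, per query token
            B, N, C = feat.shape
            M = epi_feats.shape[2]
            q = self.norm(feat).reshape(B * N, 1, C)   # one query per pixel
            kv = epi_feats.reshape(B * N, M, C)        # its M epipolar samples
            out, _ = self.attn(q, kv, kv)
            return feat + self.out(out.reshape(B, N, C))  # residual update

    # Usage sketch: 16x16 feature maps, 2 neighbor views, 8 samples per line.
    block = EpipolarAttention(dim=320)
    feat = torch.randn(4, 256, 320)
    epi_feats = torch.randn(4, 256, 2 * 8, 320)
    print(block(feat, epi_feats).shape)  # torch.Size([4, 256, 320])

Restricting keys and values to a handful of epipolar samples per pixel keeps the attention cost linear in the number of samples rather than quadratic in the pixels of all views, which is consistent with the speed claim in the abstract.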

 

DC Field: Value
dc.contributor.author: Huang, Zehuan
dc.contributor.author: Wen, Hao
dc.contributor.author: Dong, Junting
dc.contributor.author: Wang, Yaohui
dc.contributor.author: Li, Yangguang
dc.contributor.author: Chen, Xinyuan
dc.contributor.author: Cao, Yan Pei
dc.contributor.author: Liang, Ding
dc.contributor.author: Qiao, Yu
dc.contributor.author: Dai, Bo
dc.contributor.author: Sheng, Lu
dc.date.accessioned: 2024-12-16T03:58:53Z
dc.date.available: 2024-12-16T03:58:53Z
dc.date.issued: 2024
dc.identifier.citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2024, p. 9784-9794
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/352428
dc.description.abstract: Generating multiview images from a single view facilitates the rapid generation of a 3D mesh conditioned on a single image. Recent methods [31] that introduce 3D global representation into diffusion models have shown the potential to generate consistent multiviews, but they have reduced generation speed and face challenges in maintaining generalizability and quality. To address this issue, we propose EpiDiff, a localized interactive multiview diffusion model. At the core of the proposed approach is to insert a lightweight epipolar attention block into the frozen diffusion model, leveraging epipolar constraints to enable cross-view interaction among feature maps of neighboring views. The newly initialized 3D modeling module preserves the original feature distribution of the diffusion model, exhibiting compatibility with a variety of base diffusion models. Experiments show that EpiDiff generates 16 multiview images in just 12 seconds, and it surpasses previous methods in quality evaluation metrics, including PSNR, SSIM and LPIPS. Additionally, EpiDiff can generate a more diverse distribution of views, improving the reconstruction quality from generated multiviews. Please see the project page at huanngzh.github.io/EpiDiff/.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.subject: 3D generation
dc.subject: Image-to-3D
dc.subject: Multiview generation
dc.title: EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR52733.2024.00934
dc.identifier.scopus: eid_2-s2.0-85191044048
dc.identifier.spage: 9784
dc.identifier.epage: 9794
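
For completeness, the epipolar constraint that the abstract (repeated in dc.description.abstract above) leverages can be sketched with standard two-view geometry (Hartley and Zisserman): a pixel x1 in one view maps to a line l = F x1 in a neighboring view, along which features would be sampled. This is textbook geometry, not code from the paper; the helper names are hypothetical and edge cases (vertical lines, out-of-bounds samples) are ignored.

    # Standard two-view epipolar geometry (NumPy); helper names are
    # hypothetical illustrations, not APIs from the paper.
    import numpy as np

    def skew(t: np.ndarray) -> np.ndarray:
        """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
        return np.array([[0.0, -t[2], t[1]],
                         [t[2], 0.0, -t[0]],
                         [-t[1], t[0], 0.0]])

    def fundamental_matrix(K1, K2, R, t):
        """F with x2^T F x1 = 0 for cameras P1 = K1 [I|0], P2 = K2 [R|t]."""
        return np.linalg.inv(K2).T @ skew(t) @ R @ np.linalg.inv(K1)

    def epipolar_samples(F, x1, width, n=8):
        """n pixel locations on the epipolar line of pixel x1 = (u, v) in view 2."""
        a, b, c = F @ np.array([x1[0], x1[1], 1.0])  # line: a*u + b*v + c = 0
        us = np.linspace(0.0, width - 1.0, n)
        vs = -(a * us + c) / b                       # assumes b != 0 (non-vertical line)
        return np.stack([us, vs], axis=1)            # (n, 2) sample coordinates

Features interpolated at such locations in each neighboring view would form the epi_feats tensor consumed by the attention sketch given after the record summary above.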
