Article: Pseudo-Plane Regularized Signed Distance Field for Neural Indoor Scene Reconstruction

Title: Pseudo-Plane Regularized Signed Distance Field for Neural Indoor Scene Reconstruction
Authors: Li, Jing; Yu, Jinpeng; Wang, Ruoyu; Gao, Shenghua
Keywords: implicit representation; Neural surface reconstruction; Plane regularized reconstruction; Scene reconstruction
Issue Date: 1-Jan-2025
Publisher: Springer
Citation: International Journal of Computer Vision, 2025, v. 133, n. 6, p. 3203-3221
Abstract: Given only a set of images, neural implicit surface representations have shown strong capability in 3D surface reconstruction. However, because their per-scene optimization relies on volumetric rendering of color, previous neural implicit surface reconstruction methods usually fail in low-textured regions such as floors and walls, which are common in indoor scenes. Observing that these low-textured regions usually correspond to planes, and without introducing additional ground-truth supervision or assumptions about the room layout, we propose a novel Pseudo-plane regularized Signed Distance Field (PPlaneSDF) for indoor scene reconstruction. Specifically, we consider adjacent pixels with similar colors to lie on the same pseudo-planes. The plane parameters are estimated on the fly during training by an efficient and effective two-step scheme, and the signed distances of points on the planes are then regularized by the estimated plane parameters during training. As the unsupervised plane segments are usually noisy and inaccurate, we assign different weights to the sampled points on each plane in both plane estimation and the regularization loss; the weights are obtained by fusing the plane segments from different views. As sampled rays in planar regions are redundant, leading to inefficient training, we further propose a keypoint-guided ray sampling strategy that attends to informative textured regions with large color variations, so the implicit network achieves better reconstruction than with the original uniform ray sampling strategy. Experiments show that our PPlaneSDF achieves competitive reconstruction performance in Manhattan scenes. Further, since we do not introduce any room layout assumption, our PPlaneSDF generalizes well to the reconstruction of non-Manhattan scenes.
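The abstract describes two building blocks: estimating plane parameters from weighted point samples, and regularizing the network's signed distances toward the point-to-plane distance of the estimated plane. The sketch below is a minimal NumPy illustration of those two ideas only; the function names and the weighted least-squares formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fit_plane_weighted(points, weights):
    """Weighted least-squares plane fit (illustrative, not the paper's code).

    Returns a unit normal n and offset d such that n . p + d ~= 0 for
    points p on the plane. Weights can down-weight samples that come from
    noisy, unsupervised plane segments, as the abstract suggests.
    """
    w = weights / weights.sum()
    centroid = (w[:, None] * points).sum(axis=0)
    centered = points - centroid
    # Weighted covariance; the eigenvector of the smallest eigenvalue
    # is the direction of least variance, i.e. the plane normal.
    cov = (w[:, None] * centered).T @ centered
    _, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    n = eigvecs[:, 0]
    d = -n @ centroid
    return n, d

def plane_regularization_loss(sdf_values, points, n, d, weights):
    """Weighted L1 gap between predicted SDF values and the signed
    point-to-plane distance n . p + d induced by the fitted plane."""
    plane_sdf = points @ n + d
    return np.average(np.abs(sdf_values - plane_sdf), weights=weights)
```

For points sampled from a true plane, the fitted normal recovers the plane and the loss vanishes when the SDF predictions match the point-to-plane distances; in training, this term would be added to the usual rendering loss for points inside pseudo-plane segments.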
Persistent Identifier: http://hdl.handle.net/10722/362698
ISSN: 0920-5691
2023 Impact Factor: 11.6
2023 SCImago Journal Rankings: 6.668


DC Field: Value
dc.contributor.author: Li, Jing
dc.contributor.author: Yu, Jinpeng
dc.contributor.author: Wang, Ruoyu
dc.contributor.author: Gao, Shenghua
dc.date.accessioned: 2025-09-26T00:37:02Z
dc.date.available: 2025-09-26T00:37:02Z
dc.date.issued: 2025-01-01
dc.identifier.citation: International Journal of Computer Vision, 2025, v. 133, n. 6, p. 3203-3221
dc.identifier.issn: 0920-5691
dc.identifier.uri: http://hdl.handle.net/10722/362698
dc.language: eng
dc.publisher: Springer
dc.relation.ispartof: International Journal of Computer Vision
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: implicit representation
dc.subject: Neural surface reconstruction
dc.subject: Plane regularized reconstruction
dc.subject: Scene reconstruction
dc.title: Pseudo-Plane Regularized Signed Distance Field for Neural Indoor Scene Reconstruction
dc.type: Article
dc.identifier.doi: 10.1007/s11263-024-02319-w
dc.identifier.scopus: eid_2-s2.0-85213697110
dc.identifier.volume: 133
dc.identifier.issue: 6
dc.identifier.spage: 3203
dc.identifier.epage: 3221
dc.identifier.eissn: 1573-1405
dc.identifier.issnl: 0920-5691