Article: DebSDF: Delving into the Details and Bias of Neural Indoor Scene Reconstruction

Title: DebSDF: Delving into the Details and Bias of Neural Indoor Scene Reconstruction
Authors: Xiao, Yuting; Xu, Jingwei; Yu, Zehao; Gao, Shenghua
Keywords: Bias-aware SDF to Density Transformation
Geometry
Image reconstruction
Implicit Representation
Indoor Scenes Reconstruction
Multi-view Reconstruction
Optimization
Rendering (computer graphics)
Surface reconstruction
Three-dimensional displays
Uncertainty
Uncertainty Learning
Issue Date: 2024
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024
Abstract: In recent years, the neural implicit surface has emerged as a powerful representation for multi-view surface reconstruction due to its simplicity and state-of-the-art performance. However, reconstructing smooth and detailed surfaces in indoor scenes from multi-view images presents unique challenges. Indoor scenes typically contain large texture-less regions, making the photometric loss unreliable for optimizing the implicit surface. Previous work utilizes monocular geometry priors to improve the reconstruction in indoor scenes. However, monocular priors often contain substantial errors in thin structure regions due to domain gaps and the inherent inconsistencies when derived independently from different views. This paper presents DebSDF to address these challenges, focusing on the utilization of uncertainty in monocular priors and the bias in SDF-based volume rendering. We propose an uncertainty modeling technique that associates larger uncertainties with larger errors in the monocular priors. High-uncertainty priors are then excluded from optimization to prevent bias. This uncertainty measure also informs an importance-guided ray sampling and adaptive smoothness regularization, enhancing the learning of fine structures. We further introduce a bias-aware signed distance function to density transformation that takes into account the curvature and the angle between the view direction and the SDF normals to reconstruct fine details better. Our approach has been validated through extensive experiments on several challenging datasets, demonstrating improved qualitative and quantitative results in reconstructing thin structures in indoor scenes, thereby outperforming previous work.
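For context, the SDF-to-density transformation the abstract refers to builds on the standard Laplace-CDF mapping commonly used in SDF-based volume rendering, which the paper identifies as a source of bias near fine structures. Below is a minimal NumPy sketch of that baseline mapping only; the function name and default parameters are illustrative, and DebSDF's bias-aware variant (which additionally conditions on local curvature and the angle between the view direction and the SDF normal) is not reproduced here.

```python
import numpy as np

def sdf_to_density(sdf, alpha=1.0, beta=0.1):
    """Baseline Laplace-CDF transformation from signed distance to
    volume density (VolSDF-style). Points well inside the surface
    (sdf < 0) approach density alpha; points well outside approach 0;
    beta controls how sharply density falls off across the surface."""
    s = -sdf  # flip sign so s > 0 inside the surface
    # Laplace CDF evaluated at s, computed piecewise for stability
    psi = np.where(s <= 0,
                   0.5 * np.exp(s / beta),
                   1.0 - 0.5 * np.exp(-s / beta))
    return alpha * psi
```

A smaller `beta` concentrates density in a thinner shell around the zero level set; the paper's contribution is to make this mapping adaptive rather than fixed, reducing the reconstruction bias this fixed form introduces at thin structures.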
Persistent Identifier: http://hdl.handle.net/10722/345389
ISSN: 0162-8828
2023 Impact Factor: 20.8
2023 SCImago Journal Rankings: 6.158

 

DC Field / Value
dc.contributor.author: Xiao, Yuting
dc.contributor.author: Xu, Jingwei
dc.contributor.author: Yu, Zehao
dc.contributor.author: Gao, Shenghua
dc.date.accessioned: 2024-08-15T09:27:02Z
dc.date.available: 2024-08-15T09:27:02Z
dc.date.issued: 2024
dc.identifier.citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024
dc.identifier.issn: 0162-8828
dc.identifier.uri: http://hdl.handle.net/10722/345389
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.title: DebSDF: Delving into the Details and Bias of Neural Indoor Scene Reconstruction
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TPAMI.2024.3414441
dc.identifier.scopus: eid_2-s2.0-85196496220
dc.identifier.eissn: 1939-3539
