Conference Paper: AssetField: Assets Mining and Reconfiguration in Ground Feature Plane Representation

Title: AssetField: Assets Mining and Reconfiguration in Ground Feature Plane Representation
Authors: Xiangli, Yuanbo; Xu, Linning; Pan, Xingang; Zhao, Nanxuan; Dai, Bo; Lin, Dahua
Issue Date: 2023
Citation: Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 3228-3238
Abstract: Both indoor and outdoor environments are inherently structured and repetitive. Traditional modeling pipelines keep an asset library storing unique object templates, which is both versatile and memory efficient in practice. Inspired by this observation, we propose AssetField, a novel neural scene representation that learns a set of object-aware ground feature planes to represent the scene, where an asset library storing template feature patches can be constructed in an unsupervised manner. Unlike existing methods which require object masks to query spatial points for object editing, our ground feature plane representation offers a natural visualization of the scene in the bird's-eye view, allowing a variety of operations (e.g. translation, duplication, deformation) on objects to configure a new scene. With the template feature patches, group editing is enabled for scenes with many recurring items to avoid repetitive work on individual objects. We show that AssetField not only achieves competitive performance for novel-view synthesis but also generates realistic renderings for new scene configurations.
Persistent Identifier: http://hdl.handle.net/10722/352380
ISSN: 1550-5499
2023 SCImago Journal Rankings: 12.263


DC Field | Value | Language
dc.contributor.author | Xiangli, Yuanbo | -
dc.contributor.author | Xu, Linning | -
dc.contributor.author | Pan, Xingang | -
dc.contributor.author | Zhao, Nanxuan | -
dc.contributor.author | Dai, Bo | -
dc.contributor.author | Lin, Dahua | -
dc.date.accessioned | 2024-12-16T03:58:34Z | -
dc.date.available | 2024-12-16T03:58:34Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 3228-3238 | -
dc.identifier.issn | 1550-5499 | -
dc.identifier.uri | http://hdl.handle.net/10722/352380 | -
dc.description.abstract | Both indoor and outdoor environments are inherently structured and repetitive. Traditional modeling pipelines keep an asset library storing unique object templates, which is both versatile and memory efficient in practice. Inspired by this observation, we propose AssetField, a novel neural scene representation that learns a set of object-aware ground feature planes to represent the scene, where an asset library storing template feature patches can be constructed in an unsupervised manner. Unlike existing methods which require object masks to query spatial points for object editing, our ground feature plane representation offers a natural visualization of the scene in the bird's-eye view, allowing a variety of operations (e.g. translation, duplication, deformation) on objects to configure a new scene. With the template feature patches, group editing is enabled for scenes with many recurring items to avoid repetitive work on individual objects. We show that AssetField not only achieves competitive performance for novel-view synthesis but also generates realistic renderings for new scene configurations. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | -
dc.title | AssetField: Assets Mining and Reconfiguration in Ground Feature Plane Representation | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/ICCV51070.2023.00301 | -
dc.identifier.scopus | eid_2-s2.0-85169019940 | -
dc.identifier.spage | 3228 | -
dc.identifier.epage | 3238 | -

Export via OAI-PMH Interface in XML Formats
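As the record notes, the metadata above can be harvested via an OAI-PMH interface. The sketch below builds a standard OAI-PMH `GetRecord` query for this item. The endpoint URL and the `oai:<host>:<handle>` identifier scheme are assumptions based on common DSpace conventions; the repository's actual base URL and identifier prefix may differ.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical OAI-PMH endpoint; DSpace installations commonly expose
# one at <repository-root>/oai/request, but this is not confirmed here.
OAI_BASE = "https://hub.hku.hk/oai/request"


def oai_get_record_url(handle: str, metadata_prefix: str = "oai_dc") -> str:
    """Build an OAI-PMH GetRecord query URL for a DSpace item handle.

    DSpace conventionally maps a handle like 10722/352380 to the OAI
    identifier oai:<host>:<handle>; that mapping is an assumption.
    """
    params = {
        "verb": "GetRecord",
        "identifier": f"oai:hub.hku.hk:{handle}",
        "metadataPrefix": metadata_prefix,
    }
    return f"{OAI_BASE}?{urlencode(params)}"


# Handle taken from the Persistent Identifier field of this record.
url = oai_get_record_url("10722/352380")
print(url)
```

Fetching that URL (e.g. with `urllib.request` or `curl`) would return the Dublin Core fields shown in the table above as `oai_dc` XML, assuming the endpoint exists at that location.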