Conference Paper: Amodal Segmentation Based on Visible Region Segmentation and Shape Prior

Title: Amodal Segmentation Based on Visible Region Segmentation and Shape Prior
Authors: Xiao, Yuting; Xu, Yanyu; Zhong, Ziming; Luo, Weixin; Li, Jiawei; Gao, Shenghua
Issue Date: 2021
Citation: 35th AAAI Conference on Artificial Intelligence, AAAI 2021, 2021, v. 4A, p. 2995-3003
Abstract: Almost all existing amodal segmentation methods infer occluded regions from features of the whole image. This runs counter to human amodal perception, in which a person uses the visible part of the target and prior knowledge of its shape to infer the occluded region. To mimic this behavior and resolve ambiguity in learning, we propose a framework that first estimates a coarse visible mask and a coarse amodal mask. Based on these coarse predictions, our model then infers the amodal mask by concentrating on the visible region and drawing on the shape priors stored in memory. In this way, features corresponding to background and occlusion are suppressed during amodal mask estimation, so the predicted amodal mask is not affected by occlusion when the visible regions are the same. Leveraging shape priors makes amodal mask estimation more robust and reasonable. We evaluate the proposed model on three datasets, where it outperforms existing state-of-the-art methods. Visualization of the shape priors indicates that the category-specific features in the codebook have a degree of interpretability. The code is available at https://github.com/YutingXiao/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior.
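The abstract describes a coarse-to-fine idea: start from the visible region and complete it with a retrieved shape prior. The toy sketch below illustrates only that retrieval-and-union intuition; all function names, the IoU-based retrieval rule, and the binary-mask "codebook" are illustrative assumptions for exposition, not the paper's actual implementation (which learns the priors and operates on features, not binary masks).

```python
import numpy as np

def retrieve_prior(visible_mask, codebook):
    """Pick the codebook shape that best overlaps (by IoU) the visible mask.
    (Illustrative stand-in for the paper's learned memory lookup.)"""
    def iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0
    return max(codebook, key=lambda prior: iou(visible_mask, prior))

def amodal_from_visible(visible_mask, codebook):
    """Amodal estimate = visible region completed by the retrieved shape prior."""
    prior = retrieve_prior(visible_mask, codebook)
    return np.logical_or(visible_mask, prior)

# Toy example: two candidate priors, a full square and a half square.
full = np.ones((4, 4), dtype=bool)
half = np.zeros((4, 4), dtype=bool)
half[:2] = True
# Visible region: the square with its bottom row occluded.
visible = np.zeros((4, 4), dtype=bool)
visible[:3] = True
amodal = amodal_from_visible(visible, [full, half])
```

Because the visible region overlaps the full-square prior more than the half-square one, the full square is retrieved and the occluded bottom row is restored, which mirrors the claim that the same visible region yields the same amodal mask regardless of the occluder.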
Persistent Identifier: http://hdl.handle.net/10722/345135

 

DC Fields:
dc.contributor.author: Xiao, Yuting
dc.contributor.author: Xu, Yanyu
dc.contributor.author: Zhong, Ziming
dc.contributor.author: Luo, Weixin
dc.contributor.author: Li, Jiawei
dc.contributor.author: Gao, Shenghua
dc.date.accessioned: 2024-08-15T09:25:28Z
dc.date.available: 2024-08-15T09:25:28Z
dc.date.issued: 2021
dc.identifier.citation: 35th AAAI Conference on Artificial Intelligence, AAAI 2021, 2021, v. 4A, p. 2995-3003
dc.identifier.uri: http://hdl.handle.net/10722/345135
dc.description.abstract: Almost all existing amodal segmentation methods infer occluded regions from features of the whole image. This runs counter to human amodal perception, in which a person uses the visible part of the target and prior knowledge of its shape to infer the occluded region. To mimic this behavior and resolve ambiguity in learning, we propose a framework that first estimates a coarse visible mask and a coarse amodal mask. Based on these coarse predictions, our model then infers the amodal mask by concentrating on the visible region and drawing on the shape priors stored in memory. In this way, features corresponding to background and occlusion are suppressed during amodal mask estimation, so the predicted amodal mask is not affected by occlusion when the visible regions are the same. Leveraging shape priors makes amodal mask estimation more robust and reasonable. We evaluate the proposed model on three datasets, where it outperforms existing state-of-the-art methods. Visualization of the shape priors indicates that the category-specific features in the codebook have a degree of interpretability. The code is available at https://github.com/YutingXiao/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior.
dc.language: eng
dc.relation.ispartof: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
dc.title: Amodal Segmentation Based on Visible Region Segmentation and Shape Prior
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85110607532
dc.identifier.volume: 4A
dc.identifier.spage: 2995
dc.identifier.epage: 3003
