Article: Fully Convolutional Networks for Panoptic Segmentation with Point-Based Supervision

Title: Fully Convolutional Networks for Panoptic Segmentation with Point-Based Supervision
Authors: Li, Yanwei; Zhao, Hengshuang; Qi, Xiaojuan; Chen, Yukang; Qi, Lu; Wang, Liwei; Li, Zeming; Sun, Jian; Jia, Jiaya
Keywords: Fully convolutional networks; panoptic segmentation; point-based supervision; unified representation
Issue Date: 2023
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, v. 45, n. 4, p. 4552-4568
Abstract: In this paper, we present a conceptually simple, strong, and efficient framework for fully- and weakly-supervised panoptic segmentation, called Panoptic FCN. Our approach aims to represent and predict foreground things and background stuff in a unified fully convolutional pipeline, which can be optimized with either full supervision or point-based weak supervision. In particular, Panoptic FCN encodes each object instance or stuff category with the proposed kernel generator and produces the prediction by directly convolving the high-resolution feature. With this approach, the instance-aware and semantically consistent properties required for things and stuff, respectively, are satisfied in a simple generate-kernel-then-segment workflow. Without extra boxes for localization or instance separation, the proposed approach outperforms previous box-based and box-free models with high efficiency. Furthermore, we propose a new form of point-based annotation for weakly-supervised panoptic segmentation. It needs only a few random points for both things and stuff, which dramatically reduces the human annotation cost. Panoptic FCN also performs strongly in this weakly-supervised setting, achieving 82% of the fully-supervised performance with only 20 randomly annotated points per instance. Extensive experiments demonstrate the effectiveness and efficiency of Panoptic FCN on the COCO, VOC 2012, Cityscapes, and Mapillary Vistas datasets, and it sets a new leading benchmark for both fully- and weakly-supervised panoptic segmentation.
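
To make the generate-kernel-then-segment workflow concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: the names (KernelGenerator, segment), the tensor shapes, and the use of predicted center locations are assumptions, and details such as center localization and kernel fusion are omitted.

import torch
import torch.nn as nn

class KernelGenerator(nn.Module):
    """Predicts one kernel per object instance or stuff category (illustrative)."""
    def __init__(self, in_channels: int, embed_dim: int):
        super().__init__()
        # A single conv head standing in for the paper's kernel generator.
        self.kernel_head = nn.Conv2d(in_channels, embed_dim, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
        # feat:    (B, C, H, W) backbone feature map
        # centers: (B, N, 2) integer (y, x) locations of N predicted centers
        kernel_map = self.kernel_head(feat)               # (B, E, H, W)
        b_idx = torch.arange(feat.size(0)).unsqueeze(1)   # (B, 1), broadcasts to (B, N)
        # Gather one E-dimensional kernel per center location.
        return kernel_map[b_idx, :, centers[..., 0], centers[..., 1]]  # (B, N, E)

def segment(kernels: torch.Tensor, hi_res_feat: torch.Tensor) -> torch.Tensor:
    # kernels: (B, N, E); hi_res_feat: (B, E, H, W) shared high-resolution feature.
    # Convolving each generated kernel with the feature is a dynamic 1x1 convolution,
    # expressible as a single einsum; each output channel is one thing/stuff mask.
    return torch.einsum("bne,behw->bnhw", kernels, hi_res_feat).sigmoid()

Because every mask is produced by the same convolutional pipeline, things and stuff are handled uniformly, without boxes for localization or instance separation.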
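The point-based weak supervision can be sketched in the same spirit: the mask loss is evaluated only at the annotated pixel coordinates (e.g. the 20 random points per instance mentioned above) rather than over a full ground-truth mask. The function name, the sampling format, and the choice of binary cross-entropy are assumptions for illustration.

import torch
import torch.nn.functional as F

def point_supervised_loss(pred_masks: torch.Tensor,
                          point_coords: torch.Tensor,
                          point_labels: torch.Tensor) -> torch.Tensor:
    # pred_masks:   (N, H, W) predicted mask logits, one per thing/stuff target
    # point_coords: (N, P, 2) integer (y, x) of the P annotated points (e.g. P = 20)
    # point_labels: (N, P) 1 if the point lies inside the target mask, else 0
    n_idx = torch.arange(pred_masks.size(0)).unsqueeze(1)  # (N, 1)
    logits = pred_masks[n_idx, point_coords[..., 0], point_coords[..., 1]]  # (N, P)
    # Supervise only the sampled points; all other pixels contribute no gradient.
    return F.binary_cross_entropy_with_logits(logits, point_labels.float())
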
Persistent Identifier: http://hdl.handle.net/10722/333554
ISSN: 0162-8828
2021 Impact Factor: 24.314
2020 SCImago Journal Rankings: 3.811

DC Field | Value | Language
dc.contributor.author | Li, Yanwei | -
dc.contributor.author | Zhao, Hengshuang | -
dc.contributor.author | Qi, Xiaojuan | -
dc.contributor.author | Chen, Yukang | -
dc.contributor.author | Qi, Lu | -
dc.contributor.author | Wang, Liwei | -
dc.contributor.author | Li, Zeming | -
dc.contributor.author | Sun, Jian | -
dc.contributor.author | Jia, Jiaya | -
dc.date.accessioned | 2023-10-06T05:20:29Z | -
dc.date.available | 2023-10-06T05:20:29Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, v. 45, n. 4, p. 4552-4568 | -
dc.identifier.issn | 0162-8828 | -
dc.identifier.uri | http://hdl.handle.net/10722/333554 | -
dc.description.abstract | In this paper, we present a conceptually simple, strong, and efficient framework for fully- and weakly-supervised panoptic segmentation, called Panoptic FCN. Our approach aims to represent and predict foreground things and background stuff in a unified fully convolutional pipeline, which can be optimized with either full supervision or point-based weak supervision. In particular, Panoptic FCN encodes each object instance or stuff category with the proposed kernel generator and produces the prediction by directly convolving the high-resolution feature. With this approach, the instance-aware and semantically consistent properties required for things and stuff, respectively, are satisfied in a simple generate-kernel-then-segment workflow. Without extra boxes for localization or instance separation, the proposed approach outperforms previous box-based and box-free models with high efficiency. Furthermore, we propose a new form of point-based annotation for weakly-supervised panoptic segmentation. It needs only a few random points for both things and stuff, which dramatically reduces the human annotation cost. Panoptic FCN also performs strongly in this weakly-supervised setting, achieving 82% of the fully-supervised performance with only 20 randomly annotated points per instance. Extensive experiments demonstrate the effectiveness and efficiency of Panoptic FCN on the COCO, VOC 2012, Cityscapes, and Mapillary Vistas datasets, and it sets a new leading benchmark for both fully- and weakly-supervised panoptic segmentation. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | -
dc.subject | Fully convolutional networks | -
dc.subject | panoptic segmentation | -
dc.subject | point-based supervision | -
dc.subject | unified representation | -
dc.title | Fully Convolutional Networks for Panoptic Segmentation with Point-Based Supervision | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TPAMI.2022.3200416 | -
dc.identifier.pmid | 35994543 | -
dc.identifier.scopus | eid_2-s2.0-85137574549 | -
dc.identifier.volume | 45 | -
dc.identifier.issue | 4 | -
dc.identifier.spage | 4552 | -
dc.identifier.epage | 4568 | -
dc.identifier.eissn | 1939-3539 | -
