Links for fulltext (may require subscription)
- Publisher website (DOI): 10.1109/TPAMI.2022.3200416
- Scopus: eid_2-s2.0-85137574549
- PMID: 35994543
- WOS: WOS:000947840300037
Article: Fully Convolutional Networks for Panoptic Segmentation with Point-Based Supervision
Title | Fully Convolutional Networks for Panoptic Segmentation with Point-Based Supervision |
---|---|
Authors | Li, Yanwei; Zhao, Hengshuang; Qi, Xiaojuan; Chen, Yukang; Qi, Lu; Wang, Liwei; Li, Zeming; Sun, Jian; Jia, Jiaya |
Keywords | Fully convolutional networks; panoptic segmentation; point-based supervision; unified representation |
Issue Date | 2023 |
Citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, v. 45, n. 4, p. 4552-4568 |
Abstract | In this paper, we present a conceptually simple, strong, and efficient framework for fully- and weakly-supervised panoptic segmentation, called Panoptic FCN. Our approach aims to represent and predict foreground things and background stuff in a unified fully convolutional pipeline, which can be optimized with full supervision or point-based weak supervision. In particular, Panoptic FCN encodes each object instance or stuff category with the proposed kernel generator and produces the prediction by directly convolving the high-resolution feature. In this way, instance-aware and semantically consistent properties for things and stuff, respectively, are satisfied in a simple generate-kernel-then-segment workflow. Without extra boxes for localization or instance separation, the proposed approach outperforms previous box-based and box-free models with high efficiency. Furthermore, we propose a new form of point-based annotation for weakly-supervised panoptic segmentation that needs only a few random points for both things and stuff, dramatically reducing human annotation cost. Panoptic FCN also proves markedly superior in this weakly-supervised setting, achieving 82% of the fully-supervised performance with only 20 randomly annotated points per instance. Extensive experiments demonstrate the effectiveness and efficiency of Panoptic FCN on the COCO, VOC 2012, Cityscapes, and Mapillary Vistas datasets, and it establishes a new leading benchmark for both fully- and weakly-supervised panoptic segmentation. |
Persistent Identifier | http://hdl.handle.net/10722/333554 |
ISSN | 0162-8828 (2023 Impact Factor: 20.8; 2023 SCImago Journal Rank: 6.158) |
ISI Accession Number ID | WOS:000947840300037 |
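The abstract's "generate-kernel-then-segment" workflow reduces, at inference time, to convolving each generated kernel with a shared high-resolution feature map (a 1x1 convolution per predicted thing instance or stuff category). The following is a minimal NumPy sketch of that final step only; the function name, shapes, and variable names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def generate_kernel_then_segment(feature, kernels):
    """Apply each generated kernel to the shared feature map.

    feature: (C, H, W) high-resolution feature map
    kernels: (N, C) one kernel per predicted thing instance / stuff category
    returns: (N, H, W) per-instance / per-category segmentation logits
    """
    # Each kernel dot-products the C-dim feature vector at every pixel.
    # This is exactly a 1x1 convolution, producing one mask per kernel
    # in a single pass over the shared feature.
    return np.einsum('nc,chw->nhw', kernels, feature)
```

Because every mask is produced from the same feature map, adding instances only adds kernels, which is the efficiency argument the abstract makes against box-based localization.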
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, Yanwei | - |
dc.contributor.author | Zhao, Hengshuang | - |
dc.contributor.author | Qi, Xiaojuan | - |
dc.contributor.author | Chen, Yukang | - |
dc.contributor.author | Qi, Lu | - |
dc.contributor.author | Wang, Liwei | - |
dc.contributor.author | Li, Zeming | - |
dc.contributor.author | Sun, Jian | - |
dc.contributor.author | Jia, Jiaya | - |
dc.date.accessioned | 2023-10-06T05:20:29Z | - |
dc.date.available | 2023-10-06T05:20:29Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, v. 45, n. 4, p. 4552-4568 | - |
dc.identifier.issn | 0162-8828 | - |
dc.identifier.uri | http://hdl.handle.net/10722/333554 | - |
dc.description.abstract | In this paper, we present a conceptually simple, strong, and efficient framework for fully- and weakly-supervised panoptic segmentation, called Panoptic FCN. Our approach aims to represent and predict foreground things and background stuff in a unified fully convolutional pipeline, which can be optimized with full supervision or point-based weak supervision. In particular, Panoptic FCN encodes each object instance or stuff category with the proposed kernel generator and produces the prediction by directly convolving the high-resolution feature. In this way, instance-aware and semantically consistent properties for things and stuff, respectively, are satisfied in a simple generate-kernel-then-segment workflow. Without extra boxes for localization or instance separation, the proposed approach outperforms previous box-based and box-free models with high efficiency. Furthermore, we propose a new form of point-based annotation for weakly-supervised panoptic segmentation that needs only a few random points for both things and stuff, dramatically reducing human annotation cost. Panoptic FCN also proves markedly superior in this weakly-supervised setting, achieving 82% of the fully-supervised performance with only 20 randomly annotated points per instance. Extensive experiments demonstrate the effectiveness and efficiency of Panoptic FCN on the COCO, VOC 2012, Cityscapes, and Mapillary Vistas datasets, and it establishes a new leading benchmark for both fully- and weakly-supervised panoptic segmentation. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | - |
dc.subject | Fully convolutional networks | - |
dc.subject | panoptic segmentation | - |
dc.subject | point-based supervision | - |
dc.subject | unified representation | - |
dc.title | Fully Convolutional Networks for Panoptic Segmentation with Point-Based Supervision | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TPAMI.2022.3200416 | - |
dc.identifier.pmid | 35994543 | - |
dc.identifier.scopus | eid_2-s2.0-85137574549 | - |
dc.identifier.volume | 45 | - |
dc.identifier.issue | 4 | - |
dc.identifier.spage | 4552 | - |
dc.identifier.epage | 4568 | - |
dc.identifier.eissn | 1939-3539 | - |
dc.identifier.isi | WOS:000947840300037 | - |