Conference Paper: Deep Dual Learning for Semantic Image Segmentation

Title: Deep Dual Learning for Semantic Image Segmentation
Authors: Luo, Ping; Wang, Guangrun; Lin, Liang; Wang, Xiaogang
Issue Date: 2017
Citation: Proceedings of the IEEE International Conference on Computer Vision, 2017, v. 2017-October, p. 2737-2745
Abstract: © 2017 IEEE. Deep neural networks have advanced many computer vision tasks because of their compelling capacity to learn from large amounts of labeled data. However, their performance is not fully exploited in semantic image segmentation, where the scale of the training set is limited because per-pixel labelmaps are expensive to obtain. To reduce labeling effort, a natural solution is to collect additional images from the Internet that are associated with image-level tags. Unlike existing works that treated labelmaps and tags as independent supervision, we present a novel learning setting, namely dual image segmentation (DIS), which consists of two complementary learning problems that are jointly solved. One predicts labelmaps and tags from images, and the other reconstructs the images using the predicted labelmaps. DIS has three appealing properties. 1) Given an image with tags only, its labelmap can be inferred by leveraging the images and tags as constraints. The estimated labelmaps, which capture accurate object classes and boundaries, are used as ground truths in training to boost performance. 2) DIS is able to clean noisy tags. 3) DIS significantly reduces the number of per-pixel annotations in training while still achieving state-of-the-art performance. Extensive experiments demonstrate the effectiveness of DIS, which outperforms the best-performing existing baseline by 12.6% on the Pascal VOC 2012 test set, without any post-processing such as CRF/MRF smoothing.
Persistent Identifier: http://hdl.handle.net/10722/273609
ISSN: 1550-5499

 

DC Field                  Value
dc.contributor.author     Luo, Ping
dc.contributor.author     Wang, Guangrun
dc.contributor.author     Lin, Liang
dc.contributor.author     Wang, Xiaogang
dc.date.accessioned       2019-08-12T09:56:08Z
dc.date.available         2019-08-12T09:56:08Z
dc.date.issued            2017
dc.identifier.citation    Proceedings of the IEEE International Conference on Computer Vision, 2017, v. 2017-October, p. 2737-2745
dc.identifier.issn        1550-5499
dc.identifier.uri         http://hdl.handle.net/10722/273609
dc.description.abstract   © 2017 IEEE. Deep neural networks have advanced many computer vision tasks because of their compelling capacity to learn from large amounts of labeled data. However, their performance is not fully exploited in semantic image segmentation, where the scale of the training set is limited because per-pixel labelmaps are expensive to obtain. To reduce labeling effort, a natural solution is to collect additional images from the Internet that are associated with image-level tags. Unlike existing works that treated labelmaps and tags as independent supervision, we present a novel learning setting, namely dual image segmentation (DIS), which consists of two complementary learning problems that are jointly solved. One predicts labelmaps and tags from images, and the other reconstructs the images using the predicted labelmaps. DIS has three appealing properties. 1) Given an image with tags only, its labelmap can be inferred by leveraging the images and tags as constraints. The estimated labelmaps, which capture accurate object classes and boundaries, are used as ground truths in training to boost performance. 2) DIS is able to clean noisy tags. 3) DIS significantly reduces the number of per-pixel annotations in training while still achieving state-of-the-art performance. Extensive experiments demonstrate the effectiveness of DIS, which outperforms the best-performing existing baseline by 12.6% on the Pascal VOC 2012 test set, without any post-processing such as CRF/MRF smoothing.
dc.language               eng
dc.relation.ispartof      Proceedings of the IEEE International Conference on Computer Vision
dc.title                  Deep Dual Learning for Semantic Image Segmentation
dc.type                   Conference_Paper
dc.description.nature     link_to_subscribed_fulltext
dc.identifier.doi         10.1109/ICCV.2017.296
dc.identifier.scopus      eid_2-s2.0-85041917428
dc.identifier.volume      2017-October
dc.identifier.spage       2737
dc.identifier.epage       2745
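The dual setting described in the abstract (one branch predicting labelmaps and tags from an image, a second branch reconstructing the image from the predicted labelmap, trained jointly) can be sketched as a combined objective. The sketch below is a minimal illustration under loud assumptions: the "networks" are random linear maps over a 4x4 toy image, and the loss weighting is arbitrary; it is not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two networks (illustrative assumption, not the paper's model):
# a 4x4 grayscale image is a 16-vector; the labelmap has 2 classes per pixel.
W_seg = rng.normal(size=(16, 2 * 16))  # segmentation branch: image -> per-pixel logits
W_rec = rng.normal(size=(2 * 16, 16))  # reconstruction branch: soft labelmap -> image

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict_labelmap(image):
    """Segmentation branch: soft per-pixel class distribution, shape (16, 2)."""
    return softmax((image @ W_seg).reshape(16, 2))

def reconstruct(soft_labelmap):
    """Reconstruction branch: rebuild the 16-pixel image from the soft labelmap."""
    return soft_labelmap.reshape(-1) @ W_rec

def dis_losses(image, gt_labelmap=None):
    """Joint DIS-style objective: per-pixel cross-entropy when ground-truth
    labels exist, plus a reconstruction loss that also supervises tag-only
    images (for which the segmentation loss is unavailable)."""
    soft = predict_labelmap(image)
    rec_loss = float(np.mean((reconstruct(soft) - image) ** 2))
    seg_loss = None
    if gt_labelmap is not None:
        seg_loss = float(-np.mean(np.log(soft[np.arange(16), gt_labelmap] + 1e-9)))
    return seg_loss, rec_loss

image = rng.normal(size=16)
labels = rng.integers(0, 2, size=16)

seg_loss, rec_loss = dis_losses(image, labels)  # fully annotated image: both losses
seg_only, rec_only = dis_losses(image)          # tag-only image: reconstruction only
```

The point of the dual structure is visible in the last line: an image without per-pixel annotation still contributes a training signal through the reconstruction branch, which is how DIS exploits cheap image-level supervision.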
