Conference Paper: SegStereo: Exploiting semantic information for disparity estimation

Title: SegStereo: Exploiting semantic information for disparity estimation
Authors: Yang, Guorun; Zhao, Hengshuang; Shi, Jianping; Deng, Zhidong; Jia, Jiaya
Keywords: Softmax loss regularization; Disparity estimation; Semantic cues; Semantic feature embedding
Issue Date: 2018
Publisher: Springer
Citation: 15th European Conference on Computer Vision (ECCV 2018), Munich, Germany, 8-14 September 2018. In Ferrari, V, Hebert, M, Sminchisescu, C, Weiss, Y (Eds.), Computer Vision – ECCV 2018: 15th European Conference, Munich, Germany, September 8–14, 2018, Proceedings, Part VII, p. 660-676. Cham: Springer, 2018
Abstract: Disparity estimation for binocular stereo images finds a wide range of applications. Traditional algorithms may fail on featureless regions, which could be handled by high-level clues such as semantic segments. In this paper, we suggest that appropriate incorporation of semantic cues can greatly rectify prediction in commonly-used disparity estimation frameworks. Our method conducts semantic feature embedding and regularizes semantic cues as the loss term to improve learning disparity. Our unified model SegStereo employs semantic features from segmentation and introduces semantic softmax loss, which helps improve the prediction accuracy of disparity maps. The semantic cues work well in both unsupervised and supervised manners. SegStereo achieves state-of-the-art results on KITTI Stereo benchmark and produces decent prediction on both CityScapes and FlyingThings3D datasets.
Persistent Identifier: http://hdl.handle.net/10722/303584
ISBN: 9783030012335
ISSN: 0302-9743
2020 SCImago Journal Rankings: 0.249
ISI Accession Number ID: WOS:000594221500039
Series/Report no.: Lecture Notes in Computer Science ; 11211
Image Processing, Computer Vision, Pattern Recognition, and Graphics ; 11211
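
The abstract above describes regularizing disparity learning with a semantic softmax loss applied alongside a disparity term over shared features. Below is a minimal sketch of such a combined objective for the supervised setting; it is an illustration, not the authors' implementation: the function name, the smooth-L1 choice for the disparity term, the loss weight lambda_sem, and the tensor shapes are all assumptions.

# Illustrative sketch (not the SegStereo release code): a supervised disparity
# regression loss combined with a semantic softmax (cross-entropy) term that
# regularizes the shared features, following the description in the abstract.
import torch
import torch.nn.functional as F

def segstereo_style_loss(pred_disp, gt_disp, seg_logits, seg_labels,
                         lambda_sem=0.1):
    """Total loss = disparity regression term + weighted semantic softmax term.

    pred_disp:  (B, 1, H, W) predicted disparity
    gt_disp:    (B, 1, H, W) ground-truth disparity (supervised setting)
    seg_logits: (B, C, H, W) per-pixel logits from the semantic branch
    seg_labels: (B, H, W)    per-pixel semantic labels (long)
    """
    # Smooth-L1 regression on pixels that have ground-truth disparity
    # (a common choice for stereo networks; assumed here, not prescribed).
    valid = gt_disp > 0
    disp_loss = F.smooth_l1_loss(pred_disp[valid], gt_disp[valid])

    # Semantic softmax (cross-entropy) loss acting as a regularizer.
    sem_loss = F.cross_entropy(seg_logits, seg_labels, ignore_index=255)

    return disp_loss + lambda_sem * sem_loss

The weight lambda_sem balances the two terms; in the unsupervised setting described in the abstract, the disparity term would be replaced by a photometric reconstruction loss, which is omitted here.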


DC Field | Value | Language
dc.contributor.author | Yang, Guorun | -
dc.contributor.author | Zhao, Hengshuang | -
dc.contributor.author | Shi, Jianping | -
dc.contributor.author | Deng, Zhidong | -
dc.contributor.author | Jia, Jiaya | -
dc.date.accessioned | 2021-09-15T08:25:37Z | -
dc.date.available | 2021-09-15T08:25:37Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | 15th European Conference on Computer Vision (ECCV 2018), Munich, Germany, 8-14 September 2018. In Ferrari, V, Hebert, M, Sminchisescu, C, Weiss, Y (Eds.), Computer Vision – ECCV 2018: 15th European Conference, Munich, Germany, September 8–14, 2018, Proceedings, Part VII, p. 660-676. Cham: Springer, 2018 | -
dc.identifier.isbn | 9783030012335 | -
dc.identifier.issn | 0302-9743 | -
dc.identifier.uri | http://hdl.handle.net/10722/303584 | -
dc.description.abstract | Disparity estimation for binocular stereo images finds a wide range of applications. Traditional algorithms may fail on featureless regions, which could be handled by high-level clues such as semantic segments. In this paper, we suggest that appropriate incorporation of semantic cues can greatly rectify prediction in commonly-used disparity estimation frameworks. Our method conducts semantic feature embedding and regularizes semantic cues as the loss term to improve learning disparity. Our unified model SegStereo employs semantic features from segmentation and introduces semantic softmax loss, which helps improve the prediction accuracy of disparity maps. The semantic cues work well in both unsupervised and supervised manners. SegStereo achieves state-of-the-art results on KITTI Stereo benchmark and produces decent prediction on both CityScapes and FlyingThings3D datasets. | -
dc.language | eng | -
dc.publisher | Springer. | -
dc.relation.ispartof | Computer Vision – ECCV 2018: 15th European Conference, Munich, Germany, September 8–14, 2018, Proceedings, Part VII | -
dc.relation.ispartofseries | Lecture Notes in Computer Science ; 11211 | -
dc.relation.ispartofseries | Image Processing, Computer Vision, Pattern Recognition, and Graphics ; 11211 | -
dc.subject | Softmax loss regularization | -
dc.subject | Disparity estimation | -
dc.subject | Semantic cues | -
dc.subject | Semantic feature embedding | -
dc.title | SegStereo: Exploiting semantic information for disparity estimation | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1007/978-3-030-01234-2_39 | -
dc.identifier.scopus | eid_2-s2.0-85055089718 | -
dc.identifier.spage | 660 | -
dc.identifier.epage | 676 | -
dc.identifier.eissn | 1611-3349 | -
dc.identifier.isi | WOS:000594221500039 | -
dc.publisher.place | Cham | -
