Conference Paper: Deep contrast learning for salient object detection
Title | Deep contrast learning for salient object detection |
---|---|
Authors | Li, G; Yu, Y |
Issue Date | 2016 |
Publisher | IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000147 |
Citation | The 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), Las Vegas, NV., 26 June-1 July 2016. In Conference Proceedings, 2016, p. 1-10 |
Abstract | Salient object detection has recently witnessed substantial progress due to powerful features extracted using deep convolutional neural networks (CNNs). However, existing CNN-based methods operate at the patch level instead of the pixel level. Resulting saliency maps are typically blurry, especially near the boundary of salient objects. Furthermore, image patches are treated as independent samples even when they are overlapping, giving rise to significant redundancy in computation and storage. In this paper, we propose an end-to-end deep contrast network to overcome the aforementioned limitations. Our deep network consists of two complementary components, a pixel-level fully convolutional stream and a segment-wise spatial pooling stream. The first stream directly produces a saliency map with pixel-level accuracy from an input image. The second stream extracts segment-wise features very efficiently, and better models saliency discontinuities along object boundaries. Finally, a fully connected CRF model can be optionally incorporated to improve spatial coherence and contour localization in the fused result from these two streams. Experimental results demonstrate that our deep model significantly improves the state of the art. |
Persistent Identifier | http://hdl.handle.net/10722/229718 |
ISSN | 1063-6919 |
SCImago Journal Rankings (2023) | 10.331 |
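The abstract describes fusing two complementary saliency streams: a pixel-level fully convolutional stream and a segment-wise spatial pooling stream. A minimal illustrative sketch of such a fusion step is shown below, assuming two precomputed saliency maps of equal size; the fixed weight `w` and the function name `fuse_saliency` are hypothetical simplifications (the paper fuses the streams within the network, not with a hand-set weight).

```python
import numpy as np

def fuse_saliency(pixel_map, segment_map, w=0.5):
    """Fuse a pixel-level and a segment-wise saliency map by weighted
    average, then rescale to [0, 1]. Illustrative only; the actual
    deep contrast network learns its fusion inside the model."""
    fused = w * pixel_map + (1.0 - w) * segment_map
    lo, hi = fused.min(), fused.max()
    if hi > lo:
        # Min-max normalize so the fused map is comparable across images.
        return (fused - lo) / (hi - lo)
    return np.zeros_like(fused)
```

A fully connected CRF, as mentioned in the abstract, would then optionally refine the fused map to sharpen object boundaries; that refinement step is omitted here.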
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, G | - |
dc.contributor.author | Yu, Y | - |
dc.date.accessioned | 2016-08-23T14:12:51Z | - |
dc.date.available | 2016-08-23T14:12:51Z | - |
dc.date.issued | 2016 | - |
dc.identifier.citation | The 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), Las Vegas, NV., 26 June-1 July 2016. In Conference Proceedings, 2016, p. 1-10 | - |
dc.identifier.issn | 1063-6919 | - |
dc.identifier.uri | http://hdl.handle.net/10722/229718 | - |
dc.description.abstract | Salient object detection has recently witnessed substantial progress due to powerful features extracted using deep convolutional neural networks (CNNs). However, existing CNN-based methods operate at the patch level instead of the pixel level. Resulting saliency maps are typically blurry, especially near the boundary of salient objects. Furthermore, image patches are treated as independent samples even when they are overlapping, giving rise to significant redundancy in computation and storage. In this paper, we propose an end-to-end deep contrast network to overcome the aforementioned limitations. Our deep network consists of two complementary components, a pixel-level fully convolutional stream and a segment-wise spatial pooling stream. The first stream directly produces a saliency map with pixel-level accuracy from an input image. The second stream extracts segment-wise features very efficiently, and better models saliency discontinuities along object boundaries. Finally, a fully connected CRF model can be optionally incorporated to improve spatial coherence and contour localization in the fused result from these two streams. Experimental results demonstrate that our deep model significantly improves the state of the art. | - |
dc.language | eng | - |
dc.publisher | IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000147 | - |
dc.relation.ispartof | IEEE Conference on Computer Vision and Pattern Recognition Proceedings | - |
dc.rights | IEEE Conference on Computer Vision and Pattern Recognition Proceedings. Copyright © IEEE Computer Society. | - |
dc.rights | ©2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.title | Deep contrast learning for salient object detection | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Li, G: gbli@cs.hku.hk | - |
dc.identifier.email | Yu, Y: yzyu@cs.hku.hk | - |
dc.identifier.authority | Yu, Y=rp01415 | - |
dc.description.nature | postprint | - |
dc.identifier.hkuros | 262366 | - |
dc.identifier.spage | 1 | - |
dc.identifier.epage | 10 | - |
dc.publisher.place | United States | - |
dc.customcontrol.immutable | sml 160914 | - |
dc.identifier.issnl | 1063-6919 | - |