Article: RGBD Salient Object Detection via Deep Fusion

Title: RGBD Salient Object Detection via Deep Fusion
Authors: Qu, Liangqiong; He, Shengfeng; Zhang, Jiawei; Tian, Jiandong; Tang, Yandong; Yang, Qingxiong
Keywords: convolutional neural network; Laplacian propagation; RGBD saliency detection
Issue Date: 2017
Citation: IEEE Transactions on Image Processing, 2017, v. 26, n. 5, p. 2274-2285
Abstract: Numerous efforts have been made to design various low-level saliency cues for RGBD saliency detection, such as color and depth contrast features as well as background and color compactness priors. However, how these low-level saliency cues interact with each other and how they can be effectively incorporated to generate a master saliency map remain challenging problems. In this paper, we design a new convolutional neural network (CNN) to automatically learn the interaction mechanism for RGBD salient object detection. In contrast to existing works, in which raw image pixels are fed directly to the CNN, the proposed method takes advantage of the knowledge obtained in traditional saliency detection by adopting various flexible and interpretable saliency feature vectors as inputs. This guides the CNN to learn a combination of existing features to predict saliency more effectively, which presents a less complex problem than operating on the pixels directly. We then integrate a superpixel-based Laplacian propagation framework with the trained CNN to extract a spatially consistent saliency map by exploiting the intrinsic structure of the input image. Extensive quantitative and qualitative experimental evaluations on three data sets demonstrate that the proposed method consistently outperforms the state-of-the-art methods.
Persistent Identifier: http://hdl.handle.net/10722/325350
ISSN: 1057-7149
2023 Impact Factor: 10.8
2023 SCImago Journal Rankings: 3.556
ISI Accession Number ID: WOS:000399396400015
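The abstract's refinement step, superpixel-based Laplacian propagation, can be illustrated with a generic sketch (not the paper's exact formulation): given initial per-superpixel saliency scores y and pairwise affinities W built from superpixel features, minimize 0.5 * sum_ij w_ij (f_i - f_j)^2 + mu * ||f - y||^2, whose closed form is f = mu * (L + mu*I)^{-1} y with graph Laplacian L = D - W. The Gaussian-affinity construction and the parameters `sigma` and `mu` below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def laplacian_propagation(features, y, sigma=0.5, mu=0.1):
    """Refine initial saliency scores y over a superpixel graph.

    Generic Laplacian-propagation sketch: affinities, sigma, and mu
    are illustrative choices, not the paper's exact settings.
    """
    n = len(y)
    # Gaussian affinity between superpixel feature vectors
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)           # no self-loops
    L = np.diag(W.sum(axis=1)) - W     # unnormalized graph Laplacian
    # Closed-form minimizer of the smoothness + fidelity objective
    f = mu * np.linalg.solve(L + mu * np.eye(n), y)
    return np.clip(f, 0.0, 1.0)
```

Neighboring superpixels with similar features pull each other's scores together, which is what yields the spatially consistent saliency map the abstract describes.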


DC Field: Value
dc.contributor.author: Qu, Liangqiong
dc.contributor.author: He, Shengfeng
dc.contributor.author: Zhang, Jiawei
dc.contributor.author: Tian, Jiandong
dc.contributor.author: Tang, Yandong
dc.contributor.author: Yang, Qingxiong
dc.date.accessioned: 2023-02-27T07:31:46Z
dc.date.available: 2023-02-27T07:31:46Z
dc.date.issued: 2017
dc.identifier.citation: IEEE Transactions on Image Processing, 2017, v. 26, n. 5, p. 2274-2285
dc.identifier.issn: 1057-7149
dc.identifier.uri: http://hdl.handle.net/10722/325350
dc.description.abstract: Numerous efforts have been made to design various low-level saliency cues for RGBD saliency detection, such as color and depth contrast features as well as background and color compactness priors. However, how these low-level saliency cues interact with each other and how they can be effectively incorporated to generate a master saliency map remain challenging problems. In this paper, we design a new convolutional neural network (CNN) to automatically learn the interaction mechanism for RGBD salient object detection. In contrast to existing works, in which raw image pixels are fed directly to the CNN, the proposed method takes advantage of the knowledge obtained in traditional saliency detection by adopting various flexible and interpretable saliency feature vectors as inputs. This guides the CNN to learn a combination of existing features to predict saliency more effectively, which presents a less complex problem than operating on the pixels directly. We then integrate a superpixel-based Laplacian propagation framework with the trained CNN to extract a spatially consistent saliency map by exploiting the intrinsic structure of the input image. Extensive quantitative and qualitative experimental evaluations on three data sets demonstrate that the proposed method consistently outperforms the state-of-the-art methods.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Image Processing
dc.subject: convolutional neural network
dc.subject: Laplacian propagation
dc.subject: RGBD saliency detection
dc.title: RGBD Salient Object Detection via Deep Fusion
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TIP.2017.2682981
dc.identifier.pmid: 28320666
dc.identifier.scopus: eid_2-s2.0-85018502263
dc.identifier.volume: 26
dc.identifier.issue: 5
dc.identifier.spage: 2274
dc.identifier.epage: 2285
dc.identifier.isi: WOS:000399396400015
