Article: Removing label ambiguity in learning-based visual saliency estimation

Title: Removing label ambiguity in learning-based visual saliency estimation
Authors: Li, Jia; Xu, Dong; Gao, Wen
Keywords: Label ambiguity; learning to rank; multi-instance learning (MIL); visual saliency
Issue Date: 2012
Citation: IEEE Transactions on Image Processing, 2012, v. 21, n. 4, p. 1513-1525
Abstract: Visual saliency is a useful clue to depict visually important image/video contents in many multimedia applications. In visual saliency estimation, a feasible solution is to learn a feature-saliency mapping model from the user data obtained by manually labeling activities or eye-tracking devices. However, label ambiguities may also arise due to the inaccurate and inadequate user data. To process the noisy training data, we propose a multi-instance learning to rank approach for visual saliency estimation. In our approach, the correlations between various image patches are incorporated into an ordinal regression framework. By iteratively refining a ranking model and relabeling the image patches with respect to their mutual correlations, the label ambiguities can be effectively removed from the training data. Consequently, visual saliency can be effectively estimated by the ranking model, which can pop out real targets and suppress real distractors. Extensive experiments on two public image data sets show that our approach outperforms 11 state-of-the-art methods remarkably in visual saliency estimation. © 2011 IEEE.
Persistent Identifier: http://hdl.handle.net/10722/321459
ISSN: 1057-7149
2021 Impact Factor: 11.041
2020 SCImago Journal Rankings: 1.778
ISI Accession Number ID: WOS:000302181800008
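
As a rough illustration of the iterative scheme summarized in the abstract above (fit a ranking model, then revise ambiguous patch labels using correlations between patches, and repeat), the following minimal Python sketch uses toy NumPy data and a plain regularized least-squares scorer in place of the paper's multi-instance learning-to-rank formulation. Every name, feature, and modeling choice here is a hypothetical stand-in for illustration, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 200 "image patches" with 16-dim features and noisy binary saliency labels.
X = rng.normal(size=(200, 16))
w_true = rng.normal(size=16)
y = (X @ w_true + 0.8 * rng.normal(size=200) > 0).astype(float)

def fit_ranker(X, y, lam=1e-2):
    # Regularized least-squares scorer, used here as a stand-in for an ordinal/ranking model.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def relabel(X, y, scores, k=5, alpha=0.5):
    # Soften each label toward its k most similar patches: a crude proxy for the
    # "mutual correlations" the abstract says are used to resolve ambiguous labels.
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)
    thresh = np.median(scores)
    y_new = y.copy()
    for i in range(len(y)):
        nbrs = np.argsort(sims[i])[-k:]
        vote = np.mean(0.5 * (y[nbrs] + (scores[nbrs] > thresh)))
        y_new[i] = (1 - alpha) * y[i] + alpha * vote
    return y_new

labels = y.copy()
for _ in range(5):              # iterate: fit the ranker, then refine the labels
    w = fit_ranker(X, labels)
    scores = X @ w
    labels = relabel(X, labels, scores)

print("learned ranking weights:", np.round(w, 2))
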

 

DC Field | Value | Language
dc.contributor.author | Li, Jia | -
dc.contributor.author | Xu, Dong | -
dc.contributor.author | Gao, Wen | -
dc.date.accessioned | 2022-11-03T02:19:04Z | -
dc.date.available | 2022-11-03T02:19:04Z | -
dc.date.issued | 2012 | -
dc.identifier.citation | IEEE Transactions on Image Processing, 2012, v. 21, n. 4, p. 1513-1525 | -
dc.identifier.issn | 1057-7149 | -
dc.identifier.uri | http://hdl.handle.net/10722/321459 | -
dc.description.abstract | Visual saliency is a useful clue to depict visually important image/video contents in many multimedia applications. In visual saliency estimation, a feasible solution is to learn a feature-saliency mapping model from the user data obtained by manually labeling activities or eye-tracking devices. However, label ambiguities may also arise due to the inaccurate and inadequate user data. To process the noisy training data, we propose a multi-instance learning to rank approach for visual saliency estimation. In our approach, the correlations between various image patches are incorporated into an ordinal regression framework. By iteratively refining a ranking model and relabeling the image patches with respect to their mutual correlations, the label ambiguities can be effectively removed from the training data. Consequently, visual saliency can be effectively estimated by the ranking model, which can pop out real targets and suppress real distractors. Extensive experiments on two public image data sets show that our approach outperforms 11 state-of-the-art methods remarkably in visual saliency estimation. © 2011 IEEE. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Transactions on Image Processing | -
dc.subject | Label ambiguity | -
dc.subject | learning to rank | -
dc.subject | multi-instance learning (MIL) | -
dc.subject | visual saliency | -
dc.title | Removing label ambiguity in learning-based visual saliency estimation | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TIP.2011.2179665 | -
dc.identifier.pmid | 22180509 | -
dc.identifier.scopus | eid_2-s2.0-84859075549 | -
dc.identifier.volume | 21 | -
dc.identifier.issue | 4 | -
dc.identifier.spage | 1513 | -
dc.identifier.epage | 1525 | -
dc.identifier.isi | WOS:000302181800008 | -
