Article: Guest Editorial Special Section on Visual Saliency Computing and Learning

Title: Guest Editorial Special Section on Visual Saliency Computing and Learning
Authors: Han, Junwei; Shao, Ling; Vasconcelos, Nuno; Han, Jungong; Xu, Dong
Issue Date: 2016
Citation: IEEE Transactions on Neural Networks and Learning Systems, 2016, v. 27, n. 6, p. 1118-1121
Abstract: The vision and multimedia communities have long attempted to enable computers to understand image or video content in a manner analogous to humans. Humans' comprehension of an image or a video clip often depends on the objects that draw their attention. As a result, one fundamental and open problem is to automatically infer the attention-attracting or interesting areas in an image or a video sequence. Recently, a large number of researchers have explored visual saliency models to address this problem. The study of visual saliency models was originally motivated by simulating humans' bottom-up visual attention, and it is mainly based on the biological evidence that human visual attention is automatically attracted by highly salient features in the visual scene, which are discriminative with respect to the surrounding environment.
Persistent Identifier: http://hdl.handle.net/10722/321682
ISSN: 2162-237X
2023 Impact Factor: 10.2
2023 SCImago Journal Rankings: 4.170
ISI Accession Number ID: WOS:000377113300001


DC Field | Value | Language
dc.contributor.author | Han, Junwei | -
dc.contributor.author | Shao, Ling | -
dc.contributor.author | Vasconcelos, Nuno | -
dc.contributor.author | Han, Jungong | -
dc.contributor.author | Xu, Dong | -
dc.date.accessioned | 2022-11-03T02:20:44Z | -
dc.date.available | 2022-11-03T02:20:44Z | -
dc.date.issued | 2016 | -
dc.identifier.citation | IEEE Transactions on Neural Networks and Learning Systems, 2016, v. 27, n. 6, p. 1118-1121 | -
dc.identifier.issn | 2162-237X | -
dc.identifier.uri | http://hdl.handle.net/10722/321682 | -
dc.description.abstract | The vision and multimedia communities have long attempted to enable computers to understand image or video content in a manner analogous to humans. Humans' comprehension of an image or a video clip often depends on the objects that draw their attention. As a result, one fundamental and open problem is to automatically infer the attention-attracting or interesting areas in an image or a video sequence. Recently, a large number of researchers have explored visual saliency models to address this problem. The study of visual saliency models was originally motivated by simulating humans' bottom-up visual attention, and it is mainly based on the biological evidence that human visual attention is automatically attracted by highly salient features in the visual scene, which are discriminative with respect to the surrounding environment. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Transactions on Neural Networks and Learning Systems | -
dc.title | Guest Editorial Special Section on Visual Saliency Computing and Learning | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TNNLS.2016.2522738 | -
dc.identifier.scopus | eid_2-s2.0-84973129726 | -
dc.identifier.volume | 27 | -
dc.identifier.issue | 6 | -
dc.identifier.spage | 1118 | -
dc.identifier.epage | 1121 | -
dc.identifier.eissn | 2162-2388 | -
dc.identifier.isi | WOS:000377113300001 | -
