File Download: There are no files associated with this item.

Links for fulltext (may require subscription):
- Publisher Website (DOI): https://doi.org/10.1109/TNNLS.2016.2522738
- Scopus: eid_2-s2.0-84973129726
- Web of Science: WOS:000377113300001

Article: Guest Editorial Special Section on Visual Saliency Computing and Learning
| Title | Guest Editorial Special Section on Visual Saliency Computing and Learning |
|---|---|
| Authors | Han, Junwei; Shao, Ling; Vasconcelos, Nuno; Han, Jungong; Xu, Dong |
| Issue Date | 2016 |
| Citation | IEEE Transactions on Neural Networks and Learning Systems, 2016, v. 27, n. 6, p. 1118-1121 |
| Abstract | Vision and multimedia communities have long attempted to enable computers to understand image or video content in a manner analogous to humans. Humans' comprehension of an image or a video clip often depends on the objects that draw their attention. As a result, one fundamental and open problem is to automatically infer the attention-attracting or interesting areas in an image or a video sequence. Recently, a large number of researchers have explored visual saliency models to address this problem. The study of visual saliency models was originally motivated by the simulation of humans' bottom-up visual attention, and it is mainly based on the biological evidence that human visual attention is automatically attracted by highly salient features in the visual scene, which are discriminative with respect to the surrounding environment. |
| Persistent Identifier | http://hdl.handle.net/10722/321682 |
| ISSN | 2162-237X (2023 Impact Factor: 10.2; 2023 SCImago Journal Rankings: 4.170) |
| ISI Accession Number ID | WOS:000377113300001 |
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Han, Junwei | - |
| dc.contributor.author | Shao, Ling | - |
| dc.contributor.author | Vasconcelos, Nuno | - |
| dc.contributor.author | Han, Jungong | - |
| dc.contributor.author | Xu, Dong | - |
| dc.date.accessioned | 2022-11-03T02:20:44Z | - |
| dc.date.available | 2022-11-03T02:20:44Z | - |
| dc.date.issued | 2016 | - |
| dc.identifier.citation | IEEE Transactions on Neural Networks and Learning Systems, 2016, v. 27, n. 6, p. 1118-1121 | - |
| dc.identifier.issn | 2162-237X | - |
| dc.identifier.uri | http://hdl.handle.net/10722/321682 | - |
| dc.description.abstract | Vision and multimedia communities have long attempted to enable computers to understand image or video content in a manner analogous to humans. Humans' comprehension of an image or a video clip often depends on the objects that draw their attention. As a result, one fundamental and open problem is to automatically infer the attention-attracting or interesting areas in an image or a video sequence. Recently, a large number of researchers have explored visual saliency models to address this problem. The study of visual saliency models was originally motivated by the simulation of humans' bottom-up visual attention, and it is mainly based on the biological evidence that human visual attention is automatically attracted by highly salient features in the visual scene, which are discriminative with respect to the surrounding environment. | - |
| dc.language | eng | - |
| dc.relation.ispartof | IEEE Transactions on Neural Networks and Learning Systems | - |
| dc.title | Guest Editorial Special Section on Visual Saliency Computing and Learning | - |
| dc.type | Article | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1109/TNNLS.2016.2522738 | - |
| dc.identifier.scopus | eid_2-s2.0-84973129726 | - |
| dc.identifier.volume | 27 | - |
| dc.identifier.issue | 6 | - |
| dc.identifier.spage | 1118 | - |
| dc.identifier.epage | 1121 | - |
| dc.identifier.eissn | 2162-2388 | - |
| dc.identifier.isi | WOS:000377113300001 | - |
