Conference Paper: Spatiotemporal saliency detection via sparse representation

Title: Spatiotemporal saliency detection via sparse representation
Authors: Ren, Zhixiang; Gao, Shenghua; Rajan, Deepu; Chia, Liang Tien; Huang, Yun
Keywords: Motion trajectory; Sparse coding; Spatiotemporal saliency detection
Issue Date: 2012
Citation: Proceedings - IEEE International Conference on Multimedia and Expo, 2012, p. 158-163
Abstract: Multimedia applications such as retrieval and copy detection can benefit from saliency detection, which identifies the regions in images and videos that capture the attention of the human visual system. In this paper, we propose a new spatiotemporal saliency framework for videos based on sparse representation. For temporal saliency, we model the movement of a target patch as a reconstruction process in which overlapping patches in neighboring frames are used to reconstruct the target patch. The learned coefficients encode the positions of the matched patches and thus represent the motion trajectory of the target patch. We also introduce a smoothing term into our sparse coding framework to learn coherent motion trajectories. Motivated by the psychological finding that abrupt stimuli cause a rapid and involuntary deployment of attention, our temporal model combines the reconstruction error, a sparsity regularizer, and local trajectory contrast to measure motion saliency. For spatial saliency, a similar sparse reconstruction process captures regions with high center-surround contrast. Finally, the temporal and spatial saliency maps are combined by agreement to favor salient regions with high confidence. Experimental results on a human fixation video dataset show that our method outperforms five state-of-the-art approaches. © 2012 IEEE.
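The core idea in the abstract, reconstructing a target patch sparsely from candidate patches and scoring saliency from the reconstruction error plus a sparsity term, can be sketched in simplified form. The sketch below is illustrative only, not the authors' implementation: it omits the smoothing term and trajectory contrast from the paper, and the ISTA solver, function names, and parameter values are assumptions chosen for clarity.

```python
def ista_sparse_code(D, x, lam=0.1, step=0.01, iters=500):
    """Solve min_a ||x - D a||^2 + lam * ||a||_1 with iterative
    soft-thresholding (ISTA). D is a list of candidate patch vectors
    (e.g. overlapping patches from neighboring frames); x is the
    target patch, flattened to a vector."""
    n, m = len(D), len(x)
    a = [0.0] * n
    for _ in range(iters):
        # residual r = D a - x
        r = [sum(D[j][i] * a[j] for j in range(n)) - x[i] for i in range(m)]
        # gradient step on the squared-error term
        for j in range(n):
            g = 2.0 * sum(D[j][i] * r[i] for i in range(m))
            a[j] -= step * g
        # soft-threshold to enforce sparsity
        a = [max(abs(v) - step * lam, 0.0) * (1.0 if v >= 0 else -1.0)
             for v in a]
    return a

def saliency_score(D, x, lam=0.1):
    """Score a patch: high when neighbors cannot reconstruct it sparsely
    (reconstruction error + weighted sparsity of the coefficients)."""
    a = ista_sparse_code(D, x, lam)
    recon = [sum(D[j][i] * a[j] for j in range(len(D))) for i in range(len(x))]
    err = sum((xi - ri) ** 2 for xi, ri in zip(x, recon))
    return err + lam * sum(abs(v) for v in a)
```

A patch that matches one of its neighbors is reconstructed almost exactly and scores near zero, while a patch with no match among its neighbors (an abrupt stimulus, in the paper's terms) keeps its full reconstruction error and scores high.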
Persistent Identifier: http://hdl.handle.net/10722/345199
ISSN: 1945-7871
2020 SCImago Journal Rankings: 0.368

 

DC Field: Value
dc.contributor.author: Ren, Zhixiang
dc.contributor.author: Gao, Shenghua
dc.contributor.author: Rajan, Deepu
dc.contributor.author: Chia, Liang Tien
dc.contributor.author: Huang, Yun
dc.date.accessioned: 2024-08-15T09:25:51Z
dc.date.available: 2024-08-15T09:25:51Z
dc.date.issued: 2012
dc.identifier.citation: Proceedings - IEEE International Conference on Multimedia and Expo, 2012, p. 158-163
dc.identifier.issn: 1945-7871
dc.identifier.uri: http://hdl.handle.net/10722/345199
dc.description.abstract: Multimedia applications such as retrieval and copy detection can benefit from saliency detection, which identifies the regions in images and videos that capture the attention of the human visual system. In this paper, we propose a new spatiotemporal saliency framework for videos based on sparse representation. For temporal saliency, we model the movement of a target patch as a reconstruction process in which overlapping patches in neighboring frames are used to reconstruct the target patch. The learned coefficients encode the positions of the matched patches and thus represent the motion trajectory of the target patch. We also introduce a smoothing term into our sparse coding framework to learn coherent motion trajectories. Motivated by the psychological finding that abrupt stimuli cause a rapid and involuntary deployment of attention, our temporal model combines the reconstruction error, a sparsity regularizer, and local trajectory contrast to measure motion saliency. For spatial saliency, a similar sparse reconstruction process captures regions with high center-surround contrast. Finally, the temporal and spatial saliency maps are combined by agreement to favor salient regions with high confidence. Experimental results on a human fixation video dataset show that our method outperforms five state-of-the-art approaches. © 2012 IEEE.
dc.language: eng
dc.relation.ispartof: Proceedings - IEEE International Conference on Multimedia and Expo
dc.subject: Motion trajectory
dc.subject: Sparse coding
dc.subject: Spatiotemporal saliency detection
dc.title: Spatiotemporal saliency detection via sparse representation
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICME.2012.173
dc.identifier.scopus: eid_2-s2.0-84868108802
dc.identifier.spage: 158
dc.identifier.epage: 163
dc.identifier.eissn: 1945-788X
