Article: Regularized feature reconstruction for spatio-temporal saliency detection

Title: Regularized feature reconstruction for spatio-temporal saliency detection
Authors: Ren, Zhixiang; Gao, Shenghua; Chia, Liang Tien; Rajan, Deepu
Keywords: feature reconstruction; motion trajectory; spatio-temporal saliency detection
Issue Date: 2013
Citation: IEEE Transactions on Image Processing, 2013, v. 22, n. 8, p. 3120-3132
Abstract: Multimedia applications such as image or video retrieval and copy detection can benefit from saliency detection, which identifies areas in images and videos that capture the attention of the human visual system. In this paper, we propose a new spatio-temporal saliency detection framework based on regularized feature reconstruction. For video saliency detection, both temporal and spatial saliency are considered. For temporal saliency, we model the movement of a target patch as a reconstruction process using patches in neighboring frames, and introduce a Laplacian smoothing term to model coherent motion trajectories. Motivated by psychological findings that abrupt stimuli cause a rapid and involuntary deployment of attention, our temporal model combines the reconstruction error, the regularizer, and local trajectory contrast to measure temporal saliency. For spatial saliency, a similar sparse reconstruction process captures regions with high center-surround contrast. Finally, temporal and spatial saliency are combined to favor salient regions with high confidence for video saliency detection. We also apply the spatial saliency part of the spatio-temporal model to image saliency detection. Experimental results on a human fixation video dataset and an image saliency detection dataset show that our method outperforms several state-of-the-art approaches. © 1992-2012 IEEE.
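
The abstract's core idea, reconstructing a patch's feature from candidate patches (e.g., patches in neighboring frames) and using the regularized reconstruction residual as a saliency score, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the ridge (L2) regularizer standing in for the paper's Laplacian smoothing term over motion trajectories, and the toy data below are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's code): saliency via regularized
# feature reconstruction. A target patch feature is reconstructed from a
# dictionary of candidate patch features; a large residual means the patch
# is hard to explain by its neighbors and is therefore treated as salient.
# NOTE: a plain ridge (L2) penalty is used here in place of the paper's
# Laplacian smoothing term, purely to keep the example self-contained.
import numpy as np

def reconstruction_saliency(target, candidates, lam=0.1):
    """Return a saliency score for `target` (d,) given `candidates` (d, n)."""
    D = candidates
    # Closed-form ridge solution: w = (D^T D + lam*I)^{-1} D^T x
    gram = D.T @ D + lam * np.eye(D.shape[1])
    w = np.linalg.solve(gram, D.T @ target)
    residual = target - D @ w
    # Combine reconstruction error and regularizer magnitude, loosely
    # mirroring the "reconstruction error + regularizer" score in the abstract.
    return float(residual @ residual + lam * (w @ w))

# Toy usage: a patch consistent with its neighbors scores low; an outlier scores high.
rng = np.random.default_rng(0)
neighbours = rng.normal(size=(64, 20))                      # features of neighboring-frame patches
consistent = neighbours[:, 0] + 0.01 * rng.normal(size=64)  # coherent with a neighbor
outlier = 3.0 * rng.normal(size=64)                         # abrupt, unexplained stimulus
print(reconstruction_saliency(consistent, neighbours))
print(reconstruction_saliency(outlier, neighbours))
```

In the paper, the dictionary would hold patches from neighboring frames for temporal saliency or surrounding patches in the same frame (with a sparse reconstruction) for spatial saliency, and the temporal score additionally incorporates local trajectory contrast.
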
Persistent Identifier: http://hdl.handle.net/10722/345206
ISSN: 1057-7149
2023 Impact Factor: 10.8
2023 SCImago Journal Rankings: 3.556


DC Field: Value
dc.contributor.author: Ren, Zhixiang
dc.contributor.author: Gao, Shenghua
dc.contributor.author: Chia, Liang Tien
dc.contributor.author: Rajan, Deepu
dc.date.accessioned: 2024-08-15T09:25:54Z
dc.date.available: 2024-08-15T09:25:54Z
dc.date.issued: 2013
dc.identifier.citation: IEEE Transactions on Image Processing, 2013, v. 22, n. 8, p. 3120-3132
dc.identifier.issn: 1057-7149
dc.identifier.uri: http://hdl.handle.net/10722/345206
dc.description.abstract: (see Abstract above)
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Image Processing
dc.subject: feature reconstruction
dc.subject: motion trajectory
dc.subject: Spatio-temporal saliency detection
dc.title: Regularized feature reconstruction for spatio-temporal saliency detection
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TIP.2013.2259837
dc.identifier.pmid: 23743773
dc.identifier.scopus: eid_2-s2.0-84879035816
dc.identifier.volume: 22
dc.identifier.issue: 8
dc.identifier.spage: 3120
dc.identifier.epage: 3132
