Article: Dual Deep Network for Visual Tracking

Title: Dual Deep Network for Visual Tracking
Authors: Chi, Zhizhen; Li, Hongyang; Lu, Huchuan; Yang, Ming-Hsuan
Keywords: deep neural network; independent component analysis with reference; visual tracking
Issue Date: 2017
Citation: IEEE Transactions on Image Processing, 2017, v. 26, n. 4, p. 2005-2015
Abstract: Visual tracking addresses the problem of identifying and localizing an unknown target in a video, given only a bounding box around the target in the first frame. In this paper, we propose a dual network to better utilize features across layers for visual tracking. Features in higher layers encode semantic context, while their counterparts in lower layers are sensitive to discriminative appearance. We therefore exploit the hierarchical features in different layers of a deep model and design a dual structure to obtain better feature representations from the various streams, which has rarely been investigated in previous work. To highlight the geometric contours of the target, we integrate the hierarchical feature maps with an edge detector as coarse prior maps, further embedding local details around the target. To improve the robustness of our dual network, we train it with random patches that measure the similarity between network activations and the target appearance, which serves as a regularization that forces the dual network to focus on the target object. The proposed dual network is updated online based on the observation that the target tracked in consecutive frames should share more similar feature representations than the surrounding background. We also find that, for a given target, the prior maps can further enhance performance by passing messages into the output maps of the dual network. Therefore, an independent component analysis with reference algorithm is employed to extract target context using the prior maps as guidance. Online tracking is conducted by maximizing the posterior estimate on the final maps, with stochastic and periodic updates. Quantitative and qualitative evaluations on two large-scale benchmark data sets show that the proposed algorithm performs favorably against state-of-the-art methods.
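The core inference step described in the abstract — fusing hierarchical feature maps with an edge-based prior map and localizing the target at the maximum of the resulting response — can be sketched as follows. This is an illustrative NumPy mock-up, not the authors' implementation: the feature maps, the uniform prior, and the equal fusion weights are all synthetic stand-ins.

```python
import numpy as np

def fuse_maps(low_layer, high_layer, edge_prior, w=(0.5, 0.5)):
    """Weighted sum of a low-layer (appearance) map and a high-layer
    (semantic) map, modulated by an edge-based prior map.
    All three maps share the same HxW shape."""
    fused = w[0] * low_layer + w[1] * high_layer
    return fused * edge_prior  # prior emphasizes target contours

def localize(response):
    """Return the (row, col) of the maximum response, i.e. a
    maximum-a-posteriori-style point estimate of the target center."""
    return np.unravel_index(np.argmax(response), response.shape)

# Synthetic stand-ins: a target centered at (10, 14) on a 32x32 grid.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:32, 0:32]
high = np.exp(-((yy - 10) ** 2 + (xx - 14) ** 2) / 40.0)  # semantic blob
low = high + 0.001 * rng.standard_normal((32, 32))        # noisy detail map
prior = np.ones((32, 32))                                 # uniform edge prior
row, col = localize(fuse_maps(low, high, prior))
print(row, col)  # peak recovered at row 10, col 14
```

In the paper the prior map comes from an edge detector and the two streams come from different layers of a deep model; here a Gaussian blob and a uniform prior stand in so the example stays self-contained.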
Persistent Identifier: http://hdl.handle.net/10722/351376
ISSN: 1057-7149
2023 Impact Factor: 10.8
2023 SCImago Journal Rankings: 3.556
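The abstract's online-update criterion — features of the target in consecutive frames should be more similar to each other than to the surrounding background — can be illustrated with a simple cosine-similarity check. This is a hedged sketch only; the 64-dimensional feature vectors and the decision rule below are hypothetical stand-ins, not the paper's actual update scheme.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_update(f_prev, f_curr, f_bg):
    """Update the model only while the current target feature stays
    closer to the previous target feature than to the background."""
    return cosine(f_prev, f_curr) > cosine(f_curr, f_bg)

# Synthetic features: two noisy views of one target vs. random background.
rng = np.random.default_rng(1)
target = rng.standard_normal(64)
f_prev = target + 0.1 * rng.standard_normal(64)  # target, previous frame
f_curr = target + 0.1 * rng.standard_normal(64)  # target, current frame
f_bg = rng.standard_normal(64)                   # unrelated background
print(should_update(f_prev, f_curr, f_bg))
```

Gating updates this way suppresses model drift: when the tracked patch starts resembling background more than the previous target, the update is skipped rather than contaminating the model.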

 

DC Field: Value
dc.contributor.author: Chi, Zhizhen
dc.contributor.author: Li, Hongyang
dc.contributor.author: Lu, Huchuan
dc.contributor.author: Yang, Ming-Hsuan
dc.date.accessioned: 2024-11-20T03:55:55Z
dc.date.available: 2024-11-20T03:55:55Z
dc.date.issued: 2017
dc.identifier.citation: IEEE Transactions on Image Processing, 2017, v. 26, n. 4, p. 2005-2015
dc.identifier.issn: 1057-7149
dc.identifier.uri: http://hdl.handle.net/10722/351376
dc.description.abstract: Visual tracking addresses the problem of identifying and localizing an unknown target in a video, given only a bounding box around the target in the first frame. In this paper, we propose a dual network to better utilize features across layers for visual tracking. Features in higher layers encode semantic context, while their counterparts in lower layers are sensitive to discriminative appearance. We therefore exploit the hierarchical features in different layers of a deep model and design a dual structure to obtain better feature representations from the various streams, which has rarely been investigated in previous work. To highlight the geometric contours of the target, we integrate the hierarchical feature maps with an edge detector as coarse prior maps, further embedding local details around the target. To improve the robustness of our dual network, we train it with random patches that measure the similarity between network activations and the target appearance, which serves as a regularization that forces the dual network to focus on the target object. The proposed dual network is updated online based on the observation that the target tracked in consecutive frames should share more similar feature representations than the surrounding background. We also find that, for a given target, the prior maps can further enhance performance by passing messages into the output maps of the dual network. Therefore, an independent component analysis with reference algorithm is employed to extract target context using the prior maps as guidance. Online tracking is conducted by maximizing the posterior estimate on the final maps, with stochastic and periodic updates. Quantitative and qualitative evaluations on two large-scale benchmark data sets show that the proposed algorithm performs favorably against state-of-the-art methods.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Image Processing
dc.subject: deep neural network
dc.subject: independent component analysis with reference
dc.subject: visual tracking
dc.title: Dual Deep Network for Visual Tracking
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TIP.2017.2669880
dc.identifier.pmid: 28212087
dc.identifier.scopus: eid_2-s2.0-85018522105
dc.identifier.volume: 26
dc.identifier.issue: 4
dc.identifier.spage: 2005
dc.identifier.epage: 2015
