Links for fulltext (may require subscription):
- DOI: 10.1109/TIP.2017.2669880
- Scopus: eid_2-s2.0-85018522105
- PMID: 28212087
Article: Dual Deep Network for Visual Tracking
Title | Dual Deep Network for Visual Tracking |
---|---|
Authors | Chi, Zhizhen; Li, Hongyang; Lu, Huchuan; Yang, Ming-Hsuan |
Keywords | deep neural network; independent component analysis with reference; visual tracking |
Issue Date | 2017 |
Citation | IEEE Transactions on Image Processing, 2017, v. 26, n. 4, p. 2005-2015 |
Abstract | Visual tracking addresses the problem of identifying and localizing an unknown target in a video, given the target specified by a bounding box in the first frame. In this paper, we propose a dual network to better utilize features among layers for visual tracking. It is observed that features in higher layers encode semantic context while their counterparts in lower layers are sensitive to discriminative appearance. Thus, we exploit the hierarchical features in different layers of a deep model and design a dual structure to obtain better feature representation from various streams, which is rarely investigated in previous work. To highlight geometric contours of the target, we integrate the hierarchical feature maps with an edge detector as coarse prior maps to further embed local details around the target. To leverage the robustness of our dual network, we train it with random patches measuring the similarities between the network activation and target appearance, which serves as a regularization that encourages the dual network to focus on the target object. The proposed dual network is updated online in a unique manner based on the observation that the target being tracked in consecutive frames should share more similar feature representations than those in the surrounding background. It is also found that, for a target object, the prior maps can further enhance performance by passing messages into the output maps of the dual network. Therefore, an independent component analysis with reference algorithm is employed to extract target context using the prior maps as guidance. Online tracking is conducted by maximizing the posterior estimate on the final maps with stochastic and periodic updates. Quantitative and qualitative evaluations on two large-scale benchmark data sets show that the proposed algorithm performs favorably against state-of-the-art methods. |
Persistent Identifier | http://hdl.handle.net/10722/351376 |
ISSN | 1057-7149 (2023 Impact Factor: 10.8; 2023 SCImago Journal Rankings: 3.556) |
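The core idea in the abstract — fusing coarse semantic response maps from high network layers with fine appearance-sensitive maps from low layers — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names, the nearest-neighbour upsampling, and the fixed mixing weight `alpha` are all simplifying assumptions made for illustration only.

```python
import numpy as np

def upsample_nearest(fmap, factor):
    """Nearest-neighbour upsampling of a (H, W) map by an integer factor."""
    return np.repeat(np.repeat(fmap, factor, axis=0), factor, axis=1)

def fuse_dual_streams(low_feat, high_feat, alpha=0.5):
    """Fuse a fine low-layer map with an upsampled coarse high-layer map.

    low_feat:  (H, W) response map from a low layer (fine, appearance-sensitive)
    high_feat: (H/k, W/k) response map from a high layer (coarse, semantic)
    """
    factor = low_feat.shape[0] // high_feat.shape[0]
    high_up = upsample_nearest(high_feat, factor)

    def norm(m):
        # scale each stream to [0, 1] so neither dominates the mixture
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else m

    return alpha * norm(low_feat) + (1 - alpha) * norm(high_up)

# Toy example: an 8x8 low-layer map fused with a 4x4 high-layer map.
rng = np.random.default_rng(0)
low = rng.random((8, 8))
high = rng.random((4, 4))
fused = fuse_dual_streams(low, high)
# The fused map's peak serves as a crude target-location estimate.
row, col = np.unravel_index(fused.argmax(), fused.shape)
```

The actual paper additionally incorporates edge-detector prior maps and an ICA-with-reference step before the posterior estimate, which this toy fusion omits.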
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chi, Zhizhen | - |
dc.contributor.author | Li, Hongyang | - |
dc.contributor.author | Lu, Huchuan | - |
dc.contributor.author | Yang, Ming-Hsuan | - |
dc.date.accessioned | 2024-11-20T03:55:55Z | - |
dc.date.available | 2024-11-20T03:55:55Z | - |
dc.date.issued | 2017 | - |
dc.identifier.citation | IEEE Transactions on Image Processing, 2017, v. 26, n. 4, p. 2005-2015 | - |
dc.identifier.issn | 1057-7149 | - |
dc.identifier.uri | http://hdl.handle.net/10722/351376 | - |
dc.description.abstract | Visual tracking addresses the problem of identifying and localizing an unknown target in a video, given the target specified by a bounding box in the first frame. In this paper, we propose a dual network to better utilize features among layers for visual tracking. It is observed that features in higher layers encode semantic context while their counterparts in lower layers are sensitive to discriminative appearance. Thus, we exploit the hierarchical features in different layers of a deep model and design a dual structure to obtain better feature representation from various streams, which is rarely investigated in previous work. To highlight geometric contours of the target, we integrate the hierarchical feature maps with an edge detector as coarse prior maps to further embed local details around the target. To leverage the robustness of our dual network, we train it with random patches measuring the similarities between the network activation and target appearance, which serves as a regularization that encourages the dual network to focus on the target object. The proposed dual network is updated online in a unique manner based on the observation that the target being tracked in consecutive frames should share more similar feature representations than those in the surrounding background. It is also found that, for a target object, the prior maps can further enhance performance by passing messages into the output maps of the dual network. Therefore, an independent component analysis with reference algorithm is employed to extract target context using the prior maps as guidance. Online tracking is conducted by maximizing the posterior estimate on the final maps with stochastic and periodic updates. Quantitative and qualitative evaluations on two large-scale benchmark data sets show that the proposed algorithm performs favorably against state-of-the-art methods. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Image Processing | - |
dc.subject | deep neural network | - |
dc.subject | independent component analysis with reference | - |
dc.subject | Visual tracking | - |
dc.title | Dual Deep Network for Visual Tracking | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TIP.2017.2669880 | - |
dc.identifier.pmid | 28212087 | - |
dc.identifier.scopus | eid_2-s2.0-85018522105 | - |
dc.identifier.volume | 26 | - |
dc.identifier.issue | 4 | - |
dc.identifier.spage | 2005 | - |
dc.identifier.epage | 2015 | - |