Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1109/CVPR.2017.567
- Scopus: eid_2-s2.0-85040645052
- WOS: WOS:000418371405046
Conference Paper: SPFTN: A self-paced fine-tuning network for segmenting objects in weakly labelled videos
Title | SPFTN: A self-paced fine-tuning network for segmenting objects in weakly labelled videos |
---|---|
Authors | Zhang, Dingwen; Yang, Le; Meng, Deyu; Xu, Dong; Han, Junwei |
Issue Date | 2017 |
Citation | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, v. 2017-January, p. 5340-5348 |
Abstract | Object segmentation in weakly labelled videos is an interesting yet challenging task that aims to learn category-specific video object segmentation using only video-level tags. Existing works in this research area still have limitations, e.g., the lack of effective DNN-based learning frameworks, under-exploration of context information, and reliance on an unstable negative video collection, which prevent them from achieving better performance. To this end, we propose a novel self-paced fine-tuning network (SPFTN)-based framework, which learns to explore the context information within video frames and to capture adequate object semantics without using negative videos. To perform weakly supervised learning with a deep neural network, we make the first attempt to integrate the self-paced learning regime and the deep neural network into a unified and compatible framework, leading to the self-paced fine-tuning network. Comprehensive experiments on the large-scale YouTube-Objects and DAVIS datasets demonstrate that the proposed approach achieves superior performance compared with other state-of-the-art methods as well as with the baseline networks and models. |
Persistent Identifier | http://hdl.handle.net/10722/321212 |
ISI Accession Number ID | WOS:000418371405046 |
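The self-paced regime referred to in the abstract is conventionally formulated as an alternating minimization: fix the network and select "easy" samples whose loss falls below a threshold λ, then fine-tune the network on the selected samples and grow λ so that harder samples are gradually admitted. Below is a minimal sketch of that generic regime in PyTorch, assuming a segmentation model trained on pseudo-labelled frames; the names (`self_paced_fine_tune`, `spl_lambda`, `growth`) are hypothetical, and this is not the authors' SPFTN implementation.

```python
# Illustrative sketch of generic self-paced fine-tuning (hard weighting scheme).
# NOT the authors' SPFTN code; the model, loader, and hyperparameters are assumptions.
import torch
import torch.nn as nn

def self_paced_fine_tune(model, loader, epochs=5, spl_lambda=0.5, growth=1.3):
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    pixel_loss = nn.BCEWithLogitsLoss(reduction="none")
    for _ in range(epochs):
        for frames, pseudo_masks in loader:           # weak pseudo-labels, not ground truth
            logits = model(frames)
            # Per-sample loss: mean pixel loss over each frame in the batch.
            losses = pixel_loss(logits, pseudo_masks).flatten(1).mean(dim=1)
            # Self-paced step: hard weights v_i = 1 if l_i < lambda else 0,
            # so only the currently "easy" samples drive this update.
            v = (losses.detach() < spl_lambda).float()
            if v.sum() == 0:
                continue                              # no easy samples yet
            loss = (v * losses).sum() / v.sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
        spl_lambda *= growth                          # admit harder samples over time
    return model
```

The alternation between updating the sample weights v and updating the network parameters is what makes the regime compatible with standard gradient-based fine-tuning: with hard weights, the self-paced step reduces to masking the batch loss.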
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Dingwen | - |
dc.contributor.author | Yang, Le | - |
dc.contributor.author | Meng, Deyu | - |
dc.contributor.author | Xu, Dong | - |
dc.contributor.author | Han, Junwei | - |
dc.date.accessioned | 2022-11-03T02:17:23Z | - |
dc.date.available | 2022-11-03T02:17:23Z | - |
dc.date.issued | 2017 | - |
dc.identifier.citation | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, v. 2017-January, p. 5340-5348 | - |
dc.identifier.uri | http://hdl.handle.net/10722/321212 | - |
dc.description.abstract | Object segmentation in weakly labelled videos is an interesting yet challenging task that aims to learn category-specific video object segmentation using only video-level tags. Existing works in this research area still have limitations, e.g., the lack of effective DNN-based learning frameworks, under-exploration of context information, and reliance on an unstable negative video collection, which prevent them from achieving better performance. To this end, we propose a novel self-paced fine-tuning network (SPFTN)-based framework, which learns to explore the context information within video frames and to capture adequate object semantics without using negative videos. To perform weakly supervised learning with a deep neural network, we make the first attempt to integrate the self-paced learning regime and the deep neural network into a unified and compatible framework, leading to the self-paced fine-tuning network. Comprehensive experiments on the large-scale YouTube-Objects and DAVIS datasets demonstrate that the proposed approach achieves superior performance compared with other state-of-the-art methods as well as with the baseline networks and models. | -
dc.language | eng | - |
dc.relation.ispartof | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 | - |
dc.title | SPFTN: A self-paced fine-tuning network for segmenting objects in weakly labelled videos | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CVPR.2017.567 | - |
dc.identifier.scopus | eid_2-s2.0-85040645052 | - |
dc.identifier.volume | 2017-January | - |
dc.identifier.spage | 5340 | - |
dc.identifier.epage | 5348 | - |
dc.identifier.isi | WOS:000418371405046 | - |