Conference Paper: Language as queries for referring video object segmentation
Title | Language as queries for referring video object segmentation |
---|---|
Authors | Wu, J; Jiang, Y; Sun, P; Yuan, Z; Luo, P |
Issue Date | 2022 |
Publisher | IEEE Computer Society. |
Citation | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (Virtual), New Orleans, Louisiana, USA, 19-24 June 2022. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, p. 4974-4984 |
Abstract | Referring video object segmentation (R-VOS) is an emerging cross-modal task that aims to segment the target object referred to by a language expression in all video frames. In this work, we propose a simple and unified framework built upon Transformer, termed ReferFormer. It views the language as queries and directly attends to the most relevant regions in the video frames. Concretely, we introduce a small set of object queries conditioned on the language as the input to the Transformer. In this manner, all the queries are obligated to find the referred objects only. They are eventually transformed into dynamic kernels which capture the crucial object-level information and play the role of convolution filters to generate the segmentation masks from feature maps. Object tracking is achieved naturally by linking the corresponding queries across frames. This mechanism greatly simplifies the pipeline, and the end-to-end framework is significantly different from previous methods. Extensive experiments on Ref-Youtube-VOS, Ref-DAVIS17, A2D-Sentences and JHMDB-Sentences show the effectiveness of ReferFormer. On Ref-Youtube-VOS, ReferFormer achieves 55.6 J&F with a ResNet-50 backbone without bells and whistles, exceeding the previous state-of-the-art performance by 8.4 points. In addition, with the strong Video-Swin-Base backbone, ReferFormer achieves the best J&F of 64.9 among all existing methods. Moreover, it reaches 55.0 mAP and 43.7 mAP on A2D-Sentences and JHMDB-Sentences respectively, outperforming previous methods by a large margin. (A minimal sketch of the query-to-kernel mechanism appears after this table.) |
Persistent Identifier | http://hdl.handle.net/10722/315549 |
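The abstract compresses the architecture into a few sentences; the sketch below unpacks the core mechanism it describes: language-conditioned object queries are decoded against per-frame features, projected into dynamic kernels, and applied as 1x1 convolution filters to produce masks, with tracking falling out of shared query indices across frames. This is a hypothetical, minimal illustration under assumed names, shapes, and module choices, not the authors' released implementation.

```python
# Minimal sketch of the "language as queries" idea from the abstract.
# All module names, dimensions, and the pooled sentence embedding are
# illustrative assumptions, not the paper's actual code.
import torch
import torch.nn as nn

class LanguageAsQueries(nn.Module):
    def __init__(self, d_model=256, num_queries=5, num_layers=3):
        super().__init__()
        # A small set of learnable object queries; each will be conditioned
        # on the sentence embedding so every query seeks the referred object.
        self.query_embed = nn.Embedding(num_queries, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        # Each decoded query is projected to the weights of a 1x1 dynamic
        # convolution kernel applied to the per-frame feature map.
        self.kernel_head = nn.Linear(d_model, d_model)

    def forward(self, frame_feats, sent_embed):
        # frame_feats: (T, C, H, W) per-frame features from a visual backbone
        # sent_embed:  (C,) pooled sentence embedding from a text encoder
        T, C, H, W = frame_feats.shape
        # Condition every query on the language expression.
        queries = self.query_embed.weight + sent_embed           # (N, C)
        queries = queries.unsqueeze(0).expand(T, -1, -1)         # (T, N, C)
        memory = frame_feats.flatten(2).transpose(1, 2)          # (T, H*W, C)
        hs = self.decoder(queries, memory)                       # (T, N, C)
        # Decoded queries become dynamic kernels; a 1x1 convolution is just
        # an einsum over the channel dimension of the feature maps.
        kernels = self.kernel_head(hs)                           # (T, N, C)
        masks = torch.einsum('tnc,tchw->tnhw', kernels, frame_feats)
        # Tracking comes for free: query n on every frame refers to the same
        # instance, so masks[:, n] is a tracklet across the whole clip.
        return masks.sigmoid()

# Toy usage: 4 frames, 256-dim features on a 16x16 grid.
model = LanguageAsQueries()
masks = model(torch.randn(4, 256, 16, 16), torch.randn(256))
print(masks.shape)  # torch.Size([4, 5, 16, 16])
```

Note the design choice the abstract highlights: because queries are shared and conditioned on a single expression, no separate association or post-hoc matching step is needed to link the object across frames.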
DC Field | Value | Language |
---|---|---
dc.contributor.author | Wu, J | - |
dc.contributor.author | Jiang, Y | - |
dc.contributor.author | Sun, P | - |
dc.contributor.author | Yuan, Z | - |
dc.contributor.author | Luo, P | - |
dc.date.accessioned | 2022-08-19T08:59:57Z | - |
dc.date.available | 2022-08-19T08:59:57Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (Virtual), New Orleans, Louisiana, USA, 19-24 June, 2022. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, p. 4974-4984 | - |
dc.identifier.uri | http://hdl.handle.net/10722/315549 | - |
dc.description.abstract | Referring video object segmentation (R-VOS) is an emerging cross-modal task that aims to segment the target object referred to by a language expression in all video frames. In this work, we propose a simple and unified framework built upon Transformer, termed ReferFormer. It views the language as queries and directly attends to the most relevant regions in the video frames. Concretely, we introduce a small set of object queries conditioned on the language as the input to the Transformer. In this manner, all the queries are obligated to find the referred objects only. They are eventually transformed into dynamic kernels which capture the crucial object-level information and play the role of convolution filters to generate the segmentation masks from feature maps. Object tracking is achieved naturally by linking the corresponding queries across frames. This mechanism greatly simplifies the pipeline, and the end-to-end framework is significantly different from previous methods. Extensive experiments on Ref-Youtube-VOS, Ref-DAVIS17, A2D-Sentences and JHMDB-Sentences show the effectiveness of ReferFormer. On Ref-Youtube-VOS, ReferFormer achieves 55.6 J&F with a ResNet-50 backbone without bells and whistles, exceeding the previous state-of-the-art performance by 8.4 points. In addition, with the strong Video-Swin-Base backbone, ReferFormer achieves the best J&F of 64.9 among all existing methods. Moreover, it reaches 55.0 mAP and 43.7 mAP on A2D-Sentences and JHMDB-Sentences respectively, outperforming previous methods by a large margin. | -
dc.language | eng | - |
dc.publisher | IEEE Computer Society. | - |
dc.relation.ispartof | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022 | - |
dc.rights | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. Copyright © IEEE Computer Society. | - |
dc.title | Language as queries for referring video object segmentation | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Luo, P: pluo@hku.hk | - |
dc.identifier.authority | Luo, P=rp02575 | - |
dc.identifier.hkuros | 335581 | - |
dc.identifier.spage | 4974 | - |
dc.identifier.epage | 4984 | - |
dc.publisher.place | United States | - |