Conference Paper: LAVT: Language-Aware Vision Transformer for Referring Image Segmentation

Title: LAVT: Language-Aware Vision Transformer for Referring Image Segmentation
Authors: Yang, Zhao; Wang, Jiaqi; Tang, Yansong; Chen, Kai; Zhao, Hengshuang; Torr, Philip H.S.
Keywords: grouping and shape analysis; Segmentation; Vision + language
Issue Date: 2022
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2022, v. 2022-June, p. 18134-18144
Abstract: Referring image segmentation is a fundamental vision-language task that aims to segment out an object referred to by a natural language expression from an image. One of the key challenges behind this task is leveraging the referring expression for highlighting relevant positions in the image. A paradigm for tackling this problem is to leverage a powerful vision-language ('cross-modal') decoder to fuse features independently extracted from a vision encoder and a language encoder. Recent methods have made remarkable advancements in this paradigm by exploiting Transformers as cross-modal decoders, concurrent to the Transformer's overwhelming success in many other vision-language tasks. Adopting a different approach in this work, we show that significantly better cross-modal alignments can be achieved through the early fusion of linguistic and visual features in intermediate layers of a vision Transformer encoder network. By conducting cross-modal feature fusion in the visual feature encoding stage, we can leverage the well-proven correlation modeling power of a Transformer encoder for excavating helpful multi-modal context. This way, accurate segmentation results are readily harvested with a light-weight mask predictor. Without bells and whistles, our method surpasses the previous state-of-the-art methods on RefCOCO, RefCOCO+, and G-Ref by large margins.
Persistent Identifier: http://hdl.handle.net/10722/333534
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331
ISI Accession Number ID: WOS:000870783003094
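
The abstract describes achieving cross-modal alignment by fusing language features into intermediate layers of the vision Transformer encoder, rather than in a separate cross-modal decoder. The following is a minimal, hypothetical PyTorch sketch of that early-fusion idea; the module name, tensor shapes, and gating design are illustrative assumptions, not the paper's exact components.

```python
import torch
import torch.nn as nn

class LanguageAwareFusion(nn.Module):
    """Hypothetical sketch of early language-vision fusion between encoder stages.

    Visual features: (B, HW, C_v) flattened tokens from one encoder stage.
    Language features: (B, L, C_l) word embeddings from a language encoder.
    Not the paper's exact module; only meant to illustrate the idea of fusing
    linguistic context into intermediate visual features.
    """

    def __init__(self, vis_dim: int, lang_dim: int, num_heads: int = 8):
        super().__init__()
        # Project word features into the visual feature dimension.
        self.lang_proj = nn.Linear(lang_dim, vis_dim)
        # Cross-attention: visual tokens attend to the projected word features.
        self.cross_attn = nn.MultiheadAttention(vis_dim, num_heads, batch_first=True)
        # Gate controlling how much multi-modal context is added back (assumed design).
        self.gate = nn.Sequential(nn.Linear(vis_dim, vis_dim), nn.Tanh())

    def forward(self, vis_feats: torch.Tensor, lang_feats: torch.Tensor) -> torch.Tensor:
        lang = self.lang_proj(lang_feats)                # (B, L, C_v)
        ctx, _ = self.cross_attn(vis_feats, lang, lang)  # (B, HW, C_v) multi-modal context
        return vis_feats + self.gate(ctx) * ctx          # fused features for the next stage


if __name__ == "__main__":
    fuse = LanguageAwareFusion(vis_dim=256, lang_dim=768)
    vis = torch.randn(2, 64 * 64, 256)   # flattened feature map from one encoder stage
    lang = torch.randn(2, 20, 768)       # word features from a language encoder
    out = fuse(vis, lang)                # same shape as vis; would feed the next encoder stage
    print(out.shape)                     # torch.Size([2, 4096, 256])
```

In this sketch the fusion is inserted between encoder stages, so later stages see language-conditioned visual features; a lightweight mask predictor can then operate directly on the final fused features, which is the point the abstract makes about early fusion.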

 

DC Field | Value | Language
dc.contributor.author | Yang, Zhao | -
dc.contributor.author | Wang, Jiaqi | -
dc.contributor.author | Tang, Yansong | -
dc.contributor.author | Chen, Kai | -
dc.contributor.author | Zhao, Hengshuang | -
dc.contributor.author | Torr, Philip H.S. | -
dc.date.accessioned | 2023-10-06T05:20:15Z | -
dc.date.available | 2023-10-06T05:20:15Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2022, v. 2022-June, p. 18134-18144 | -
dc.identifier.issn | 1063-6919 | -
dc.identifier.uri | http://hdl.handle.net/10722/333534 | -
dc.description.abstract | Referring image segmentation is a fundamental vision-language task that aims to segment out an object referred to by a natural language expression from an image. One of the key challenges behind this task is leveraging the referring expression for highlighting relevant positions in the image. A paradigm for tackling this problem is to leverage a powerful vision-language ('cross-modal') decoder to fuse features independently extracted from a vision encoder and a language encoder. Recent methods have made remarkable advancements in this paradigm by exploiting Transformers as cross-modal decoders, concurrent to the Transformer's overwhelming success in many other vision-language tasks. Adopting a different approach in this work, we show that significantly better cross-modal alignments can be achieved through the early fusion of linguistic and visual features in intermediate layers of a vision Transformer encoder network. By conducting cross-modal feature fusion in the visual feature encoding stage, we can leverage the well-proven correlation modeling power of a Transformer encoder for excavating helpful multi-modal context. This way, accurate segmentation results are readily harvested with a light-weight mask predictor. Without bells and whistles, our method surpasses the previous state-of-the-art methods on RefCOCO, RefCOCO+, and G-Ref by large margins. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | -
dc.subject | grouping and shape analysis | -
dc.subject | Segmentation | -
dc.subject | Vision + language | -
dc.title | LAVT: Language-Aware Vision Transformer for Referring Image Segmentation | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/CVPR52688.2022.01762 | -
dc.identifier.scopus | eid_2-s2.0-85128285110 | -
dc.identifier.volume | 2022-June | -
dc.identifier.spage | 18134 | -
dc.identifier.epage | 18144 | -
dc.identifier.isi | WOS:000870783003094 | -
