
Conference Paper: Cops-Ref: A New Dataset and Task on Compositional Referring Expression Comprehension

Title: Cops-Ref: A New Dataset and Task on Compositional Referring Expression Comprehension
Authors: Chen, Z; Wang, P; Ma, L; Wong, KKY; Wu, Q
Issue Date: 2020
Publisher: IEEE Computer Society
Citation: IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2020), Seattle, USA, 14-19 June 2020, p. 10086-10095
Abstract: Referring expression comprehension (REF) aims at identifying a particular object in a scene by a natural language expression. It requires joint reasoning over the textual and visual domains to solve the problem. Some popular referring expression datasets, however, fail to provide an ideal test bed for evaluating the reasoning ability of the models, mainly because 1) their expressions typically describe only some simple distinctive properties of the object and 2) their images contain limited distracting information. To bridge the gap, we propose a new dataset for visual reasoning in context of referring expression comprehension with two main features. First, we design a novel expression engine rendering various reasoning logics that can be flexibly combined with rich visual properties to generate expressions with varying compositionality. Second, to better exploit the full reasoning chain embodied in an expression, we propose a new test setting by adding additional distracting images containing objects sharing similar properties with the referent, thus minimising the success rate of reasoning-free cross-domain alignment. We evaluate several state-of-the-art REF models, but find none of them can achieve promising performance. A proposed modular hard mining strategy performs the best but still leaves substantial room for improvement. We hope this new dataset and task can serve as a benchmark for deeper visual reasoning analysis and foster the research on referring expression comprehension.
Description: Session: Poster 3.1 — Recognition (Detection, Categorization); Video Analysis and Understanding; Vision + Language - Poster no. 39; Paper ID: 206
CVPR 2020 was held virtually due to COVID-19
Persistent Identifier: http://hdl.handle.net/10722/281712

 

DC Field | Value | Language
dc.contributor.author | Chen, Z | -
dc.contributor.author | Wang, P | -
dc.contributor.author | Ma, L | -
dc.contributor.author | Wong, KKY | -
dc.contributor.author | Wu, Q | -
dc.date.accessioned | 2020-03-22T04:18:37Z | -
dc.date.available | 2020-03-22T04:18:37Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2020), Seattle, USA, 14-19 June 2020, p. 10086-10095 | -
dc.identifier.uri | http://hdl.handle.net/10722/281712 | -
dc.description | Session: Poster 3.1 — Recognition (Detection, Categorization); Video Analysis and Understanding; Vision + Language - Poster no. 39; Paper ID: 206 | -
dc.description | CVPR 2020 was held virtually due to COVID-19 | -
dc.description.abstract | Referring expression comprehension (REF) aims at identifying a particular object in a scene by a natural language expression. It requires joint reasoning over the textual and visual domains to solve the problem. Some popular referring expression datasets, however, fail to provide an ideal test bed for evaluating the reasoning ability of the models, mainly because 1) their expressions typically describe only some simple distinctive properties of the object and 2) their images contain limited distracting information. To bridge the gap, we propose a new dataset for visual reasoning in context of referring expression comprehension with two main features. First, we design a novel expression engine rendering various reasoning logics that can be flexibly combined with rich visual properties to generate expressions with varying compositionality. Second, to better exploit the full reasoning chain embodied in an expression, we propose a new test setting by adding additional distracting images containing objects sharing similar properties with the referent, thus minimising the success rate of reasoning-free cross-domain alignment. We evaluate several state-of-the-art REF models, but find none of them can achieve promising performance. A proposed modular hard mining strategy performs the best but still leaves substantial room for improvement. We hope this new dataset and task can serve as a benchmark for deeper visual reasoning analysis and foster the research on referring expression comprehension. | -
dc.language | eng | -
dc.publisher | IEEE Computer Society | -
dc.relation.ispartof | IEEE International Conference on Computer Vision and Pattern Recognition | -
dc.rights | IEEE Conference on Computer Vision and Pattern Recognition. Proceedings. Copyright © IEEE Computer Society. | -
dc.rights | ©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | -
dc.title | Cops-Ref: A New Dataset and Task on Compositional Referring Expression Comprehension | -
dc.type | Conference_Paper | -
dc.identifier.email | Wong, KKY: kykwong@cs.hku.hk | -
dc.identifier.authority | Wong, KKY=rp01393 | -
dc.description.nature | postprint | -
dc.identifier.hkuros | 309424 | -
dc.identifier.hkuros | 310869 | -
dc.identifier.spage | 10086 | -
dc.identifier.epage | 10095 | -
dc.publisher.place | United States | -
