
Conference Paper: Scene-Intuitive Agent for Remote Embodied Visual Grounding

Title: Scene-Intuitive Agent for Remote Embodied Visual Grounding
Authors: Lin, X; Li, G; Yu, Y
Issue Date: 2021
Citation: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Conference, 19-25 June 2021
Abstract: Humans learn from life events to form intuitions for understanding visual environments and language. Suppose you are given the high-level instruction 'Go to the bathroom in the master bedroom and replace the blue towel on the left wall'; how would you carry out the task? Intuitively, you would comprehend the semantics of the instruction to form a mental picture of what the bathroom and the blue towel look like, and then navigate to the target location by continually matching that mental picture against the current scene. In this paper, we present an agent that mimics such human behavior. Specifically, we focus on the Remote Embodied Visual Referring Expression in Real Indoor Environments (REVERIE) task, where an agent must correctly localize a remote target object specified by a concise high-level natural language instruction, and we propose a two-stage training pipeline. In the first stage, we pretrain the agent with two cross-modal alignment sub-tasks: the Scene Grounding task, from which the agent learns where to stop, and the Object Grounding task, from which it learns what to attend to. Then, to generate action sequences, we propose a memory-augmented attentive action decoder that smoothly fuses the pretrained vision and language representations with the agent's past memory experiences. Without bells and whistles, experimental results show that our method significantly outperforms the previous state of the art (SOTA), demonstrating its effectiveness.
Description: Paper Session Five: Paper ID 2590
Persistent Identifier: http://hdl.handle.net/10722/301299
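
The abstract above describes a memory-augmented attentive action decoder that fuses pretrained vision and language representations with the agent's past experiences before scoring actions. The record includes no code, so the following is a minimal, hypothetical PyTorch sketch of such a decoder; every module name, dimension, and the exact fusion scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a memory-augmented attentive action decoder,
# loosely following the abstract's description. All names, dimensions,
# and the fusion scheme are assumptions, not the paper's actual code.
import torch
import torch.nn as nn


class MemoryAugmentedActionDecoder(nn.Module):
    """Fuses language and vision features with a memory of past steps,
    producing a state vector used to score candidate navigation actions."""

    def __init__(self, hidden_dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Cross-modal attention: the current visual observation attends
        # to the instruction tokens (assumed pretrained representations).
        self.lang_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        # Memory attention: the grounded state attends to stored past states.
        self.mem_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, vis_feat, lang_feats, memory):
        # vis_feat:   (B, 1, D)  current scene feature
        # lang_feats: (B, L, D)  instruction token features
        # memory:     (B, T, D)  features from previously visited steps
        grounded, _ = self.lang_attn(vis_feat, lang_feats, lang_feats)
        recalled, _ = self.mem_attn(grounded, memory, memory)
        state = torch.tanh(self.fuse(torch.cat([grounded, recalled], dim=-1)))
        return state  # (B, 1, D); dot with candidate features to score actions


if __name__ == "__main__":
    B, L, T, D = 2, 12, 5, 512
    decoder = MemoryAugmentedActionDecoder(hidden_dim=D)
    state = decoder(torch.randn(B, 1, D), torch.randn(B, L, D), torch.randn(B, T, D))
    cand = torch.randn(B, 6, D)                        # 6 candidate actions
    logits = torch.einsum("bod,bkd->bk", state, cand)  # per-action scores
    print(logits.shape)  # torch.Size([2, 6])
```

The sketch grounds the current observation in the instruction first and only then consults memory, which is one plausible reading of the abstract's "fuse the pretrained vision and language representations with the agent's past memory experiences"; the actual ordering and architecture may differ in the paper.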

 

DC Field | Value | Language
dc.contributor.author | Lin, X | -
dc.contributor.author | Li, G | -
dc.contributor.author | Yu, Y | -
dc.date.accessioned | 2021-07-27T08:09:04Z | -
dc.date.available | 2021-07-27T08:09:04Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Conference, 19-25 June 2021 | -
dc.identifier.uri | http://hdl.handle.net/10722/301299 | -
dc.description | Paper Session Five: Paper ID 2590 | -
dc.description.abstract | Humans learn from life events to form intuitions for understanding visual environments and language. Suppose you are given the high-level instruction 'Go to the bathroom in the master bedroom and replace the blue towel on the left wall'; how would you carry out the task? Intuitively, you would comprehend the semantics of the instruction to form a mental picture of what the bathroom and the blue towel look like, and then navigate to the target location by continually matching that mental picture against the current scene. In this paper, we present an agent that mimics such human behavior. Specifically, we focus on the Remote Embodied Visual Referring Expression in Real Indoor Environments (REVERIE) task, where an agent must correctly localize a remote target object specified by a concise high-level natural language instruction, and we propose a two-stage training pipeline. In the first stage, we pretrain the agent with two cross-modal alignment sub-tasks: the Scene Grounding task, from which the agent learns where to stop, and the Object Grounding task, from which it learns what to attend to. Then, to generate action sequences, we propose a memory-augmented attentive action decoder that smoothly fuses the pretrained vision and language representations with the agent's past memory experiences. Without bells and whistles, experimental results show that our method significantly outperforms the previous state of the art (SOTA), demonstrating its effectiveness. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | -
dc.title | Scene-Intuitive Agent for Remote Embodied Visual Grounding | -
dc.type | Conference_Paper | -
dc.identifier.email | Yu, Y: yzyu@cs.hku.hk | -
dc.identifier.authority | Yu, Y=rp01415 | -
dc.identifier.hkuros | 323542 | -
