Article: Human-centric Spatio-Temporal Video Grounding With Visual Transformers

Title: Human-centric Spatio-Temporal Video Grounding With Visual Transformers
Authors: Tang, Zongheng; Liao, Yue; Liu, Si; Li, Guanbin; Jin, Xiaojie; Jiang, Hongxu; Yu, Qian; Xu, Dong
Keywords: Annotations; dataset; Electron tubes; Grounding; Location awareness; Proposals; Spatio-Temporal grounding; Task analysis; transformer; Visualization
Issue Date: 2021
Citation: IEEE Transactions on Circuits and Systems for Video Technology, 2021
Abstract: In this work, we introduce a novel task, Human-centric Spatio-Temporal Video Grounding (HC-STVG). Unlike existing referring expression tasks in images or videos, HC-STVG focuses on humans: it aims to localize a spatio-temporal tube of the target person in an untrimmed video based on a given textual description. This task is useful especially for healthcare and security-related applications, where surveillance videos can be extremely long but only a specific person during a specific period is of concern. HC-STVG is a video grounding task that requires both spatial (where) and temporal (when) localization. Unfortunately, existing grounding methods cannot handle this task well. We tackle this task by proposing an effective baseline method named Spatio-Temporal Grounding with Visual Transformers (STGVT), which utilizes Visual Transformers to extract cross-modal representations for video-sentence matching and temporal localization. To facilitate this task, we also contribute an HC-STVG dataset (available at https://github.com/tzhhhh123/HC-STVG) consisting of 5,660 video-sentence pairs on complex multi-person scenes. Specifically, each video lasts 20 seconds and is paired with a natural query sentence of 17.25 words on average. Extensive experiments conducted on this dataset demonstrate that the newly proposed method outperforms the existing baseline methods.
Persistent Identifier: http://hdl.handle.net/10722/321940
ISSN: 1051-8215
2023 Impact Factor: 8.3
2023 SCImago Journal Rankings: 2.299
ISI Accession Number ID: WOS:000936985600013

 

DC Field: Value
dc.contributor.author: Tang, Zongheng
dc.contributor.author: Liao, Yue
dc.contributor.author: Liu, Si
dc.contributor.author: Li, Guanbin
dc.contributor.author: Jin, Xiaojie
dc.contributor.author: Jiang, Hongxu
dc.contributor.author: Yu, Qian
dc.contributor.author: Xu, Dong
dc.date.accessioned: 2022-11-03T02:22:30Z
dc.date.available: 2022-11-03T02:22:30Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Transactions on Circuits and Systems for Video Technology, 2021
dc.identifier.issn: 1051-8215
dc.identifier.uri: http://hdl.handle.net/10722/321940
dc.description.abstract: In this work, we introduce a novel task, Human-centric Spatio-Temporal Video Grounding (HC-STVG). Unlike existing referring expression tasks in images or videos, HC-STVG focuses on humans: it aims to localize a spatio-temporal tube of the target person in an untrimmed video based on a given textual description. This task is useful especially for healthcare and security-related applications, where surveillance videos can be extremely long but only a specific person during a specific period is of concern. HC-STVG is a video grounding task that requires both spatial (where) and temporal (when) localization. Unfortunately, existing grounding methods cannot handle this task well. We tackle this task by proposing an effective baseline method named Spatio-Temporal Grounding with Visual Transformers (STGVT), which utilizes Visual Transformers to extract cross-modal representations for video-sentence matching and temporal localization. To facilitate this task, we also contribute an HC-STVG dataset (available at https://github.com/tzhhhh123/HC-STVG) consisting of 5,660 video-sentence pairs on complex multi-person scenes. Specifically, each video lasts 20 seconds and is paired with a natural query sentence of 17.25 words on average. Extensive experiments conducted on this dataset demonstrate that the newly proposed method outperforms the existing baseline methods.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Circuits and Systems for Video Technology
dc.subject: Annotations
dc.subject: dataset
dc.subject: Electron tubes
dc.subject: Grounding
dc.subject: Location awareness
dc.subject: Proposals
dc.subject: Spatio-Temporal grounding
dc.subject: Task analysis
dc.subject: transformer
dc.subject: Visualization
dc.title: Human-centric Spatio-Temporal Video Grounding With Visual Transformers
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TCSVT.2021.3085907
dc.identifier.scopus: eid_2-s2.0-85107359010
dc.identifier.eissn: 1558-2205
dc.identifier.isi: WOS:000936985600013
