File Download

There are no files associated with this item.

Links for fulltext (may require a subscription)

Article: Dense Video Captioning Using Graph-Based Sentence Summarization

Title: Dense Video Captioning Using Graph-Based Sentence Summarization
Authors: Zhang, Zhiwang; Xu, Dong; Ouyang, Wanli; Zhou, Luping
Keywords: Dense video captioning; graph convolutional network; sentence summarization
Issue Date: 2021
Citation: IEEE Transactions on Multimedia, 2021, v. 23, p. 1799-1810
Abstract: Dense video captioning has recently made encouraging progress in detecting and captioning all events in a long untrimmed video. Although promising results have been achieved, most existing methods do not sufficiently explore the scene evolution within an event temporal proposal, and therefore perform less satisfactorily when the scenes and objects change over a relatively long proposal. To address this problem, we propose a graph-based partition-and-summarization (GPaS) framework for dense video captioning that works in two stages. In the 'partition' stage, a whole event proposal is split into short video segments that are captioned at a finer level. In the 'summarization' stage, the generated sentences, which carry rich descriptive information for each segment, are summarized into one sentence describing the whole event. We particularly focus on the 'summarization' stage and propose a framework that effectively exploits the relationships between semantic words for summarization. We achieve this by treating semantic words as the nodes of a graph and learning their interactions by coupling a Graph Convolutional Network (GCN) with a Long Short-Term Memory (LSTM) network, with the aid of visual cues. Two schemes of GCN-LSTM Interaction (GLI) modules are proposed for the seamless integration of GCN and LSTM. The effectiveness of our approach is demonstrated through extensive comparisons with state-of-the-art methods on two benchmark datasets, ActivityNet Captions and YouCook II.
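
For illustration only, the following minimal PyTorch sketch shows one plausible way to couple a graph convolution with an LSTM over semantic word nodes, in the spirit of the GLI modules the abstract describes. It is not the authors' implementation: the class name, dimensions, adjacency construction, mean aggregation, and node ordering are all assumptions, and the visual cues mentioned in the abstract are omitted to keep the example self-contained.

    # Hypothetical sketch, NOT the paper's GLI module.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GCNLSTMSketch(nn.Module):
        def __init__(self, word_dim=300, hidden_dim=512):
            super().__init__()
            self.gcn_weight = nn.Linear(word_dim, hidden_dim)  # one graph-conv layer
            self.lstm = nn.LSTMCell(hidden_dim, hidden_dim)    # recurrent summarizer

        def forward(self, word_feats, adj):
            # word_feats: (num_nodes, word_dim) embeddings of semantic words
            # adj: (num_nodes, num_nodes) adjacency among word nodes (assumed given)
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            msg = (adj @ word_feats) / deg                     # mean aggregation over neighbors
            node_states = F.relu(self.gcn_weight(msg))         # GCN-refined node features
            # Feed nodes to an LSTM cell one by one, letting its hidden state
            # accumulate a sentence-level summary of the graph.
            h = torch.zeros(1, node_states.size(1))
            c = torch.zeros_like(h)
            for node in node_states:
                h, c = self.lstm(node.unsqueeze(0), (h, c))
            return h  # summary vector for decoding one sentence

    # Example usage with hypothetical sizes:
    model = GCNLSTMSketch()
    words = torch.randn(7, 300)                # 7 semantic word nodes
    adj = (torch.rand(7, 7) > 0.5).float()     # illustrative random adjacency
    summary = model(words, adj)                # (1, 512) summary representation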
Persistent Identifier: http://hdl.handle.net/10722/321939
ISSN: 1520-9210
2023 Impact Factor: 8.4
2023 SCImago Journal Rankings: 2.260
ISI Accession Number ID: WOS:000655830300025

 

DC Field | Value | Language
dc.contributor.author | Zhang, Zhiwang | -
dc.contributor.author | Xu, Dong | -
dc.contributor.author | Ouyang, Wanli | -
dc.contributor.author | Zhou, Luping | -
dc.date.accessioned | 2022-11-03T02:22:30Z | -
dc.date.available | 2022-11-03T02:22:30Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | IEEE Transactions on Multimedia, 2021, v. 23, p. 1799-1810 | -
dc.identifier.issn | 1520-9210 | -
dc.identifier.uri | http://hdl.handle.net/10722/321939 | -
dc.description.abstract | Dense video captioning has recently made encouraging progress in detecting and captioning all events in a long untrimmed video. Although promising results have been achieved, most existing methods do not sufficiently explore the scene evolution within an event temporal proposal, and therefore perform less satisfactorily when the scenes and objects change over a relatively long proposal. To address this problem, we propose a graph-based partition-and-summarization (GPaS) framework for dense video captioning that works in two stages. In the 'partition' stage, a whole event proposal is split into short video segments that are captioned at a finer level. In the 'summarization' stage, the generated sentences, which carry rich descriptive information for each segment, are summarized into one sentence describing the whole event. We particularly focus on the 'summarization' stage and propose a framework that effectively exploits the relationships between semantic words for summarization. We achieve this by treating semantic words as the nodes of a graph and learning their interactions by coupling a Graph Convolutional Network (GCN) with a Long Short-Term Memory (LSTM) network, with the aid of visual cues. Two schemes of GCN-LSTM Interaction (GLI) modules are proposed for the seamless integration of GCN and LSTM. The effectiveness of our approach is demonstrated through extensive comparisons with state-of-the-art methods on two benchmark datasets, ActivityNet Captions and YouCook II. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Transactions on Multimedia | -
dc.subject | Dense video captioning | -
dc.subject | graph convolutional network | -
dc.subject | sentence summarization | -
dc.title | Dense Video Captioning Using Graph-Based Sentence Summarization | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TMM.2020.3003592 | -
dc.identifier.scopus | eid_2-s2.0-85107119077 | -
dc.identifier.volume | 23 | -
dc.identifier.spage | 1799 | -
dc.identifier.epage | 1810 | -
dc.identifier.eissn | 1941-0077 | -
dc.identifier.isi | WOS:000655830300025 | -
