Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1109/TMM.2020.3003592
- Scopus: eid_2-s2.0-85107119077
- WOS: WOS:000655830300025
Article: Dense Video Captioning Using Graph-Based Sentence Summarization
Title | Dense Video Captioning Using Graph-Based Sentence Summarization |
---|---|
Authors | Zhang, Zhiwang; Xu, Dong; Ouyang, Wanli; Zhou, Luping |
Keywords | Dense video captioning; graph convolutional network; sentence summarization |
Issue Date | 2021 |
Citation | IEEE Transactions on Multimedia, 2021, v. 23, p. 1799-1810 |
Abstract | Recently, dense video captioning has made attractive progress in detecting and captioning all events in a long untrimmed video. Although promising results have been achieved, most existing methods do not sufficiently explore the scene evolution within an event temporal proposal for captioning, and therefore perform less satisfactorily when the scenes and objects change over a relatively long proposal. To address this problem, we propose a graph-based partition-and-summarization (GPaS) framework for dense video captioning with two stages. In the 'partition' stage, a whole event proposal is split into short video segments for captioning at a finer level. In the 'summarization' stage, the generated sentences carrying rich description information for each segment are summarized into one sentence to describe the whole event. We particularly focus on the 'summarization' stage, and propose a framework that effectively exploits the relationship between semantic words for summarization. We achieve this goal by treating semantic words as the nodes in a graph and learning their interactions by coupling a Graph Convolutional Network (GCN) and a Long Short-Term Memory (LSTM) network, with the aid of visual cues. Two schemes of GCN-LSTM Interaction (GLI) modules are proposed for the seamless integration of GCN and LSTM. The effectiveness of our approach is demonstrated via an extensive comparison with state-of-the-art methods on two benchmark datasets, ActivityNet Captions and YouCook II. |
Persistent Identifier | http://hdl.handle.net/10722/321939 |
ISSN | 1520-9210 (2023 Impact Factor: 8.4; 2023 SCImago Journal Rankings: 2.260) |
ISI Accession Number ID | WOS:000655830300025 |
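The abstract describes coupling a GCN with an LSTM so that the semantic words gathered from segment-level captions interact as graph nodes while a summary sentence is decoded. The sketch below is a minimal illustration of that general idea under stated assumptions, not the paper's GLI modules: the class name `GraphSentenceSummarizer`, the single dense adjacency, and the one-GCN-layer-per-decoding-step design are hypothetical stand-ins for whatever interaction scheme the paper actually uses.

```python
# Minimal sketch (assumption): one GCN layer over semantic-word nodes feeds an
# LSTM decoder at every step. Illustrates GCN-LSTM coupling in general, not
# the paper's two GLI schemes.
import torch
import torch.nn as nn


class GraphSentenceSummarizer(nn.Module):  # hypothetical class name
    def __init__(self, vocab_size, d_model=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.gcn = nn.Linear(d_model, d_model)        # single GCN weight matrix
        self.lstm = nn.LSTMCell(2 * d_model, d_model)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, word_ids, adj, visual_feat, max_len=20):
        # word_ids:    (N,)   semantic words pooled from the segment captions
        # adj:         (N, N) normalized adjacency over those words
        # visual_feat: (d,)   pooled visual cue for the whole event proposal
        nodes = self.embed(word_ids)                       # (N, d)
        h = visual_feat.new_zeros(visual_feat.size(0))
        c = torch.zeros_like(h)
        tokens = []
        for _ in range(max_len):
            # GCN step: propagate information between word nodes
            nodes = torch.relu(adj @ self.gcn(nodes))      # (N, d)
            # attention-style pooling of the graph, conditioned on LSTM state
            scores = torch.softmax(nodes @ h, dim=0)       # (N,)
            graph_ctx = scores @ nodes                     # (d,)
            # LSTM step: decode the next summary word from graph + visual cues
            h, c = self.lstm(torch.cat([graph_ctx, visual_feat])[None, :],
                             (h[None, :], c[None, :]))
            h, c = h.squeeze(0), c.squeeze(0)
            tokens.append(self.out(h).argmax().item())
        return tokens
```

In the paper itself, the interaction between GCN and LSTM is realized through two dedicated GLI module designs; the loop above only conveys the basic pattern of alternating graph propagation with recurrent decoding under visual guidance.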
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Zhiwang | - |
dc.contributor.author | Xu, Dong | - |
dc.contributor.author | Ouyang, Wanli | - |
dc.contributor.author | Zhou, Luping | - |
dc.date.accessioned | 2022-11-03T02:22:30Z | - |
dc.date.available | 2022-11-03T02:22:30Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | IEEE Transactions on Multimedia, 2021, v. 23, p. 1799-1810 | - |
dc.identifier.issn | 1520-9210 | - |
dc.identifier.uri | http://hdl.handle.net/10722/321939 | - |
dc.description.abstract | Recently, dense video captioning has made attractive progress in detecting and captioning all events in a long untrimmed video. Although promising results have been achieved, most existing methods do not sufficiently explore the scene evolution within an event temporal proposal for captioning, and therefore perform less satisfactorily when the scenes and objects change over a relatively long proposal. To address this problem, we propose a graph-based partition-and-summarization (GPaS) framework for dense video captioning with two stages. In the 'partition' stage, a whole event proposal is split into short video segments for captioning at a finer level. In the 'summarization' stage, the generated sentences carrying rich description information for each segment are summarized into one sentence to describe the whole event. We particularly focus on the 'summarization' stage, and propose a framework that effectively exploits the relationship between semantic words for summarization. We achieve this goal by treating semantic words as the nodes in a graph and learning their interactions by coupling a Graph Convolutional Network (GCN) and a Long Short-Term Memory (LSTM) network, with the aid of visual cues. Two schemes of GCN-LSTM Interaction (GLI) modules are proposed for the seamless integration of GCN and LSTM. The effectiveness of our approach is demonstrated via an extensive comparison with state-of-the-art methods on two benchmark datasets, ActivityNet Captions and YouCook II. | -
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Multimedia | - |
dc.subject | Dense video captioning | - |
dc.subject | graph convolutional network | - |
dc.subject | sentence summarization | - |
dc.title | Dense Video Captioning Using Graph-Based Sentence Summarization | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TMM.2020.3003592 | - |
dc.identifier.scopus | eid_2-s2.0-85107119077 | - |
dc.identifier.volume | 23 | - |
dc.identifier.spage | 1799 | - |
dc.identifier.epage | 1810 | - |
dc.identifier.eissn | 1941-0077 | - |
dc.identifier.isi | WOS:000655830300025 | - |