
Article: Show, Tell and Summarize: Dense Video Captioning Using Visual Cue Aided Sentence Summarization

Title: Show, Tell and Summarize: Dense Video Captioning Using Visual Cue Aided Sentence Summarization
Authors: Zhang, Zhiwang; Xu, Dong; Ouyang, Wanli; Tan, Chuanqi
Keywords: Dense video captioning; hierarchical attention mechanism; sentence summarization
Issue Date: 2020
Citation: IEEE Transactions on Circuits and Systems for Video Technology, 2020, v. 30, n. 9, p. 3130-3139
Abstract: In this work, we propose a division-and-summarization (DaS) framework for dense video captioning. After partitioning each untrimmed long video into multiple event proposals, where each event proposal consists of a set of short video segments, we extract visual features (e.g., C3D features) from each segment and use an existing image/video captioning approach to generate one sentence description for this segment. Considering that the generated sentences contain rich semantic descriptions about the whole event proposal, we formulate the dense video captioning task as a visual cue aided sentence summarization problem and propose a new two-stage Long Short-Term Memory (LSTM) approach equipped with a new hierarchical attention mechanism to summarize all generated sentences into one descriptive sentence with the aid of visual features. Specifically, the first-stage LSTM network takes all semantic words from the generated sentences and the visual features from all segments within one event proposal as the input, and acts as the encoder to effectively summarize both semantic and visual information related to this event proposal. The second-stage LSTM network takes the output from the first-stage LSTM network and the visual features from all video segments within one event proposal as the input, and acts as the decoder to generate one descriptive sentence for this event proposal. Our comprehensive experiments on the ActivityNet Captions dataset demonstrate the effectiveness of our newly proposed DaS framework for dense video captioning.
Persistent Identifier: http://hdl.handle.net/10722/322025
ISSN: 1051-8215
2023 Impact Factor: 8.3
2023 SCImago Journal Rankings: 2.299
ISI Accession Number ID: WOS:000567499300027
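
The abstract above describes a two-stage LSTM architecture: a first-stage encoder that summarizes the words of the per-segment captions together with segment visual features, and a second-stage decoder that generates the final descriptive sentence. As a rough illustration only, the following is a minimal PyTorch-style sketch of such an encoder-decoder; the class name, feature dimensions, mean pooling, teacher forcing, and the single-level attention are illustrative assumptions and do not reproduce the paper's actual hierarchical attention mechanism or implementation.

```python
import torch
import torch.nn as nn


class TwoStageSummarizer(nn.Module):
    """Illustrative sketch (not the authors' code) of a two-stage LSTM summarizer."""

    def __init__(self, vocab_size, word_dim=300, visual_dim=500, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)
        # Stage 1: encoder LSTM over word embeddings concatenated with the
        # visual feature of the segment each word came from.
        self.encoder = nn.LSTM(word_dim + visual_dim, hidden_dim, batch_first=True)
        # Simplified single-level attention over encoder states; the paper's
        # hierarchical attention is more elaborate and this only stands in for it.
        self.attn = nn.Linear(hidden_dim * 2, 1)
        # Stage 2: decoder LSTM cell conditioned on the previous word, the
        # attended encoder context, and a pooled visual feature.
        self.decoder = nn.LSTMCell(word_dim + hidden_dim + visual_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, words, word_visuals, segment_visuals, targets):
        # words:           (B, Tw)     word ids from the per-segment captions
        # word_visuals:    (B, Tw, Dv) visual feature of the segment of each word
        # segment_visuals: (B, S, Dv)  C3D-like features of all segments
        # targets:         (B, Td)     summary word ids (teacher forcing)
        enc_in = torch.cat([self.embed(words), word_visuals], dim=-1)
        enc_states, _ = self.encoder(enc_in)              # (B, Tw, H)
        pooled_visual = segment_visuals.mean(dim=1)       # (B, Dv)

        h = enc_states.mean(dim=1)                        # initial decoder state
        c = torch.zeros_like(h)
        logits = []
        for t in range(targets.size(1)):
            # Attend over encoder states using the current decoder state.
            query = h.unsqueeze(1).expand_as(enc_states)
            scores = self.attn(torch.cat([enc_states, query], dim=-1))
            alpha = torch.softmax(scores, dim=1)
            context = (alpha * enc_states).sum(dim=1)     # (B, H)
            dec_in = torch.cat(
                [self.embed(targets[:, t]), context, pooled_visual], dim=-1)
            h, c = self.decoder(dec_in, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                 # (B, Td, vocab_size)
```

For example, with a 10,000-word vocabulary, words of shape (2, 40), word_visuals of shape (2, 40, 500), segment_visuals of shape (2, 8, 500), and targets of shape (2, 20), the forward pass returns per-step vocabulary logits of shape (2, 20, 10000).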

 

DC Field | Value | Language
dc.contributor.author | Zhang, Zhiwang | -
dc.contributor.author | Xu, Dong | -
dc.contributor.author | Ouyang, Wanli | -
dc.contributor.author | Tan, Chuanqi | -
dc.date.accessioned | 2022-11-03T02:23:05Z | -
dc.date.available | 2022-11-03T02:23:05Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | IEEE Transactions on Circuits and Systems for Video Technology, 2020, v. 30, n. 9, p. 3130-3139 | -
dc.identifier.issn | 1051-8215 | -
dc.identifier.uri | http://hdl.handle.net/10722/322025 | -
dc.description.abstract | In this work, we propose a division-and-summarization (DaS) framework for dense video captioning. After partitioning each untrimmed long video into multiple event proposals, where each event proposal consists of a set of short video segments, we extract visual features (e.g., C3D features) from each segment and use an existing image/video captioning approach to generate one sentence description for this segment. Considering that the generated sentences contain rich semantic descriptions about the whole event proposal, we formulate the dense video captioning task as a visual cue aided sentence summarization problem and propose a new two-stage Long Short-Term Memory (LSTM) approach equipped with a new hierarchical attention mechanism to summarize all generated sentences into one descriptive sentence with the aid of visual features. Specifically, the first-stage LSTM network takes all semantic words from the generated sentences and the visual features from all segments within one event proposal as the input, and acts as the encoder to effectively summarize both semantic and visual information related to this event proposal. The second-stage LSTM network takes the output from the first-stage LSTM network and the visual features from all video segments within one event proposal as the input, and acts as the decoder to generate one descriptive sentence for this event proposal. Our comprehensive experiments on the ActivityNet Captions dataset demonstrate the effectiveness of our newly proposed DaS framework for dense video captioning. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Transactions on Circuits and Systems for Video Technology | -
dc.subject | Dense video captioning | -
dc.subject | hierarchical attention mechanism | -
dc.subject | sentence summarization | -
dc.title | Show, Tell and Summarize: Dense Video Captioning Using Visual Cue Aided Sentence Summarization | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TCSVT.2019.2936526 | -
dc.identifier.scopus | eid_2-s2.0-85091222834 | -
dc.identifier.volume | 30 | -
dc.identifier.issue | 9 | -
dc.identifier.spage | 3130 | -
dc.identifier.epage | 3139 | -
dc.identifier.eissn | 1558-2205 | -
dc.identifier.isi | WOS:000567499300027 | -
