Article: Multimodal sentiment analysis with unimodal label generation and modality decomposition

Title: Multimodal sentiment analysis with unimodal label generation and modality decomposition
Authors: Zhu, Linan; Zhao, Hongyan; Zhu, Zhechao; Zhang, Chenwei; Kong, Xiangjie
Keywords: Modality decomposition; Multimodal sentiment analysis; Unimodal label generation
Issue Date: 1-Apr-2025
Publisher: Elsevier
Citation: Information Fusion, 2025, v. 116
Abstract: Multimodal sentiment analysis aims to combine information from different modalities to enhance the understanding of emotions and achieve accurate prediction. However, existing methods face issues of information redundancy and modality heterogeneity during the fusion process, and common multimodal sentiment analysis datasets lack unimodal labels. To address these issues, this paper proposes a multimodal sentiment analysis approach based on unimodal label generation and modality decomposition (ULMD). This method employs a multi-task learning framework, dividing the multimodal sentiment analysis task into a multimodal task and three unimodal tasks. Additionally, a modality representation separator is introduced to decompose modality representations into modality-invariant representations and modality-specific representations. This approach explores the fusion between modalities and generates unimodal labels to enhance the performance of the multimodal sentiment analysis task. Extensive experiments on two public benchmark datasets demonstrate the effectiveness of this method.
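The decomposition idea in the abstract — splitting each modality's representation into a modality-invariant part (a shared projection space for fusion) and a modality-specific part — can be sketched as follows. This is a minimal illustration with random linear maps, not the authors' implementation: the names `W_inv`, `W_spec`, and `decompose` are hypothetical, and a real ULMD-style system would use trained neural encoders with similarity and reconstruction losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def decompose(x, W_inv, W_spec):
    """Split a modality representation x into a modality-invariant part
    (shared projection, common to all modalities) and a modality-specific
    part (private projection for this modality)."""
    return W_inv @ x, W_spec @ x

d_in, d_out = 8, 4
# One shared (modality-invariant) projection used by every modality,
# plus one private (modality-specific) projection per modality.
W_inv = rng.standard_normal((d_out, d_in))
W_spec = {m: rng.standard_normal((d_out, d_in)) for m in ("text", "audio", "video")}

# Dummy per-modality feature vectors standing in for encoder outputs.
features = {m: rng.standard_normal(d_in) for m in W_spec}

invariant, specific = {}, {}
for m, x in features.items():
    invariant[m], specific[m] = decompose(x, W_inv, W_spec[m])

# Fuse: invariant parts live in one aligned space and can be combined
# directly; specific parts are appended to retain per-modality detail.
fused = np.concatenate([invariant[m] for m in features] +
                       [specific[m] for m in features])
print(fused.shape)  # (24,)
```

In the paper's multi-task setup, a head on `fused` would predict the multimodal sentiment label, while three unimodal heads (one per modality) would be trained against generated unimodal labels; the sketch only shows the representation split and fusion step.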
Persistent Identifier: http://hdl.handle.net/10722/353717
ISSN: 1566-2535
2023 Impact Factor: 14.7
2023 SCImago Journal Rankings: 5.647
ISI Accession Number ID: WOS:001364773800001

 

DC Field: Value
dc.contributor.author: Zhu, Linan
dc.contributor.author: Zhao, Hongyan
dc.contributor.author: Zhu, Zhechao
dc.contributor.author: Zhang, Chenwei
dc.contributor.author: Kong, Xiangjie
dc.date.accessioned: 2025-01-23T00:35:41Z
dc.date.available: 2025-01-23T00:35:41Z
dc.date.issued: 2025-04-01
dc.identifier.citation: Information Fusion, 2025, v. 116
dc.identifier.issn: 1566-2535
dc.identifier.uri: http://hdl.handle.net/10722/353717
dc.description.abstract: Multimodal sentiment analysis aims to combine information from different modalities to enhance the understanding of emotions and achieve accurate prediction. However, existing methods face issues of information redundancy and modality heterogeneity during the fusion process, and common multimodal sentiment analysis datasets lack unimodal labels. To address these issues, this paper proposes a multimodal sentiment analysis approach based on unimodal label generation and modality decomposition (ULMD). This method employs a multi-task learning framework, dividing the multimodal sentiment analysis task into a multimodal task and three unimodal tasks. Additionally, a modality representation separator is introduced to decompose modality representations into modality-invariant representations and modality-specific representations. This approach explores the fusion between modalities and generates unimodal labels to enhance the performance of the multimodal sentiment analysis task. Extensive experiments on two public benchmark datasets demonstrate the effectiveness of this method.
dc.language: eng
dc.publisher: Elsevier
dc.relation.ispartof: Information Fusion
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Modality decomposition
dc.subject: Multimodal sentiment analysis
dc.subject: Unimodal label generation
dc.title: Multimodal sentiment analysis with unimodal label generation and modality decomposition
dc.type: Article
dc.identifier.doi: 10.1016/j.inffus.2024.102787
dc.identifier.scopus: eid_2-s2.0-85209926549
dc.identifier.volume: 116
dc.identifier.eissn: 1872-6305
dc.identifier.isi: WOS:001364773800001
dc.identifier.issnl: 1566-2535
