Links for fulltext (may require subscription):
- Publisher (DOI): 10.1016/j.inffus.2024.102787
- Scopus: eid_2-s2.0-85209926549
- Web of Science: WOS:001364773800001
Article: Multimodal sentiment analysis with unimodal label generation and modality decomposition
| Title | Multimodal sentiment analysis with unimodal label generation and modality decomposition |
|---|---|
| Authors | Zhu, Linan; Zhao, Hongyan; Zhu, Zhechao; Zhang, Chenwei; Kong, Xiangjie |
| Keywords | Modality decomposition; Multimodal sentiment analysis; Unimodal label generation |
| Issue Date | 1-Apr-2025 |
| Publisher | Elsevier |
| Citation | Information Fusion, 2025, v. 116 |
| Abstract | Multimodal sentiment analysis aims to combine information from different modalities to enhance the understanding of emotions and achieve accurate prediction. However, existing methods face issues of information redundancy and modality heterogeneity during the fusion process, and common multimodal sentiment analysis datasets lack unimodal labels. To address these issues, this paper proposes a multimodal sentiment analysis approach based on unimodal label generation and modality decomposition (ULMD). This method employs a multi-task learning framework, dividing the multimodal sentiment analysis task into a multimodal task and three unimodal tasks. Additionally, a modality representation separator is introduced to decompose modality representations into modality-invariant representations and modality-specific representations. This approach explores the fusion between modalities and generates unimodal labels to enhance the performance of the multimodal sentiment analysis task. Extensive experiments on two public benchmark datasets demonstrate the effectiveness of this method. |
| Persistent Identifier | http://hdl.handle.net/10722/353717 |
| ISSN | 1566-2535 |
| ISI Accession Number ID | WOS:001364773800001 |
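The abstract describes a modality representation separator that decomposes each modality's representation into a modality-invariant part and a modality-specific part before fusion. As a rough structural sketch of that idea — not the paper's implementation; the projection matrices, mean-pooling fusion, and dimensions here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy feature dimension

# Hypothetical projections standing in for the separator's learned weights.
W_inv = rng.normal(size=(d, d)) / np.sqrt(d)   # shared-subspace projection
W_spec = rng.normal(size=(d, d)) / np.sqrt(d)  # private-subspace projection

def separate(h):
    """Decompose one modality's representation into a modality-invariant
    part (shared across modalities) and a modality-specific part."""
    return h @ W_inv, h @ W_spec

# Toy representations for the three modalities (text, audio, vision).
modalities = {m: rng.normal(size=d) for m in ("text", "audio", "vision")}
parts = {m: separate(h) for m, h in modalities.items()}

# Fusion sketch: pool the invariant parts, concatenate the specific parts.
invariant = np.mean([inv for inv, _ in parts.values()], axis=0)
fused = np.concatenate([invariant] + [spec for _, spec in parts.values()])
```

Pooling the invariant parts captures what the modalities share, while concatenating the specific parts preserves each modality's private information, which is one plausible way such a decomposition can reduce the redundancy and heterogeneity the abstract mentions.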
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Zhu, Linan | - |
| dc.contributor.author | Zhao, Hongyan | - |
| dc.contributor.author | Zhu, Zhechao | - |
| dc.contributor.author | Zhang, Chenwei | - |
| dc.contributor.author | Kong, Xiangjie | - |
| dc.date.accessioned | 2025-01-23T00:35:41Z | - |
| dc.date.available | 2025-01-23T00:35:41Z | - |
| dc.date.issued | 2025-04-01 | - |
| dc.identifier.citation | Information Fusion, 2025, v. 116 | - |
| dc.identifier.issn | 1566-2535 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/353717 | - |
| dc.description.abstract | Multimodal sentiment analysis aims to combine information from different modalities to enhance the understanding of emotions and achieve accurate prediction. However, existing methods face issues of information redundancy and modality heterogeneity during the fusion process, and common multimodal sentiment analysis datasets lack unimodal labels. To address these issues, this paper proposes a multimodal sentiment analysis approach based on unimodal label generation and modality decomposition (ULMD). This method employs a multi-task learning framework, dividing the multimodal sentiment analysis task into a multimodal task and three unimodal tasks. Additionally, a modality representation separator is introduced to decompose modality representations into modality-invariant representations and modality-specific representations. This approach explores the fusion between modalities and generates unimodal labels to enhance the performance of the multimodal sentiment analysis task. Extensive experiments on two public benchmark datasets demonstrate the effectiveness of this method. | - |
| dc.language | eng | - |
| dc.publisher | Elsevier | - |
| dc.relation.ispartof | Information Fusion | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | Modality decomposition | - |
| dc.subject | Multimodal sentiment analysis | - |
| dc.subject | Unimodal label generation | - |
| dc.title | Multimodal sentiment analysis with unimodal label generation and modality decomposition | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1016/j.inffus.2024.102787 | - |
| dc.identifier.scopus | eid_2-s2.0-85209926549 | - |
| dc.identifier.volume | 116 | - |
| dc.identifier.eissn | 1872-6305 | - |
| dc.identifier.isi | WOS:001364773800001 | - |
| dc.identifier.issnl | 1566-2535 | - |
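The abstract also frames ULMD as multi-task learning: one multimodal prediction task plus three unimodal tasks supervised by generated unimodal labels. A minimal sketch of such a combined objective, assuming a scalar regression setting and a hypothetical trade-off weight `alpha` (both assumptions, not taken from the paper):

```python
def multitask_loss(pred_m, y_m, unimodal_preds, unimodal_labels, alpha=0.1):
    """Total objective: multimodal loss plus weighted unimodal losses.

    unimodal_labels would come from the paper's label-generation step;
    here they are plain inputs. alpha is a hypothetical trade-off weight.
    """
    mse = lambda a, b: (a - b) ** 2  # squared error for a single scalar
    loss = mse(pred_m, y_m)
    loss += alpha * sum(mse(p, y) for p, y in zip(unimodal_preds, unimodal_labels))
    return loss

# Example: perfect multimodal and unimodal predictions give zero loss.
print(multitask_loss(1.0, 1.0, [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]))  # → 0.0
```

The unimodal terms act as auxiliary supervision: even without human-annotated unimodal labels in the benchmark datasets, generated labels let each modality branch receive its own training signal.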
