Conference Paper: DiffMM: Multi-Modal Diffusion Model for Recommendation

Title: DiffMM: Multi-Modal Diffusion Model for Recommendation
Authors: Jiang, Yangqin; Xia, Lianghao; Wei, Wei; Luo, Da; Lin, Kangyi; Huang, Chao
Keywords: diffusion model; multi-modal; recommendation
Issue Date: 2024
Citation: MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia, 2024, p. 7591-7599
Abstract: The rise of online multi-modal sharing platforms like TikTok and YouTube has enabled personalized recommender systems to incorporate multiple modalities (such as visual, textual, and acoustic) into user representations. However, data sparsity remains a key challenge in these systems. Recent research has introduced self-supervised learning techniques to address this limitation, but these methods often rely on simplistic random augmentation or intuitive cross-view information, which can introduce irrelevant noise and fail to accurately align the multi-modal context with user-item interaction modeling. To fill this research gap, we propose DiffMM, a novel multi-modal graph diffusion model for recommendation. The proposed framework integrates a modality-aware graph diffusion model with a cross-modal contrastive learning paradigm to improve modality-aware user representation learning, better aligning multi-modal feature information with collaborative relation modeling. Our approach leverages the generative capabilities of diffusion models to automatically generate a modality-aware user-item graph, enabling the incorporation of useful multi-modal knowledge in modeling user-item interactions. Extensive experiments on three public datasets demonstrate the superiority of DiffMM over various competitive baselines.
Persistent Identifier: http://hdl.handle.net/10722/355877
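The cross-modal contrastive learning paradigm mentioned in the abstract can be illustrated with a minimal InfoNCE-style sketch. This is a generic illustration, not the paper's implementation: the function name, the temperature value, and the toy embeddings are all illustrative assumptions.

```python
import numpy as np

def cross_modal_contrastive_loss(z_a, z_b, temperature=0.2):
    """InfoNCE-style loss aligning two modality-specific user embedding
    matrices (rows = users). Row i of z_a and row i of z_b are treated
    as a positive pair; all other rows serve as negatives."""
    # L2-normalize rows so dot products are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (n_users, n_users) similarity grid
    # Row-wise log-softmax; positives sit on the diagonal
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))

rng = np.random.default_rng(0)
z_vis = rng.normal(size=(4, 8))                  # hypothetical visual-modality user embeddings
z_txt = z_vis + 0.01 * rng.normal(size=(4, 8))   # nearly aligned textual-modality views
loss_aligned = cross_modal_contrastive_loss(z_vis, z_txt)
loss_random = cross_modal_contrastive_loss(z_vis, rng.normal(size=(4, 8)))
```

As expected for a contrastive objective, well-aligned cross-modal views of the same users yield a lower loss than unrelated embeddings.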

 

DC Field | Value | Language
dc.contributor.author | Jiang, Yangqin | -
dc.contributor.author | Xia, Lianghao | -
dc.contributor.author | Wei, Wei | -
dc.contributor.author | Luo, Da | -
dc.contributor.author | Lin, Kangyi | -
dc.contributor.author | Huang, Chao | -
dc.date.accessioned | 2025-05-19T05:46:21Z | -
dc.date.available | 2025-05-19T05:46:21Z | -
dc.date.issued | 2024 | -
dc.identifier.citation | MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia, 2024, p. 7591-7599 | -
dc.identifier.uri | http://hdl.handle.net/10722/355877 | -
dc.description.abstract | (abstract as given above) | -
dc.language | eng | -
dc.relation.ispartof | MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia | -
dc.subject | diffusion model | -
dc.subject | multi-modal | -
dc.subject | recommendation | -
dc.title | DiffMM: Multi-Modal Diffusion Model for Recommendation | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1145/3664647.3681498 | -
dc.identifier.scopus | eid_2-s2.0-85209780147 | -
dc.identifier.spage | 7591 | -
dc.identifier.epage | 7599 | -
