Links for fulltext (may require subscription):
- Publisher Website: https://doi.org/10.1145/3539618.3591692
- Scopus: eid_2-s2.0-85168655400
- WOS: WOS:001118084000035
Conference Paper: Graph Masked Autoencoder for Sequential Recommendation
| Title | Graph Masked Autoencoder for Sequential Recommendation |
|---|---|
| Authors | Ye, Yaowen; Xia, Lianghao; Huang, Chao |
| Keywords | Graph Neural Networks; Masked Autoencoder; Self-Supervised Learning; Sequential Recommendation |
| Issue Date | 2023 |
| Citation | SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2023, p. 321-330 |
| Abstract | While some powerful neural network architectures (e.g., Transformer, Graph Neural Networks) have achieved improved performance in sequential recommendation with high-order item dependency modeling, they may suffer from poor representation capability in label scarcity scenarios. To address the issue of insufficient labels, Contrastive Learning (CL) has attracted much attention in recent methods to perform data augmentation through embedding contrasting for self-supervision. However, due to the hand-crafted property of their contrastive view generation strategies, existing CL-enhanced models i) can hardly yield consistent performance on diverse sequential recommendation tasks; ii) may not be immune to user behavior data noise. In light of this, we propose a simple yet effective Graph Masked AutoEncoder-enhanced sequential Recommender system (MAERec) that adaptively and dynamically distills global item transitional information for self-supervised augmentation. It naturally avoids the above issue of heavy reliance on constructing high-quality embedding contrastive views. Instead, an adaptive data reconstruction paradigm is designed to be integrated with the long-range item dependency modeling, for informative augmentation in sequential recommendation. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art baseline models and can learn more accurate representations against data noise and sparsity. Our implemented model code is available at https://github.com/HKUDS/MAERec. |
| Persistent Identifier | http://hdl.handle.net/10722/355947 |
| ISI Accession Number ID | WOS:001118084000035 |
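The abstract describes a graph masked-autoencoder pipeline: build a global item-transition graph from user sequences, mask part of it, encode the masked graph, and use reconstruction of the masked structure as a self-supervised signal. Below is a minimal NumPy sketch of that general idea only; the toy sequences, uniform random edge masking, and single mean-propagation layer are illustrative assumptions and do not reproduce MAERec's adaptive masking or actual architecture (see the linked repository for the authors' implementation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy interaction sequences of item IDs (assumed data, for illustration).
sequences = [[0, 1, 2, 3], [1, 2, 4], [0, 2, 4, 3]]
n_items = 5

# 1) Build a global item-transition graph from consecutive items.
adj = np.zeros((n_items, n_items))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        adj[a, b] = adj[b, a] = 1.0

# 2) Mask a subset of edges. MAERec chooses edges adaptively; uniform
#    random masking here is a deliberate simplification.
edges = np.argwhere(np.triu(adj) > 0)
mask_idx = rng.choice(len(edges), size=max(1, len(edges) // 4), replace=False)
masked = edges[mask_idx]
adj_masked = adj.copy()
for a, b in masked:
    adj_masked[a, b] = adj_masked[b, a] = 0.0

# 3) Encode the masked graph: one round of mean-neighbor propagation
#    over random initial embeddings (a stand-in for a GNN encoder).
emb = rng.normal(size=(n_items, 8))
deg = adj_masked.sum(axis=1, keepdims=True) + 1.0
h = (adj_masked @ emb + emb) / deg

# 4) Decode: score masked edges from node embeddings; the reconstruction
#    loss is the self-supervised augmentation signal.
scores = 1.0 / (1.0 + np.exp(-(h @ h.T)))
loss = -np.mean([np.log(scores[a, b] + 1e-9) for a, b in masked])
print(float(loss))
```

In the paper this reconstruction objective is trained jointly with the sequential recommendation loss, so the encoder learns item representations that are robust to noisy and sparse interaction data.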
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Ye, Yaowen | - |
| dc.contributor.author | Xia, Lianghao | - |
| dc.contributor.author | Huang, Chao | - |
| dc.date.accessioned | 2025-05-19T05:46:50Z | - |
| dc.date.available | 2025-05-19T05:46:50Z | - |
| dc.date.issued | 2023 | - |
| dc.identifier.citation | SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2023, p. 321-330 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/355947 | - |
| dc.description.abstract | While some powerful neural network architectures (e.g., Transformer, Graph Neural Networks) have achieved improved performance in sequential recommendation with high-order item dependency modeling, they may suffer from poor representation capability in label scarcity scenarios. To address the issue of insufficient labels, Contrastive Learning (CL) has attracted much attention in recent methods to perform data augmentation through embedding contrasting for self-supervision. However, due to the hand-crafted property of their contrastive view generation strategies, existing CL-enhanced models i) can hardly yield consistent performance on diverse sequential recommendation tasks; ii) may not be immune to user behavior data noise. In light of this, we propose a simple yet effective Graph Masked AutoEncoder-enhanced sequential Recommender system (MAERec) that adaptively and dynamically distills global item transitional information for self-supervised augmentation. It naturally avoids the above issue of heavy reliance on constructing high-quality embedding contrastive views. Instead, an adaptive data reconstruction paradigm is designed to be integrated with the long-range item dependency modeling, for informative augmentation in sequential recommendation. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art baseline models and can learn more accurate representations against data noise and sparsity. Our implemented model code is available at https://github.com/HKUDS/MAERec. | - |
| dc.language | eng | - |
| dc.relation.ispartof | SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval | - |
| dc.subject | Graph Neural Networks | - |
| dc.subject | Masked Autoencoder | - |
| dc.subject | Self-Supervised Learning | - |
| dc.subject | Sequential Recommendation | - |
| dc.title | Graph Masked Autoencoder for Sequential Recommendation | - |
| dc.type | Conference_Paper | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1145/3539618.3591692 | - |
| dc.identifier.scopus | eid_2-s2.0-85168655400 | - |
| dc.identifier.spage | 321 | - |
| dc.identifier.epage | 330 | - |
| dc.identifier.isi | WOS:001118084000035 | - |
