Conference Paper: Graph Transformer for Recommendation
| Title | Graph Transformer for Recommendation |
|---|---|
| Authors | Li, Chaoliu; Xia, Lianghao; Ren, Xubin; Ye, Yaowen; Xu, Yong; Huang, Chao |
| Keywords | Graph Transformer; Masked Autoencoder; Recommendation |
| Issue Date | 2023 |
| Citation | SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2023, p. 1680-1689 |
| Abstract | This paper presents a novel approach to representation learning in recommender systems by integrating generative self-supervised learning with a graph transformer architecture. We highlight the importance of high-quality data augmentation with relevant self-supervised pretext tasks for improving performance. Towards this end, we propose a new approach that automates the self-supervision augmentation process through rationale-aware generative SSL that distills informative user-item interaction patterns. The proposed recommender with Graph TransFormer (GFormer) offers parameterized collaborative rationale discovery for selective augmentation while preserving global-aware user-item relationships. In GFormer, we allow the rationale-aware SSL to inspire graph collaborative filtering with task-adaptive invariant rationalization in the graph transformer. Experimental results reveal that GFormer consistently improves performance over baselines on different datasets. Several in-depth experiments further investigate the invariant rationale-aware augmentation from various aspects. The source code for this work is publicly available at: https://github.com/HKUDS/GFormer. |
| Persistent Identifier | http://hdl.handle.net/10722/355945 |
| ISI Accession Number ID | WOS:001118084001073 |
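
The abstract above names two key ingredients: parameterized rationale discovery, which scores user-item edges to decide which to keep visible, and a generative (masked-autoencoder-style) SSL objective that reconstructs the masked interactions. The following minimal PyTorch sketch illustrates that loop under stated assumptions; it is not the authors' implementation (see https://github.com/HKUDS/GFormer for the actual code), and the class and parameter names here (`RationaleMaskedAE`, `keep_ratio`) are illustrative only.

```python
# Minimal sketch of rationale-aware masked autoencoding on a user-item
# interaction graph, in the spirit of the abstract above. NOT the authors'
# implementation; all names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class RationaleMaskedAE(nn.Module):  # hypothetical name
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def rationale_scores(self, users, items):
        # Parameterized rationale discovery: score how informative each
        # observed user-item edge looks under the current embeddings.
        return (self.user_emb(users) * self.item_emb(items)).sum(-1)

    def forward(self, users, items, keep_ratio=0.5):
        scores = self.rationale_scores(users, items)
        k = max(1, int(keep_ratio * len(users)))
        # Hard top-k selection is non-differentiable w.r.t. which edges are
        # chosen; a full implementation would use a differentiable relaxation.
        keep = torch.topk(scores, k).indices           # rationale edges stay visible
        masked = torch.ones(len(users), dtype=torch.bool)
        masked[keep] = False                           # remaining edges are masked out
        # Generative SSL objective: predict the masked (held-out) interactions.
        # In the paper's architecture the visible rationale subgraph is first
        # encoded by a graph transformer; here we score directly from embeddings.
        logits = (self.user_emb(users[masked]) * self.item_emb(items[masked])).sum(-1)
        targets = torch.ones_like(logits)              # observed edges are positives
        return nn.functional.binary_cross_entropy_with_logits(logits, targets)

# Toy usage: six observed interactions among 4 users and 4 items.
users = torch.tensor([0, 0, 1, 2, 2, 3])
items = torch.tensor([1, 3, 0, 2, 3, 1])
model = RationaleMaskedAE(n_users=4, n_items=4)
loss = model(users, items)
loss.backward()
print(f"masked-reconstruction loss: {loss.item():.4f}")
```

The design point the sketch makes is that the mask is not random: edges judged most informative by the learned scorer are kept as the rationale, and the SSL signal comes from reconstructing the rest, which is how the abstract's "selective augmentation" differs from uniform edge dropout.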
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Li, Chaoliu | - |
| dc.contributor.author | Xia, Lianghao | - |
| dc.contributor.author | Ren, Xubin | - |
| dc.contributor.author | Ye, Yaowen | - |
| dc.contributor.author | Xu, Yong | - |
| dc.contributor.author | Huang, Chao | - |
| dc.date.accessioned | 2025-05-19T05:46:49Z | - |
| dc.date.available | 2025-05-19T05:46:49Z | - |
| dc.date.issued | 2023 | - |
| dc.identifier.citation | SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2023, p. 1680-1689 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/355945 | - |
| dc.description.abstract | This paper presents a novel approach to representation learning in recommender systems by integrating generative self-supervised learning with a graph transformer architecture. We highlight the importance of high-quality data augmentation with relevant self-supervised pretext tasks for improving performance. Towards this end, we propose a new approach that automates the self-supervision augmentation process through rationale-aware generative SSL that distills informative user-item interaction patterns. The proposed recommender with Graph TransFormer (GFormer) offers parameterized collaborative rationale discovery for selective augmentation while preserving global-aware user-item relationships. In GFormer, we allow the rationale-aware SSL to inspire graph collaborative filtering with task-adaptive invariant rationalization in the graph transformer. Experimental results reveal that GFormer consistently improves performance over baselines on different datasets. Several in-depth experiments further investigate the invariant rationale-aware augmentation from various aspects. The source code for this work is publicly available at: https://github.com/HKUDS/GFormer. | - |
| dc.language | eng | - |
| dc.relation.ispartof | SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval | - |
| dc.subject | Graph Transformer | - |
| dc.subject | Masked Autoencoder | - |
| dc.subject | Recommendation | - |
| dc.title | Graph Transformer for Recommendation | - |
| dc.type | Conference_Paper | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1145/3539618.3591723 | - |
| dc.identifier.scopus | eid_2-s2.0-85167676757 | - |
| dc.identifier.spage | 1680 | - |
| dc.identifier.epage | 1689 | - |
| dc.identifier.isi | WOS:001118084001073 | - |
