Conference Paper: Automated Spatio-Temporal Graph Contrastive Learning

Title: Automated Spatio-Temporal Graph Contrastive Learning
Authors: Zhang, Qianru; Huang, Chao; Xia, Lianghao; Wang, Zheng; Li, Zhonghang; Yiu, Siuming
Keywords: Contrastive learning; Graph neural networks; Self-supervised learning; Spatio-temporal data mining; Urban computing
Issue Date: 2023
Citation: ACM Web Conference 2023 - Proceedings of the World Wide Web Conference, WWW 2023, 2023, p. 295-305
Abstract: Among various region embedding methods, graph-based region relation learning models stand out, owing to their strong structure representation ability for encoding spatial correlations with graph neural networks. Despite their effectiveness, several key challenges have not been well addressed in existing methods: i) Data noise and missing values are ubiquitous in many spatio-temporal scenarios due to a variety of factors. ii) Input spatio-temporal data (e.g., mobility traces) usually exhibits distribution heterogeneity across space and time. In such cases, current methods are vulnerable to the quality of the generated region graphs, which may lead to suboptimal performance. In this paper, we tackle the above challenges by exploring the Automated Spatio-Temporal graph contrastive learning paradigm (AutoST) over the heterogeneous region graph generated from multi-view data sources. Our AutoST framework is built upon a heterogeneous graph neural architecture to capture the multi-view region dependencies with respect to POI semantics, mobility flow patterns and geographical positions. To improve the robustness of our GNN encoder against data noise and distribution issues, we design an automated spatio-temporal augmentation scheme with a parameterized contrastive view generator. AutoST can adapt to the spatio-temporal heterogeneous graph with multi-view semantics well preserved. Extensive experiments for three downstream spatio-temporal mining tasks on several real-world datasets demonstrate the significant performance gain achieved by our AutoST over a variety of baselines. The code is publicly available at https://github.com/HKUDS/AutoST. (An illustrative sketch of the contrastive view generation idea follows the record fields below.)
Persistent Identifier: http://hdl.handle.net/10722/355937
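
The abstract describes a learnable ("parameterized") contrastive view generator paired with a GNN encoder over a region graph. The following is a rough, hedged sketch of that general idea only, not the authors' AutoST implementation (which is available at https://github.com/HKUDS/AutoST). It assumes PyTorch, a toy homogeneous region graph, a Gumbel-Sigmoid edge-dropping generator, and a standard InfoNCE loss; every class name, shape, and hyperparameter below is hypothetical.

# Hedged illustration only: a minimal, self-contained sketch of a learnable
# augmentation (edge-dropping view generator) plus an InfoNCE contrastive
# objective over region embeddings. NOT the authors' AutoST code; all names
# and shapes are assumptions made for this example.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableEdgeDropper(nn.Module):
    """Scores each edge from its endpoint features and samples a soft keep
    weight with Gumbel-Sigmoid, so the augmentation itself is trainable."""

    def __init__(self, feat_dim: int, temperature: float = 0.5):
        super().__init__()
        self.scorer = nn.Linear(2 * feat_dim, 1)
        self.temperature = temperature

    def forward(self, x, edge_index):
        # edge_index: (2, E) tensor of source/target node ids
        src, dst = edge_index
        logits = self.scorer(torch.cat([x[src], x[dst]], dim=-1)).squeeze(-1)
        gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-9) + 1e-9)
        keep = torch.sigmoid((logits + gumbel) / self.temperature)
        return keep  # per-edge keep weights in (0, 1) defining one view


class SimpleGraphEncoder(nn.Module):
    """One round of weighted mean aggregation plus a linear map -- a crude
    stand-in for a multi-view heterogeneous GNN encoder."""

    def __init__(self, feat_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(feat_dim, out_dim)

    def forward(self, x, edge_index, edge_weight):
        src, dst = edge_index
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, x[src] * edge_weight.unsqueeze(-1))
        deg = torch.zeros(x.size(0), device=x.device)
        deg.index_add_(0, dst, edge_weight)
        agg = agg / deg.clamp(min=1e-6).unsqueeze(-1)
        return self.lin(agg + x)


def info_nce(z1, z2, tau: float = 0.2):
    """InfoNCE: the same region across the two views is the positive pair,
    all other regions act as negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Toy region graph: 6 regions with random features and 20 random edges.
    x = torch.randn(6, 16)
    edge_index = torch.randint(0, 6, (2, 20))
    dropper, encoder = LearnableEdgeDropper(16), SimpleGraphEncoder(16, 32)
    w1, w2 = dropper(x, edge_index), dropper(x, edge_index)  # two sampled views
    loss = info_nce(encoder(x, edge_index, w1), encoder(x, edge_index, w2))
    loss.backward()  # gradients reach both the encoder and the view generator
    print(f"contrastive loss: {loss.item():.4f}")

Because the keep weights come from a differentiable Gumbel-Sigmoid relaxation, the contrastive loss can train the view generator end to end alongside the encoder, which is the general mechanism the abstract attributes to its automated augmentation scheme.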


DC Field | Value | Language
dc.contributor.author | Zhang, Qianru | -
dc.contributor.author | Huang, Chao | -
dc.contributor.author | Xia, Lianghao | -
dc.contributor.author | Wang, Zheng | -
dc.contributor.author | Li, Zhonghang | -
dc.contributor.author | Yiu, Siuming | -
dc.date.accessioned | 2025-05-19T05:46:46Z | -
dc.date.available | 2025-05-19T05:46:46Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | ACM Web Conference 2023 - Proceedings of the World Wide Web Conference, WWW 2023, 2023, p. 295-305 | -
dc.identifier.uri | http://hdl.handle.net/10722/355937 | -
dc.description.abstract | Among various region embedding methods, graph-based region relation learning models stand out, owing to their strong structure representation ability for encoding spatial correlations with graph neural networks. Despite their effectiveness, several key challenges have not been well addressed in existing methods: i) Data noise and missing values are ubiquitous in many spatio-temporal scenarios due to a variety of factors. ii) Input spatio-temporal data (e.g., mobility traces) usually exhibits distribution heterogeneity across space and time. In such cases, current methods are vulnerable to the quality of the generated region graphs, which may lead to suboptimal performance. In this paper, we tackle the above challenges by exploring the Automated Spatio-Temporal graph contrastive learning paradigm (AutoST) over the heterogeneous region graph generated from multi-view data sources. Our AutoST framework is built upon a heterogeneous graph neural architecture to capture the multi-view region dependencies with respect to POI semantics, mobility flow patterns and geographical positions. To improve the robustness of our GNN encoder against data noise and distribution issues, we design an automated spatio-temporal augmentation scheme with a parameterized contrastive view generator. AutoST can adapt to the spatio-temporal heterogeneous graph with multi-view semantics well preserved. Extensive experiments for three downstream spatio-temporal mining tasks on several real-world datasets demonstrate the significant performance gain achieved by our AutoST over a variety of baselines. The code is publicly available at https://github.com/HKUDS/AutoST. | -
dc.language | eng | -
dc.relation.ispartof | ACM Web Conference 2023 - Proceedings of the World Wide Web Conference, WWW 2023 | -
dc.subject | Contrastive learning | -
dc.subject | Graph neural networks | -
dc.subject | Self-supervised learning | -
dc.subject | Spatio-temporal data mining | -
dc.subject | Urban computing | -
dc.title | Automated Spatio-Temporal Graph Contrastive Learning | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1145/3543507.3583304 | -
dc.identifier.scopus | eid_2-s2.0-85159357522 | -
dc.identifier.spage | 295 | -
dc.identifier.epage | 305 | -
