Conference Paper: UrbanGPT: Spatio-Temporal Large Language Models

Title: UrbanGPT: Spatio-Temporal Large Language Models
Authors: Li, Zhonghang; Xia, Lianghao; Tang, Jiabin; Xu, Yong; Shi, Lei; Xia, Long; Yin, Dawei; Huang, Chao
Keywords: generative ai; large language models; smart cities; spatial-temporal data mining; urban computing
Issue Date: 2024
Citation: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2024, p. 5351-5362
Abstract: Spatio-temporal prediction aims to forecast and gain insights into the ever-changing dynamics of urban environments across both time and space. Its purpose is to anticipate future patterns, trends, and events in diverse facets of urban life, including transportation, population movement, and crime rates. Although numerous efforts have been dedicated to developing neural network techniques for accurate predictions on spatio-temporal data, it is important to note that many of these methods heavily depend on having sufficient labeled data to generate precise spatio-temporal representations. Unfortunately, the issue of data scarcity is pervasive in practical urban sensing scenarios. In certain cases, it becomes challenging to collect any labeled data from downstream scenarios, intensifying the problem further. Consequently, it becomes necessary to build a spatio-temporal model that can exhibit strong generalization capabilities across diverse spatio-temporal learning scenarios. Taking inspiration from the remarkable achievements of large language models (LLMs), our objective is to create a spatio-temporal LLM that can exhibit exceptional generalization capabilities across a wide range of downstream urban tasks. To achieve this objective, we present UrbanGPT, which seamlessly integrates a spatio-temporal dependency encoder with the instruction-tuning paradigm. This integration enables LLMs to comprehend the complex inter-dependencies across time and space, facilitating more comprehensive and accurate predictions under data scarcity. To validate the effectiveness of our approach, we conduct extensive experiments on various public datasets, covering different spatio-temporal prediction tasks. The results demonstrate that our UrbanGPT, with its carefully designed architecture, consistently outperforms state-of-the-art baselines. These findings highlight the potential of building large language models for spatio-temporal learning, particularly in zero-shot scenarios where labeled data is scarce. The code and data are available at: https://github.com/HKUDS/UrbanGPT.
Persistent Identifier: http://hdl.handle.net/10722/355978
ISSN: 2154-817X
ISI Accession Number ID: WOS:001324524205047

 

DC Field: Value
dc.contributor.author: Li, Zhonghang
dc.contributor.author: Xia, Lianghao
dc.contributor.author: Tang, Jiabin
dc.contributor.author: Xu, Yong
dc.contributor.author: Shi, Lei
dc.contributor.author: Xia, Long
dc.contributor.author: Yin, Dawei
dc.contributor.author: Huang, Chao
dc.date.accessioned: 2025-05-19T05:47:02Z
dc.date.available: 2025-05-19T05:47:02Z
dc.date.issued: 2024
dc.identifier.citation: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2024, p. 5351-5362
dc.identifier.issn: 2154-817X
dc.identifier.uri: http://hdl.handle.net/10722/355978
dc.description.abstract: Spatio-temporal prediction aims to forecast and gain insights into the ever-changing dynamics of urban environments across both time and space. Its purpose is to anticipate future patterns, trends, and events in diverse facets of urban life, including transportation, population movement, and crime rates. Although numerous efforts have been dedicated to developing neural network techniques for accurate predictions on spatio-temporal data, it is important to note that many of these methods heavily depend on having sufficient labeled data to generate precise spatio-temporal representations. Unfortunately, the issue of data scarcity is pervasive in practical urban sensing scenarios. In certain cases, it becomes challenging to collect any labeled data from downstream scenarios, intensifying the problem further. Consequently, it becomes necessary to build a spatio-temporal model that can exhibit strong generalization capabilities across diverse spatio-temporal learning scenarios. Taking inspiration from the remarkable achievements of large language models (LLMs), our objective is to create a spatio-temporal LLM that can exhibit exceptional generalization capabilities across a wide range of downstream urban tasks. To achieve this objective, we present UrbanGPT, which seamlessly integrates a spatio-temporal dependency encoder with the instruction-tuning paradigm. This integration enables LLMs to comprehend the complex inter-dependencies across time and space, facilitating more comprehensive and accurate predictions under data scarcity. To validate the effectiveness of our approach, we conduct extensive experiments on various public datasets, covering different spatio-temporal prediction tasks. The results demonstrate that our UrbanGPT, with its carefully designed architecture, consistently outperforms state-of-the-art baselines. These findings highlight the potential of building large language models for spatio-temporal learning, particularly in zero-shot scenarios where labeled data is scarce. The code and data are available at: https://github.com/HKUDS/UrbanGPT.
dc.language: eng
dc.relation.ispartof: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
dc.subject: generative ai
dc.subject: large language models
dc.subject: smart cities
dc.subject: spatial-temporal data mining
dc.subject: urban computing
dc.title: UrbanGPT: Spatio-Temporal Large Language Models
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3637528.3671578
dc.identifier.scopus: eid_2-s2.0-85203709974
dc.identifier.spage: 5351
dc.identifier.epage: 5362
dc.identifier.isi: WOS:001324524205047
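
Note: The abstract above describes integrating a spatio-temporal dependency encoder with instruction tuning so that an LLM can attend to dependencies across time and space. The snippet below is a minimal, hypothetical PyTorch sketch of that general idea (a small temporal encoder whose per-region embeddings are projected into an LLM's token-embedding space and prepended to the embedded instruction as a prefix). It is not the authors' implementation, and the class names, dimensions, and prefix-conditioning scheme are illustrative assumptions; the actual code is at https://github.com/HKUDS/UrbanGPT.

# Hypothetical sketch, not the UrbanGPT implementation.
import torch
import torch.nn as nn

class SpatioTemporalEncoder(nn.Module):
    """Encodes a (regions, timesteps, features) signal into one embedding per region."""
    def __init__(self, in_features: int, hidden_dim: int):
        super().__init__()
        # Temporal convolution captures short-range temporal patterns.
        self.temporal_conv = nn.Conv1d(in_features, hidden_dim, kernel_size=3, padding=1)
        # GRU summarizes each region's full history into a single state.
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_regions, timesteps, in_features)
        h = self.temporal_conv(x.transpose(1, 2)).transpose(1, 2)  # (R, T, H)
        _, last = self.gru(h)                                      # (1, R, H)
        return last.squeeze(0)                                     # (R, H)

class SpatioTemporalPrompt(nn.Module):
    """Projects encoder outputs into the LLM embedding space and prepends them
    to the embedded instruction tokens (prefix-style conditioning)."""
    def __init__(self, st_dim: int, llm_embed_dim: int):
        super().__init__()
        self.projector = nn.Linear(st_dim, llm_embed_dim)

    def forward(self, st_embeddings: torch.Tensor, text_embeddings: torch.Tensor) -> torch.Tensor:
        st_tokens = self.projector(st_embeddings).unsqueeze(0)     # (1, R, D)
        return torch.cat([st_tokens, text_embeddings], dim=1)      # (1, R + L, D)

if __name__ == "__main__":
    regions, timesteps, feats, hidden, llm_dim, prompt_len = 4, 12, 2, 32, 64, 16
    encoder = SpatioTemporalEncoder(feats, hidden)
    prompt = SpatioTemporalPrompt(hidden, llm_dim)
    signal = torch.randn(regions, timesteps, feats)    # e.g. per-region traffic counts (toy data)
    text_emb = torch.randn(1, prompt_len, llm_dim)     # placeholder for embedded instruction tokens
    fused = prompt(encoder(signal), text_emb)
    print(fused.shape)  # torch.Size([1, 20, 64]); fused sequence would be fed to the LLM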