Conference Paper: Rebalancing the Car-Sharing System: A Reinforcement Learning Method

Title: Rebalancing the Car-Sharing System: A Reinforcement Learning Method
Authors: Anli, XJ; Ren, CW; Gu, ZQ; Wang, Y; Gao, YJ
Keywords: Automobiles; Learning (artificial intelligence); Urban areas; Bicycles; Markov processes
Issue Date: 2019
Publisher: IEEE. The conference proceedings are available at https://ieeexplore.ieee.org/xpl/conhome/1815424/all-proceedings
Citation: The 4th IEEE International Conference on Data Science in Cyberspace (IEEE DSC 2019), Hangzhou, China, 23-25 June 2019, p. 62-69
Abstract: With the boom of the sharing economy, more and more car-sharing companies are springing up, providing more travel options and greater convenience. Because urban dwellers share similar travel patterns, the car-sharing system suffers from a spatial imbalance of shared cars, especially during rush hours. Redressing this imbalance faces many challenges, such as insufficient data and an enormous state space. In this study, we propose a new reward method called Double P (Picking & Parking) Bonus (DPB). We model the rebalancing problem as a Markov Decision Process (MDP) and apply Deep Deterministic Policy Gradient (DDPG), a state-of-the-art reinforcement learning framework, to solve it. The results show that the rewarding mechanism embodied in DPB can indeed guide users' behavior through price leverage, increase user stickiness, cultivate user habits, and thus boost the service provider's long-term profit.
Persistent Identifier: http://hdl.handle.net/10722/286405
ISBN: 978-1-7281-4529-7
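
The abstract above frames rebalancing as an MDP solved with Deep Deterministic Policy Gradient, where per-zone picking and parking bonuses steer users toward under-served zones. The code below is a minimal illustrative sketch of that setup, not the authors' implementation: the zone count, the toy demand simulator, the network sizes, and the reward shaping (trip revenue minus bonus cost minus a spatial-imbalance penalty) are assumptions introduced here purely for illustration.

# Illustrative DDPG sketch for per-zone rebalancing bonuses (assumptions, not the paper's code).
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

N_ZONES = 10          # assumed number of service zones
STATE_DIM = N_ZONES   # state: normalized car count per zone
ACTION_DIM = N_ZONES  # action: bonus level per zone in [0, 1]

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid())  # bonuses bounded in [0, 1]
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def toy_step(cars, bonus, rng):
    """Toy environment: bonuses nudge users to pick up in (and re-park across) zones."""
    demand = rng.integers(0, 3, size=N_ZONES).astype(float)
    moved = np.minimum(cars, demand * (1.0 + bonus))   # bonus boosts pick-ups
    cars = cars - moved + rng.permutation(moved)        # trips end in other zones
    revenue = moved.sum() - (bonus * moved).sum()        # paying the bonus costs the operator
    reward = revenue - np.std(cars)                      # penalize spatial imbalance
    return cars, reward

actor, critic = Actor(), Critic()
actor_t, critic_t = Actor(), Critic()
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
buffer, gamma, tau = deque(maxlen=10000), 0.99, 0.005
rng = np.random.default_rng(0)

cars = np.full(N_ZONES, 5.0)
for step in range(500):
    s = torch.tensor(cars / 10.0, dtype=torch.float32)
    with torch.no_grad():
        a = actor(s).numpy() + rng.normal(0, 0.1, ACTION_DIM)  # exploration noise
    a = np.clip(a, 0.0, 1.0)
    cars, r = toy_step(cars, a, rng)
    s2 = torch.tensor(cars / 10.0, dtype=torch.float32)
    buffer.append((s, torch.tensor(a, dtype=torch.float32), float(r), s2))
    if len(buffer) < 64:
        continue
    batch = random.sample(buffer, 64)
    S = torch.stack([b[0] for b in batch]); A = torch.stack([b[1] for b in batch])
    R = torch.tensor([b[2] for b in batch]).unsqueeze(1)
    S2 = torch.stack([b[3] for b in batch])
    with torch.no_grad():
        y = R + gamma * critic_t(S2, actor_t(S2))        # TD target from target networks
    loss_c = nn.functional.mse_loss(critic(S, A), y)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    loss_a = -critic(S, actor(S)).mean()                 # deterministic policy gradient
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    for t, p in zip(actor_t.parameters(), actor.parameters()):
        t.data.mul_(1 - tau).add_(tau * p.data)           # soft target update
    for t, p in zip(critic_t.parameters(), critic.parameters()):
        t.data.mul_(1 - tau).add_(tau * p.data)

print("final per-zone car counts:", np.round(cars, 1))

The actual state features, bonus structure, and profit model used in the paper (the DPB mechanism) are not reproduced here; the sketch only shows the MDP-plus-DDPG training loop the abstract describes.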

 

DC Field: Value
dc.contributor.author: Anli, XJ
dc.contributor.author: Ren, CW
dc.contributor.author: Gu, ZQ
dc.contributor.author: Wang, Y
dc.contributor.author: Gao, YJ
dc.date.accessioned: 2020-08-31T07:03:25Z
dc.date.available: 2020-08-31T07:03:25Z
dc.date.issued: 2019
dc.identifier.citation: The 4th IEEE International Conference on Data Science in Cyberspace (IEEE DSC 2019), Hangzhou, China, 23-25 June 2019, p. 62-69
dc.identifier.isbn: 978-1-7281-4529-7
dc.identifier.uri: http://hdl.handle.net/10722/286405
dc.description.abstract: With the boom of the sharing economy, more and more car-sharing companies are springing up, providing more travel options and greater convenience. Because urban dwellers share similar travel patterns, the car-sharing system suffers from a spatial imbalance of shared cars, especially during rush hours. Redressing this imbalance faces many challenges, such as insufficient data and an enormous state space. In this study, we propose a new reward method called Double P (Picking & Parking) Bonus (DPB). We model the rebalancing problem as a Markov Decision Process (MDP) and apply Deep Deterministic Policy Gradient (DDPG), a state-of-the-art reinforcement learning framework, to solve it. The results show that the rewarding mechanism embodied in DPB can indeed guide users' behavior through price leverage, increase user stickiness, cultivate user habits, and thus boost the service provider's long-term profit.
dc.language: eng
dc.publisher: IEEE. The conference proceedings are available at https://ieeexplore.ieee.org/xpl/conhome/1815424/all-proceedings
dc.relation.ispartof: IEEE International Conference on Data Science in Cyberspace (DSC)
dc.rights: IEEE International Conference on Data Science in Cyberspace (DSC). Copyright © IEEE.
dc.rights: ©2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Automobiles
dc.subject: Learning (artificial intelligence)
dc.subject: Urban areas
dc.subject: Bicycles
dc.subject: Markov processes
dc.title: Rebalancing the Car-Sharing System: A Reinforcement Learning Method
dc.type: Conference_Paper
dc.identifier.email: Wang, Y: amywang@hku.hk
dc.identifier.doi: 10.1109/DSC.2019.00018
dc.identifier.scopus: eid_2-s2.0-85077120789
dc.identifier.hkuros: 313495
dc.identifier.spage: 62
dc.identifier.epage: 69
dc.publisher.place: United States
