File Download: There are no files associated with this item.
Links for fulltext (may require subscription):
- Publisher Website: 10.1109/DSC.2019.00018
- Scopus: eid_2-s2.0-85077120789
Citations:
- Scopus: 0

Appears in Collections:
Conference Paper: Rebalancing the Car-Sharing System: A Reinforcement Learning Method
Title | Rebalancing the Car-Sharing System: A Reinforcement Learning Method |
---|---|
Authors | Anli, XJ; Ren, CW; Gu, ZQ; Wang, Y; Gao, YJ |
Keywords | Automobiles; Learning (artificial intelligence); Urban areas; Bicycles; Markov processes |
Issue Date | 2019 |
Publisher | IEEE. The Journal's web site is located at https://ieeexplore.ieee.org/xpl/conhome/1815424/all-proceedings |
Citation | The 4th IEEE International Conference on Data Science in Cyberspace (IEEE DSC 2019), Hangzhou, China, 23-25 June 2019, p. 62-69 |
Abstract | With the boom of the sharing economy, more and more car-sharing companies have sprung up, providing additional travel options and convenience. Because urban dwellers share similar travel patterns, the car-sharing system suffers from a spatial imbalance of shared cars, especially during rush hours. Redressing this imbalance faces many challenges, such as insufficient data and an enormous state space. In this study, we propose a new reward method called the Double P (Picking & Parking) Bonus (DPB). We model the research problem as a Markov Decision Process (MDP) and introduce Deep Deterministic Policy Gradient (DDPG), a state-of-the-art reinforcement learning framework, to find a solution. The results show that the rewarding mechanism embodied in the DPB method can indeed guide users' behavior through price leverage, increase user stickiness, cultivate user habits, and thus boost the service provider's long-term profit. |
Persistent Identifier | http://hdl.handle.net/10722/286405 |
ISBN | 978-1-7281-4529-7 |
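The Double P (Picking & Parking) Bonus idea described in the abstract can be sketched in a few lines: the operator pays users a picking bonus for taking a car from an oversupplied station and a parking bonus for returning it to an undersupplied one, nudging the fleet back toward balance. The station names, rates, and linear bonus rule below are illustrative assumptions, not the paper's actual pricing scheme or its DDPG-learned policy.

```python
def dpb_bonus(supply, target, pick_rate=1.0, park_rate=1.0):
    """Per-station bonuses proportional to the imbalance |supply - target|.

    Picking bonuses apply where supply exceeds the target; parking
    bonuses apply where supply falls short of it. The linear rule is an
    illustrative assumption.
    """
    pick = {s: pick_rate * (supply[s] - target[s])
            for s in supply if supply[s] > target[s]}
    park = {s: park_rate * (target[s] - supply[s])
            for s in supply if supply[s] < target[s]}
    return pick, park

def apply_trip(supply, origin, dest):
    """A user drives one shared car from origin to dest."""
    supply[origin] -= 1
    supply[dest] += 1

supply = {"A": 8, "B": 2, "C": 5}   # cars currently at each station
target = {"A": 5, "B": 5, "C": 5}   # desired balanced distribution

pick, park = dpb_bonus(supply, target)
# Station A is oversupplied by 3 -> picking bonus; B is short by 3 -> parking bonus.
apply_trip(supply, "A", "B")        # a bonus-guided trip reduces both imbalances
```

In the paper's MDP framing, the agent would set such bonus prices as its action and receive the provider's profit as reward; here the bonuses are computed by a fixed rule purely to illustrate the incentive mechanism.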
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Anli, XJ | - |
dc.contributor.author | Ren, CW | - |
dc.contributor.author | Gu, ZQ | - |
dc.contributor.author | Wang, Y | - |
dc.contributor.author | Gao, YJ | - |
dc.date.accessioned | 2020-08-31T07:03:25Z | - |
dc.date.available | 2020-08-31T07:03:25Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | The 4th IEEE International Conference on Data Science in Cyberspace (IEEE DSC 2019), Hangzhou, China, 23-25 June 2019, p. 62-69 | - |
dc.identifier.isbn | 978-1-7281-4529-7 | - |
dc.identifier.uri | http://hdl.handle.net/10722/286405 | - |
dc.description.abstract | With the boom of the sharing economy, more and more car-sharing companies have sprung up, providing additional travel options and convenience. Because urban dwellers share similar travel patterns, the car-sharing system suffers from a spatial imbalance of shared cars, especially during rush hours. Redressing this imbalance faces many challenges, such as insufficient data and an enormous state space. In this study, we propose a new reward method called the Double P (Picking & Parking) Bonus (DPB). We model the research problem as a Markov Decision Process (MDP) and introduce Deep Deterministic Policy Gradient (DDPG), a state-of-the-art reinforcement learning framework, to find a solution. The results show that the rewarding mechanism embodied in the DPB method can indeed guide users' behavior through price leverage, increase user stickiness, cultivate user habits, and thus boost the service provider's long-term profit. | - |
dc.language | eng | - |
dc.publisher | IEEE. The Journal's web site is located at https://ieeexplore.ieee.org/xpl/conhome/1815424/all-proceedings | - |
dc.relation.ispartof | IEEE International Conference on Data Science in Cyberspace (DSC) | - |
dc.rights | IEEE International Conference on Data Science in Cyberspace (DSC). Copyright © IEEE. | - |
dc.rights | ©2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.subject | Automobiles | - |
dc.subject | Learning (artificial intelligence) | - |
dc.subject | Urban areas | - |
dc.subject | Bicycles | - |
dc.subject | Markov processes | - |
dc.title | Rebalancing the Car-Sharing System: A Reinforcement Learning Method | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Wang, Y: amywang@hku.hk | - |
dc.identifier.doi | 10.1109/DSC.2019.00018 | - |
dc.identifier.scopus | eid_2-s2.0-85077120789 | - |
dc.identifier.hkuros | 313495 | - |
dc.identifier.spage | 62 | - |
dc.identifier.epage | 69 | - |
dc.publisher.place | United States | - |