Article: Seeking in Ride-on-Demand Service: A Reinforcement Learning Model With Dynamic Price Prediction

Title: Seeking in Ride-on-Demand Service: A Reinforcement Learning Model With Dynamic Price Prediction
Authors: Guo, Suiming; Deng, Baoying; Chen, Chao; Ke, Jintao; Wang, Jingyuan; Long, Saiqin; Xu, Ke
Keywords: Driver revenue; dynamic price; reinforcement learning; Ride-on-Demand (RoD)
Issue Date: 30-May-2024
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Internet of Things Journal, 2024, v. 11, n. 18, p. 29890-29910
Abstract

Recent years have witnessed the increasing popularity of ride-on-demand (RoD) services such as Uber and Didi. Compared with traditional taxis, RoD services are more “data-driven” and adopt dynamic pricing to manipulate supply and demand in real time. The dynamic price can be viewed as an accurate and quantitative indicator of supply and demand, and can provide clues to drivers, passengers, and service providers, possibly reshaping the ways in which some problems are solved. In this paper, we focus on the seeking-route recommendation problem, which aims to increase driver revenue by recommending highly profitable seeking routes to drivers of vacant cars with the help of dynamic prices. We first justify our motivation by demonstrating the importance of route recommendation and explaining why it is necessary to consider dynamic prices, based on an analysis of real service data. We then design a dynamic price prediction model that generates the dynamic price at any given time and location from multi-source urban data. After that, a reinforcement learning model is adopted to perform seeking-route recommendation based on the predicted dynamic prices. We conduct extensive experiments under different spatio-temporal combinations and compare against multiple baselines. The results show that our dynamic price prediction model achieves an accuracy ranging from 83.82% to 90.67% under different settings. They also show that considering real-time predicted dynamic prices significantly increases driver revenue, for example by 12% and 47.5% during weekday evening rush hours, compared with merely using average prices or completely ignoring dynamic prices.
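The pipeline the abstract describes (predict the dynamic price, then use it as the reward signal of a reinforcement learning model that recommends seeking routes) can be sketched with a toy tabular Q-learning example. This is only an illustration under assumed simplifications, not the authors' model: the five-zone line of locations, the `predicted_price` stub, and all hyperparameters below are hypothetical stand-ins for the paper's learned price predictor and its actual state, action, and reward design.

```python
import random

N_ZONES = 5
ACTIONS = (-1, 0, +1)          # move left, stay, move right

def predicted_price(zone):
    """Hypothetical stand-in for a dynamic price prediction model:
    higher surge multipliers toward the last ('downtown') zone."""
    return 1.0 + 0.2 * zone

def step(zone, action):
    """Move a vacant driver along the line of zones; the reward is the
    predicted price at the zone reached."""
    nxt = min(max(zone + action, 0), N_ZONES - 1)
    return nxt, predicted_price(nxt)

def train(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(N_ZONES)]
    for _ in range(episodes):
        zone = rng.randrange(N_ZONES)
        for _ in range(10):    # one short seeking trip
            # epsilon-greedy action selection
            a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: q[zone][i]))
            nxt, r = step(zone, ACTIONS[a])
            # standard Q-learning update
            q[zone][a] += alpha * (r + gamma * max(q[nxt]) - q[zone][a])
            zone = nxt
    return q

q = train()
# The greedy policy should steer a vacant driver from zone 0 toward
# the high-surge end of the line.
best = max(range(len(ACTIONS)), key=lambda i: q[0][i])
print(ACTIONS[best])
```

The point of the sketch is the coupling the abstract emphasizes: the price predictor supplies the reward, so wherever predicted surge is higher, the learned seeking policy drifts there.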


Persistent Identifier: http://hdl.handle.net/10722/346123

 

DC Field: Value
dc.contributor.author: Guo, Suiming
dc.contributor.author: Deng, Baoying
dc.contributor.author: Chen, Chao
dc.contributor.author: Ke, Jintao
dc.contributor.author: Wang, Jingyuan
dc.contributor.author: Long, Saiqin
dc.contributor.author: Xu, Ke
dc.date.accessioned: 2024-09-10T00:30:37Z
dc.date.available: 2024-09-10T00:30:37Z
dc.date.issued: 2024-05-30
dc.identifier.citation: IEEE Internet of Things Journal, 2024, v. 11, n. 18, p. 29890-29910
dc.identifier.uri: http://hdl.handle.net/10722/346123
dc.description.abstract: Recent years have witnessed the increasing popularity of ride-on-demand (RoD) services such as Uber and Didi. Compared with traditional taxis, RoD services are more “data-driven” and adopt dynamic pricing to manipulate supply and demand in real time. The dynamic price can be viewed as an accurate and quantitative indicator of supply and demand, and can provide clues to drivers, passengers, and service providers, possibly reshaping the ways in which some problems are solved. In this paper, we focus on the seeking-route recommendation problem, which aims to increase driver revenue by recommending highly profitable seeking routes to drivers of vacant cars with the help of dynamic prices. We first justify our motivation by demonstrating the importance of route recommendation and explaining why it is necessary to consider dynamic prices, based on an analysis of real service data. We then design a dynamic price prediction model that generates the dynamic price at any given time and location from multi-source urban data. After that, a reinforcement learning model is adopted to perform seeking-route recommendation based on the predicted dynamic prices. We conduct extensive experiments under different spatio-temporal combinations and compare against multiple baselines. The results show that our dynamic price prediction model achieves an accuracy ranging from 83.82% to 90.67% under different settings. They also show that considering real-time predicted dynamic prices significantly increases driver revenue, for example by 12% and 47.5% during weekday evening rush hours, compared with merely using average prices or completely ignoring dynamic prices.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Internet of Things Journal
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Driver revenue
dc.subject: dynamic price
dc.subject: reinforcement learning
dc.subject: Ride-on-Demand (RoD)
dc.title: Seeking in Ride-on-Demand Service: A Reinforcement Learning Model With Dynamic Price Prediction
dc.type: Article
dc.identifier.doi: 10.1109/JIOT.2024.3407119
dc.identifier.scopus: eid_2-s2.0-85194850258
dc.identifier.volume: 11
dc.identifier.issue: 18
dc.identifier.spage: 29890
dc.identifier.epage: 29910
dc.identifier.eissn: 2327-4662
dc.identifier.issnl: 2327-4662
