Article: Adaptive rescheduling of rail transit services with short-turnings under disruptions via a multi-agent deep reinforcement learning approach

Title: Adaptive rescheduling of rail transit services with short-turnings under disruptions via a multi-agent deep reinforcement learning approach
Authors: Ying, Chengshuo; Chow, Andy H.F.; Yan, Yimo; Kuo, Yong Hong; Wang, Shouyang
Keywords: Markov decision process; Multi-agent deep reinforcement learning; Proximal policy optimization; Short-turning; Train rescheduling
Issue Date: 1-Oct-2024
Publisher: Elsevier
Citation: Transportation Research Part B: Methodological, 2024, v. 188
Abstract: This paper presents a novel multi-agent deep reinforcement learning (MADRL) approach for real-time rescheduling of rail transit services with short-turnings during a complete track blockage on a double-track service corridor. The optimization problem is modeled as a Markov decision process with multiple control agents rescheduling train services on each directional line for system recovery. To ensure computational efficacy, we employ a multi-agent policy optimization solution framework in which each control agent employs a decentralized policy function for deriving local decisions and a centralized value function approximation (VFA) estimating global system state values. Both the policy functions and VFAs are represented by multi-layer artificial neural networks (ANNs). A multi-agent proximal policy optimization gradient algorithm is developed for training the policies and VFAs through iterative simulated system transitions. The proposed framework is implemented and tested with real-world scenarios with data collected from London Underground, UK. Computational results demonstrate the superiority of the developed framework in computational effectiveness compared with previous distributed control algorithms and conventional metaheuristic methods. We also provide managerial implications for train rescheduling during disruptions with different durations, locations, and passenger behaviors. Additional experiments show the scalability of the proposed MADRL framework in managing disruptions with uncertain durations with a generalized model. This study contributes to real-time rail transit management with innovative control and optimization techniques.
Persistent Identifier: http://hdl.handle.net/10722/352729
ISSN: 0191-2615
2023 Impact Factor: 5.8
2023 SCImago Journal Rankings: 2.660
ISI Accession Number ID: WOS:001308245400001
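
The abstract describes a decentralized-actor, centralized-critic architecture trained with a multi-agent proximal policy optimization (PPO) gradient algorithm: each directional line has its own policy network producing local rescheduling decisions, while a shared value function approximation scores the global system state. The sketch below illustrates that structure only; it is written in PyTorch with hypothetical network sizes, observation/state dimensions, actions, and returns, and is not the authors' implementation.

import torch
import torch.nn as nn

class Actor(nn.Module):
    """Decentralized policy: maps a local (per-line) observation to action logits."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions))

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

class CentralCritic(nn.Module):
    """Centralized value function approximation over the global system state."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, state):
        return self.net(state).squeeze(-1)

def ppo_loss(actor, critic, obs, state, actions, old_logp, returns, clip=0.2):
    """Clipped PPO surrogate for one agent plus a value loss on the shared critic."""
    dist = actor(obs)
    ratio = torch.exp(dist.log_prob(actions) - old_logp)
    adv = returns - critic(state).detach()          # advantage from the central critic
    policy_loss = -torch.min(ratio * adv,
                             torch.clamp(ratio, 1 - clip, 1 + clip) * adv).mean()
    value_loss = (critic(state) - returns).pow(2).mean()
    return policy_loss + 0.5 * value_loss

if __name__ == "__main__":
    # Illustrative update with random data: two agents, e.g. one per directional line.
    obs_dim, state_dim, n_actions, batch = 8, 16, 4, 32
    actors = [Actor(obs_dim, n_actions) for _ in range(2)]
    critic = CentralCritic(state_dim)
    params = [p for a in actors for p in a.parameters()] + list(critic.parameters())
    opt = torch.optim.Adam(params, lr=3e-4)

    obs = torch.randn(2, batch, obs_dim)            # local observations, one slice per agent
    state = torch.randn(batch, state_dim)           # shared global system state
    actions = torch.randint(0, n_actions, (2, batch))
    returns = torch.randn(batch)                    # placeholder discounted returns
    with torch.no_grad():
        old_logp = torch.stack([actors[i](obs[i]).log_prob(actions[i]) for i in range(2)])

    loss = sum(ppo_loss(actors[i], critic, obs[i], state, actions[i],
                        old_logp[i], returns) for i in range(2))
    opt.zero_grad()
    loss.backward()
    opt.step()

In the paper, such an update loop is driven by iterative simulated transitions of the disrupted rail corridor rather than by random tensors.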

 

DC Field / Value
dc.contributor.author: Ying, Chengshuo
dc.contributor.author: Chow, Andy H.F.
dc.contributor.author: Yan, Yimo
dc.contributor.author: Kuo, Yong Hong
dc.contributor.author: Wang, Shouyang
dc.date.accessioned: 2024-12-27T00:35:10Z
dc.date.available: 2024-12-27T00:35:10Z
dc.date.issued: 2024-10-01
dc.identifier.citation: Transportation Research Part B: Methodological, 2024, v. 188
dc.identifier.issn: 0191-2615
dc.identifier.uri: http://hdl.handle.net/10722/352729
dc.description.abstract: This paper presents a novel multi-agent deep reinforcement learning (MADRL) approach for real-time rescheduling of rail transit services with short-turnings during a complete track blockage on a double-track service corridor. The optimization problem is modeled as a Markov decision process with multiple control agents rescheduling train services on each directional line for system recovery. To ensure computational efficacy, we employ a multi-agent policy optimization solution framework in which each control agent employs a decentralized policy function for deriving local decisions and a centralized value function approximation (VFA) estimating global system state values. Both the policy functions and VFAs are represented by multi-layer artificial neural networks (ANNs). A multi-agent proximal policy optimization gradient algorithm is developed for training the policies and VFAs through iterative simulated system transitions. The proposed framework is implemented and tested with real-world scenarios with data collected from London Underground, UK. Computational results demonstrate the superiority of the developed framework in computational effectiveness compared with previous distributed control algorithms and conventional metaheuristic methods. We also provide managerial implications for train rescheduling during disruptions with different durations, locations, and passenger behaviors. Additional experiments show the scalability of the proposed MADRL framework in managing disruptions with uncertain durations with a generalized model. This study contributes to real-time rail transit management with innovative control and optimization techniques.
dc.language: eng
dc.publisher: Elsevier
dc.relation.ispartof: Transportation Research Part B: Methodological
dc.subject: Markov decision process
dc.subject: Multi-agent deep reinforcement learning
dc.subject: Proximal policy optimization
dc.subject: Short-turning
dc.subject: Train rescheduling
dc.title: Adaptive rescheduling of rail transit services with short-turnings under disruptions via a multi-agent deep reinforcement learning approach
dc.type: Article
dc.identifier.doi: 10.1016/j.trb.2024.103067
dc.identifier.scopus: eid_2-s2.0-85203005968
dc.identifier.volume: 188
dc.identifier.eissn: 1879-2367
dc.identifier.isi: WOS:001308245400001
dc.identifier.issnl: 0191-2615
