Book Chapter: A Reinforcement Learning Framework for Maximizing the Net Present Value of Stochastic Multi-work Packages Project Scheduling Problem

Title: A Reinforcement Learning Framework for Maximizing the Net Present Value of Stochastic Multi-work Packages Project Scheduling Problem
Authors: Zhang, Yaning; Li, Xiao; Teng, Yue; Shen, Qiping; Bai, Sijun
Issue Date: 28-Nov-2024
Publisher: Springer
Abstract:

Project scheduling to maximize net present value (NPV) poses a significant challenge due to the inherent complexities associated with large-scale projects comprising multiple work packages and uncertain task durations. Existing scheduling methods encounter difficulties in effectively maximizing NPV when confronted with multi-work package projects characterized by stochastic task duration distributions. In light of this problem, this paper proposes a three-level reinforcement learning (TRL) framework aimed at addressing these challenges. To determine resource allocation for each work package within the project, the TRL framework leverages human empirical decision-making at the resource assignment level. At the work package level, a Priority Experience Replay Dueling Double Deep Q-Network (PER-DDDQN) is trained. This PER-DDDQN incorporates a graph embedding method, enabling it to maximize the expected NPV for each work package. The graph embedding method facilitates the determination of the work package's scheduling state, while the PER-DDDQN governs the scheduling of task start times within the work package. Furthermore, at the project level, work packages are scheduled using the same principles employed at the work package level to maximize the expected NPV of the entire project. Numerical experiments conducted on adapted case projects provide evidence that the TRL framework surpasses existing heuristics in achieving higher NPV for most work packages. Moreover, the TRL framework yields a minimum improvement of 26.68% in the maximum expected NPV of the entire project compared to the heuristic method employed in this study. This research contributes significantly to the enhancement of cash flow management in large-scale projects characterized by multiple work packages. Additionally, it opens up possibilities for the integration of reinforcement learning technology within the field of construction project management.
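The objective the abstract describes, maximizing the expected NPV of a schedule when task durations are stochastic, can be illustrated with a minimal Monte Carlo sketch. This is not the chapter's TRL/PER-DDDQN method; it only shows the quantity being optimized. All function and parameter names here are hypothetical, and a simple "cash flow received at task completion" model is assumed.

```python
import random

def expected_npv(tasks, start_times, rate=0.01, n_samples=2000, seed=0):
    """Monte Carlo estimate of a schedule's expected NPV.

    tasks: dict mapping task name -> (duration_sampler, cash_flow), where
           duration_sampler(rng) draws one stochastic duration and the
           cash flow is received when the task finishes (an assumption).
    start_times: dict mapping task name -> scheduled start time.
    rate: per-period discount rate.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        npv = 0.0
        for name, (sample_duration, cash) in tasks.items():
            finish = start_times[name] + sample_duration(rng)
            npv += cash / (1.0 + rate) ** finish  # discount to time 0
        total += npv
    return total / n_samples

# Example: one task with a triangular duration, one with a fixed duration.
tasks = {
    "excavation": (lambda r: r.triangular(2, 6, 4), 120.0),
    "inspection": (lambda r: 1.0, -15.0),  # negative cash flow = cost
}
schedule = {"excavation": 0.0, "inspection": 5.0}
print(expected_npv(tasks, schedule, rate=0.05))
```

A scheduler (heuristic or learned) would search over `start_times`, subject to precedence and resource constraints that this sketch omits, to maximize this expectation.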


Persistent Identifier: http://hdl.handle.net/10722/354843
ISBN: 9789819719488
ISSN: 2731-040X


DC Field: Value
dc.contributor.author: Zhang, Yaning
dc.contributor.author: Li, Xiao
dc.contributor.author: Teng, Yue
dc.contributor.author: Shen, Qiping
dc.contributor.author: Bai, Sijun
dc.date.accessioned: 2025-03-13T00:35:16Z
dc.date.available: 2025-03-13T00:35:16Z
dc.date.issued: 2024-11-28
dc.identifier.isbn: 9789819719488
dc.identifier.issn: 2731-040X
dc.identifier.uri: http://hdl.handle.net/10722/354843
dc.description.abstract: (full abstract reproduced above)
dc.language: eng
dc.publisher: Springer
dc.relation.ispartof: CRIOCM 2021: Proceedings of the 26th International Symposium on Advancement of Construction Management and Real Estate
dc.title: A Reinforcement Learning Framework for Maximizing the Net Present Value of Stochastic Multi-work Packages Project Scheduling Problem
dc.type: Book_Chapter
dc.identifier.doi: 10.1007/978-981-97-1949-5_51
dc.identifier.spage: 733
dc.identifier.epage: 756
dc.identifier.eissn: 2731-0418
dc.identifier.eisbn: 9789819719495
dc.identifier.issnl: 2731-040X
