File Download
- There are no files associated with this item.

Links for fulltext (may require subscription)
- Publisher Website (DOI): 10.1016/j.tbs.2025.101171
- Scopus: eid_2-s2.0-105021667849
Citations:
- Scopus: 0
Article: Analyzing sequential activity and travel decisions with interpretable deep inverse reinforcement learning
| Title | Analyzing sequential activity and travel decisions with interpretable deep inverse reinforcement learning |
|---|---|
| Authors | Liang, Yuebing; Wang, Shenhao; Yu, Jiangbo; Zhao, Zhan; Zhao, Jinhua; Pentland, Sandy |
| Keywords | Activity-based travel demand model; Deep learning; Explainable artificial intelligence; Inverse reinforcement learning |
| Issue Date | 1-Apr-2026 |
| Publisher | Elsevier |
| Citation | Travel Behaviour and Society, 2026, v. 43 |
| Abstract | Travel demand modeling has shifted from aggregated trip-based models to behavior-oriented activity-based models because daily trips are essentially driven by human activities. To analyze sequential activity-travel decisions, deep inverse reinforcement learning (DIRL) has proven effective in learning the decision mechanisms by approximating a reward function to represent preferences and a policy function to replicate observed behavior using deep neural networks (DNNs). However, most DIRL applications emphasize prediction accuracy and treat the learned functions as black boxes, offering limited behavioral insight. To address this gap, we propose an interpretable DIRL framework that adapts an adversarial IRL approach for modeling sequential activity-travel behavior. Interpretability is achieved in two ways: (1) we distill the learned policy into a surrogate interpretable Multinomial Logit (MNL) model, enabling the extraction of behavioral drivers from model parameters; and (2) we derive short-term rewards and long-term returns from the learned reward function, quantifying immediate preferences and overall decision outcomes across activity sequences. Applied to real-world travel survey data from Singapore, our framework uncovers meaningful behavioral patterns. The MNL-based surrogate model reveals that travel decisions are shaped by activity schedules, travel time, and socio-demographic attributes, particularly employment type. Reward and return analysis distinguishes returners with regular patterns from explorers with irregular ones. Regular patterns yield higher long-term returns, while females and elderly individuals exhibit lower returns, indicating disparities in individual activity patterns. These findings bridge the gap between theory-driven behavioral models and data-driven machine learning, offering actionable insights for transport policy and urban planning. |
| Persistent Identifier | http://hdl.handle.net/10722/368275 |
| ISSN | 2214-367X (2023 Impact Factor: 5.1; 2023 SCImago Journal Rankings: 1.570) |
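The abstract describes distilling the learned DIRL policy into a surrogate Multinomial Logit (MNL) model whose parameters expose behavioral drivers. The snippet below is a minimal, hypothetical sketch of that distillation idea, not the authors' implementation: a random linear-softmax "policy" stands in for the paper's deep network, scikit-learn's multinomial logistic regression serves as the MNL surrogate, and all feature and choice dimensions are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Hypothetical stand-in for a learned DIRL policy -------------------
# In the paper, the policy is a deep network mapping a state (current
# activity, time of day, socio-demographics, ...) to a distribution over
# next activity/travel choices. A random linear-softmax policy is used
# here purely so the sketch runs end to end.
n_features, n_choices = 6, 4              # e.g. 4 candidate next activities

W = rng.normal(size=(n_features, n_choices))

def policy_probs(states):
    """Choice probabilities of the (stand-in) learned policy."""
    logits = states @ W
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

# --- Step 1: sample decisions from the learned policy ------------------
states = rng.normal(size=(5000, n_features))       # interpretable state features
actions = np.array([rng.choice(n_choices, p=p) for p in policy_probs(states)])

# --- Step 2: distill the policy into a surrogate MNL model -------------
mnl = LogisticRegression(max_iter=2000)            # multinomial for >2 classes
mnl.fit(states, actions)

# --- Step 3: read behavioral drivers off the surrogate coefficients ----
fidelity = (mnl.predict(states) == policy_probs(states).argmax(axis=1)).mean()
print("agreement with the policy's most likely choice:", round(float(fidelity), 3))
print("surrogate MNL coefficients (choices x features):")
print(np.round(mnl.coef_, 2))
```

The coefficient matrix plays the role described in the abstract: each feature's sign and magnitude indicate how it pushes the policy toward or away from a given choice, giving an interpretable summary of the black-box policy.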
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Liang, Yuebing | - |
| dc.contributor.author | Wang, Shenhao | - |
| dc.contributor.author | Yu, Jiangbo | - |
| dc.contributor.author | Zhao, Zhan | - |
| dc.contributor.author | Zhao, Jinhua | - |
| dc.contributor.author | Pentland, Sandy | - |
| dc.date.accessioned | 2025-12-24T00:37:13Z | - |
| dc.date.available | 2025-12-24T00:37:13Z | - |
| dc.date.issued | 2026-04-01 | - |
| dc.identifier.citation | Travel Behaviour and Society, 2026, v. 43 | - |
| dc.identifier.issn | 2214-367X | - |
| dc.identifier.uri | http://hdl.handle.net/10722/368275 | - |
| dc.description.abstract | Travel demand modeling has shifted from aggregated trip-based models to behavior-oriented activity-based models because daily trips are essentially driven by human activities. To analyze sequential activity-travel decisions, deep inverse reinforcement learning (DIRL) has proven effective in learning the decision mechanisms by approximating a reward function to represent preferences and a policy function to replicate observed behavior using deep neural networks (DNNs). However, most DIRL applications emphasize prediction accuracy and treat the learned functions as black boxes, offering limited behavioral insight. To address this gap, we propose an interpretable DIRL framework that adapts an adversarial IRL approach for modeling sequential activity-travel behavior. Interpretability is achieved in two ways: (1) we distill the learned policy into a surrogate interpretable Multinomial Logit (MNL) model, enabling the extraction of behavioral drivers from model parameters; and (2) we derive short-term rewards and long-term returns from the learned reward function, quantifying immediate preferences and overall decision outcomes across activity sequences. Applied to real-world travel survey data from Singapore, our framework uncovers meaningful behavioral patterns. The MNL-based surrogate model reveals that travel decisions are shaped by activity schedules, travel time, and socio-demographic attributes, particularly employment type. Reward and return analysis distinguishes returners with regular patterns from explorers with irregular ones. Regular patterns yield higher long-term returns, while females and elderly individuals exhibit lower returns, indicating disparities in individual activity patterns. These findings bridge the gap between theory-driven behavioral models and data-driven machine learning, offering actionable insights for transport policy and urban planning. | - |
| dc.language | eng | - |
| dc.publisher | Elsevier | - |
| dc.relation.ispartof | Travel Behaviour and Society | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | Activity-based travel demand model | - |
| dc.subject | Deep learning | - |
| dc.subject | Explainable artificial intelligence | - |
| dc.subject | Inverse reinforcement learning | - |
| dc.title | Analyzing sequential activity and travel decisions with interpretable deep inverse reinforcement learning | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1016/j.tbs.2025.101171 | - |
| dc.identifier.scopus | eid_2-s2.0-105021667849 | - |
| dc.identifier.volume | 43 | - |
| dc.identifier.eissn | 2214-3688 | - |
| dc.identifier.issnl | 2214-367X | - |
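The abstract's second interpretability mechanism derives short-term rewards and long-term returns from the learned reward function. As a minimal sketch of the standard discounted-return computation over one activity sequence, assuming a discount factor and made-up per-step rewards (the paper's actual reward function and discounting choices are not specified in this record):

```python
import numpy as np

def discounted_return(rewards, gamma=0.95):
    """Long-term return G_t = sum_k gamma**k * r_{t+k} at every step of one
    activity sequence; gamma is an assumed discount factor."""
    rewards = np.asarray(rewards, dtype=float)
    returns = np.zeros_like(rewards)
    running = 0.0
    for t in range(len(rewards) - 1, -1, -1):   # backward recursion
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: per-step rewards that a learned reward function might assign to
# one person's daily activity sequence (values invented for illustration).
print(discounted_return([0.2, -0.1, 0.5, 0.3]))
```

Comparing such returns across individuals is the kind of analysis the abstract uses to contrast returners with regular patterns against explorers with irregular ones.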
