Article: Multi-Timescale Hierarchical Reinforcement Learning for Unified Behavior and Control of Autonomous Driving
| Title | Multi-Timescale Hierarchical Reinforcement Learning for Unified Behavior and Control of Autonomous Driving |
|---|---|
| Authors | Jin, Guizhe; Li, Zhuoren; Leng, Bo; Yu, Ran; Xiong, Lu; Sun, Chen |
| Keywords | autonomous driving; motion and path planning; multiple timescale; reinforcement learning |
| Issue Date | 17-Oct-2025 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Citation | IEEE Robotics and Automation Letters, 2025, v. 10, n. 12, p. 12772-12779 |
| Abstract | Reinforcement Learning (RL) is increasingly used in autonomous driving (AD) and shows clear advantages. However, most RL-based AD methods overlook policy structure design. An RL policy that outputs only short-timescale vehicle control commands produces fluctuating driving behavior due to fluctuations in network outputs, while one that outputs only long-timescale driving goals cannot achieve unified optimality of driving behavior and control. Therefore, we propose a multi-timescale hierarchical reinforcement learning approach. Our approach adopts a hierarchical policy structure in which high- and low-level RL policies are trained in a unified manner to produce long-timescale motion guidance and short-timescale control commands, respectively. Motion guidance is explicitly represented by hybrid actions to capture multimodal driving behaviors on structured roads and to support incremental low-level extended-state updates. Additionally, a hierarchical safety mechanism is designed to ensure multi-timescale safety. Evaluation in simulator-based and HighD-dataset-based highway multi-lane scenarios demonstrates that our approach significantly improves AD performance, effectively increasing driving efficiency, action consistency, and safety. |
| Persistent Identifier | http://hdl.handle.net/10722/366759 |
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Jin, Guizhe | - |
| dc.contributor.author | Li, Zhuoren | - |
| dc.contributor.author | Leng, Bo | - |
| dc.contributor.author | Yu, Ran | - |
| dc.contributor.author | Xiong, Lu | - |
| dc.contributor.author | Sun, Chen | - |
| dc.date.accessioned | 2025-11-25T04:21:41Z | - |
| dc.date.available | 2025-11-25T04:21:41Z | - |
| dc.date.issued | 2025-10-17 | - |
| dc.identifier.citation | IEEE Robotics and Automation Letters, 2025, v. 10, n. 12, p. 12772-12779 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/366759 | - |
| dc.description.abstract | Reinforcement Learning (RL) is increasingly used in autonomous driving (AD) and shows clear advantages. However, most RL-based AD methods overlook policy structure design. An RL policy that outputs only short-timescale vehicle control commands produces fluctuating driving behavior due to fluctuations in network outputs, while one that outputs only long-timescale driving goals cannot achieve unified optimality of driving behavior and control. Therefore, we propose a multi-timescale hierarchical reinforcement learning approach. Our approach adopts a hierarchical policy structure in which high- and low-level RL policies are trained in a unified manner to produce long-timescale motion guidance and short-timescale control commands, respectively. Motion guidance is explicitly represented by hybrid actions to capture multimodal driving behaviors on structured roads and to support incremental low-level extended-state updates. Additionally, a hierarchical safety mechanism is designed to ensure multi-timescale safety. Evaluation in simulator-based and HighD-dataset-based highway multi-lane scenarios demonstrates that our approach significantly improves AD performance, effectively increasing driving efficiency, action consistency, and safety. | - |
| dc.language | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.relation.ispartof | IEEE Robotics and Automation Letters | - |
| dc.subject | autonomous driving | - |
| dc.subject | motion and path planning | - |
| dc.subject | multiple timescale | - |
| dc.subject | Reinforcement learning | - |
| dc.title | Multi-Timescale Hierarchical Reinforcement Learning for Unified Behavior and Control of Autonomous Driving | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/LRA.2025.3623016 | - |
| dc.identifier.scopus | eid_2-s2.0-105019696149 | - |
| dc.identifier.volume | 10 | - |
| dc.identifier.issue | 12 | - |
| dc.identifier.spage | 12772 | - |
| dc.identifier.epage | 12779 | - |
| dc.identifier.eissn | 2377-3766 | - |
| dc.identifier.issnl | 2377-3766 | - |
