Conference Paper: Reinforcement Learning in V2I Communication Assisted Autonomous Driving

Title: Reinforcement Learning in V2I Communication Assisted Autonomous Driving
Authors: Liu, Xiao; Liu, Yuanwei; Chen, Yue; Wang, Luhan; Lu, Zhaoming
Issue Date: 2020
Citation: IEEE International Conference on Communications, 2020, v. 2020-June, article no. 9148831
Abstract: A novel framework is proposed for enhancing the driving safety and fuel economy of autonomous vehicles (AVs) with the aid of vehicle-to-infrastructure (V2I) communication networks. To solve this pertinent problem, a double deep Q-network (DDQN) algorithm is proposed for making collision-free decisions. Thus, the trajectory and velocity of the AV are determined by receiving real-time traffic information from the base stations (BSs). Compared to the conventional deep Q-network (DQN) algorithm, the proposed DDQN algorithm is capable of overcoming the large overestimation of action values by decomposing the max-Q-value operation into action selection and action evaluation. Numerical results demonstrate that the proposed trajectory design algorithms are capable of enhancing the driving safety and fuel economy of AVs, and that the proposed DDQN-based algorithm outperforms the DQN-based algorithm. Additionally, the proposed fuel-economy (FE) based driving policy derived from the DRL algorithm achieves in excess of 24% fuel savings over the benchmarks.
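The abstract's key technical point is how Double DQN decouples action selection from action evaluation when forming the learning target. The following is a minimal sketch of that target computation (not the paper's implementation; function names and the toy Q-values are illustrative), contrasting it with the standard DQN target:

```python
import numpy as np

def dqn_target(q_next_online, q_next_target, reward, gamma):
    # Standard DQN target: max over the target network's Q-values.
    # The same network both selects and evaluates the next action,
    # which tends to bias the target upward (overestimation).
    return reward + gamma * np.max(q_next_target)

def ddqn_target(q_next_online, q_next_target, reward, gamma):
    # Double DQN target: the online network selects the action,
    # the target network evaluates it. Decoupling selection from
    # evaluation reduces the overestimation bias.
    a_star = int(np.argmax(q_next_online))
    return reward + gamma * q_next_target[a_star]

# Toy example: the networks disagree on the best next action.
q_online = np.array([1.0, 2.0])   # online net prefers action 1
q_target = np.array([3.0, 0.5])   # target net's value estimates
print(dqn_target(q_online, q_target, reward=0.0, gamma=1.0))   # 3.0
print(ddqn_target(q_online, q_target, reward=0.0, gamma=1.0))  # 0.5
```

When the two networks disagree, DQN's target takes the target network's optimistic maximum (3.0), while DDQN evaluates only the action the online network actually prefers (0.5), illustrating why DDQN avoids the large overestimation of action values.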
Persistent Identifier: http://hdl.handle.net/10722/349453
ISSN: 1550-3607
ISI Accession Number ID: WOS:000606970301070

 

DC Field | Value | Language
dc.contributor.author | Liu, Xiao | -
dc.contributor.author | Liu, Yuanwei | -
dc.contributor.author | Chen, Yue | -
dc.contributor.author | Wang, Luhan | -
dc.contributor.author | Lu, Zhaoming | -
dc.date.accessioned | 2024-10-17T06:58:38Z | -
dc.date.available | 2024-10-17T06:58:38Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | IEEE International Conference on Communications, 2020, v. 2020-June, article no. 9148831 | -
dc.identifier.issn | 1550-3607 | -
dc.identifier.uri | http://hdl.handle.net/10722/349453 | -
dc.description.abstract | A novel framework is proposed for enhancing the driving safety and fuel economy of autonomous vehicles (AVs) with the aid of vehicle-to-infrastructure (V2I) communication networks. To solve this pertinent problem, a double deep Q-network (DDQN) algorithm is proposed for making collision-free decisions. Thus, the trajectory and velocity of the AV are determined by receiving real-time traffic information from the base stations (BSs). Compared to the conventional deep Q-network algorithm, the proposed DDQN algorithm is capable of overcoming the large overestimation of action values by decomposing the max-Q-value operation into action selection and action evaluation. Numerical results are provided for demonstrating that the proposed trajectory design algorithms are capable of enhancing the driving safety and fuel economy of AVs. We demonstrate that the proposed DDQN based algorithm outperforms the DQN based algorithm. Additionally, it is also demonstrated that the proposed fuel-economy (FE) based driving policy derived from the DRL algorithm is capable of achieving in excess of 24% of fuel savings over the benchmarks. | -
dc.language | eng | -
dc.relation.ispartof | IEEE International Conference on Communications | -
dc.title | Reinforcement Learning in V2I Communication Assisted Autonomous Driving | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/ICC40277.2020.9148831 | -
dc.identifier.scopus | eid_2-s2.0-85089417069 | -
dc.identifier.volume | 2020-June | -
dc.identifier.spage | article no. 9148831 | -
dc.identifier.epage | article no. 9148831 | -
dc.identifier.isi | WOS:000606970301070 | -
