Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1109/TITS.2011.2106158
- Scopus: eid_2-s2.0-79958101813
- Web of Science: WOS:000291315100020
Article: A multiple-goal reinforcement learning method for complex vehicle overtaking maneuvers
Title | A multiple-goal reinforcement learning method for complex vehicle overtaking maneuvers
---|---
Authors | Ngai, DCK; Yung, NHC
Keywords | Artificial intelligence; Learning control systems
Issue Date | 2011
Publisher | IEEE. The journal's web site is located at http://www.ewh.ieee.org/tc/its/trans.html
Citation | IEEE Transactions on Intelligent Transportation Systems, 2011, v. 12 n. 2, p. 509-522
Abstract | In this paper, we present a learning method to solve the vehicle overtaking problem, which demands a multitude of abilities from the agent to tackle multiple criteria. To handle this problem, we propose to adopt a multiple-goal reinforcement learning (MGRL) framework as the basis of our solution. By considering seven different goals, either Q-learning (QL) or double-action QL is employed to determine action decisions based on whether the other vehicles interact with the agent for that particular goal. Furthermore, a fusion function is proposed according to the importance of each goal before arriving at an overall yet consistent action decision. This offers a powerful approach for dealing with demanding situations such as overtaking, particularly when a number of other vehicles are within the proximity of the agent and are traveling at different and varying speeds. A large number of overtaking cases have been simulated to demonstrate its effectiveness. From the results, it can be concluded that the proposed method is capable of the following: 1) making correct action decisions for overtaking; 2) avoiding collisions with other vehicles; 3) reaching the target in reasonable time; 4) keeping an almost steady speed; and 5) maintaining an almost steady heading angle. In addition, it should also be noted that the proposed method performs lane keeping well when not overtaking and lane changing effectively when overtaking is in progress. © 2006 IEEE.
Persistent Identifier | http://hdl.handle.net/10722/137287
ISSN | 1524-9050 (2023 Impact Factor: 7.9; 2023 SCImago Journal Rankings: 2.580)
ISI Accession Number ID | WOS:000291315100020
Funding Information | Manuscript received March 1, 2008; revised November 24, 2008, September 17, 2009, June 11, 2010, October 12, 2010, and November 2, 2010; accepted December 12, 2010. Date of publication February 7, 2011; date of current version June 6, 2011. This work was supported in part by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China, under Project HKU7194/06E and in part by the Postgraduate Studentship of the University of Hong Kong. The Associate Editor for this paper was M. Brackstone.
References | http://www.scopus.com/mlt/select.url?eid=2-s2.0-79958101813&selection=ref&src=s&origin=recordpage
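The abstract above describes combining per-goal Q-values through an importance-based fusion function to reach a single action decision. A minimal sketch of that idea follows; the goal names, weights, and the weighted-sum greedy fusion rule here are illustrative assumptions, not the paper's exact seven-goal formulation:

```python
import random

random.seed(0)

# Illustrative subset of goals (the paper considers seven).
GOALS = ["collision_avoidance", "target_seeking", "lane_keeping"]
N_ACTIONS = 5  # e.g. steer left / steer right / accelerate / brake / keep

# Stand-in per-goal Q-values for the current state; in the paper these
# would come from QL or double-action QL, depending on the goal.
q_values = {g: [random.uniform(-1, 1) for _ in range(N_ACTIONS)] for g in GOALS}

# Hypothetical importance weights; the paper derives its fusion function
# from the relative importance of each goal, not these exact numbers.
weights = {"collision_avoidance": 0.6, "target_seeking": 0.25, "lane_keeping": 0.15}

def fuse_and_act(q_values, weights):
    """Weighted-sum fusion of per-goal Q-values, then greedy action selection."""
    fused = [
        sum(weights[g] * q_values[g][a] for g in q_values)
        for a in range(N_ACTIONS)
    ]
    best_action = max(range(N_ACTIONS), key=fused.__getitem__)
    return best_action, fused

action, fused = fuse_and_act(q_values, weights)
```

The weighted sum lets a high-priority goal such as collision avoidance veto actions that a lower-priority goal would otherwise prefer, which is the behavior the fusion step is meant to capture.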
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ngai, DCK | en_HK |
dc.contributor.author | Yung, NHC | en_HK |
dc.date.accessioned | 2011-08-26T14:22:38Z | - |
dc.date.available | 2011-08-26T14:22:38Z | - |
dc.date.issued | 2011 | en_HK |
dc.identifier.citation | IEEE Transactions on Intelligent Transportation Systems, 2011, v. 12 n. 2, p. 509-522 | en_HK
dc.identifier.issn | 1524-9050 | en_HK |
dc.identifier.uri | http://hdl.handle.net/10722/137287 | - |
dc.description.abstract | In this paper, we present a learning method to solve the vehicle overtaking problem, which demands a multitude of abilities from the agent to tackle multiple criteria. To handle this problem, we propose to adopt a multiple-goal reinforcement learning (MGRL) framework as the basis of our solution. By considering seven different goals, either Q-learning (QL) or double-action QL is employed to determine action decisions based on whether the other vehicles interact with the agent for that particular goal. Furthermore, a fusion function is proposed according to the importance of each goal before arriving at an overall yet consistent action decision. This offers a powerful approach for dealing with demanding situations such as overtaking, particularly when a number of other vehicles are within the proximity of the agent and are traveling at different and varying speeds. A large number of overtaking cases have been simulated to demonstrate its effectiveness. From the results, it can be concluded that the proposed method is capable of the following: 1) making correct action decisions for overtaking; 2) avoiding collisions with other vehicles; 3) reaching the target in reasonable time; 4) keeping an almost steady speed; and 5) maintaining an almost steady heading angle. In addition, it should also be noted that the proposed method performs lane keeping well when not overtaking and lane changing effectively when overtaking is in progress. © 2006 IEEE. | en_HK
dc.language | eng | en_US |
dc.publisher | IEEE. The journal's web site is located at http://www.ewh.ieee.org/tc/its/trans.html | en_HK
dc.relation.ispartof | IEEE Transactions on Intelligent Transportation Systems | en_HK |
dc.subject | Artificial intelligence | en_HK |
dc.subject | Learning control systems | en_HK
dc.title | A multiple-goal reinforcement learning method for complex vehicle overtaking maneuvers | en_HK |
dc.type | Article | en_HK |
dc.identifier.openurl | http://library.hku.hk:4550/resserv?sid=HKU:IR&issn=1524-9050&volume=12&issue=2&spage=509&epage=522&date=2011&atitle=A+multiple-goal+reinforcement+learning+method+for+complex+vehicle+overtaking+maneuvers | en_US |
dc.identifier.email | Yung, NHC:nyung@eee.hku.hk | en_HK |
dc.identifier.authority | Yung, NHC=rp00226 | en_HK |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TITS.2011.2106158 | en_HK |
dc.identifier.scopus | eid_2-s2.0-79958101813 | en_HK |
dc.identifier.hkuros | 190963 | en_US |
dc.relation.references | http://www.scopus.com/mlt/select.url?eid=2-s2.0-79958101813&selection=ref&src=s&origin=recordpage | en_HK |
dc.identifier.volume | 12 | en_HK |
dc.identifier.issue | 2 | en_HK |
dc.identifier.spage | 509 | en_HK |
dc.identifier.epage | 522 | en_HK |
dc.identifier.isi | WOS:000291315100020 | - |
dc.publisher.place | United States | en_HK |
dc.identifier.scopusauthorid | Ngai, DCK=9332358900 | en_HK |
dc.identifier.scopusauthorid | Yung, NHC=7003473369 | en_HK |
dc.identifier.issnl | 1524-9050 | - |