Article: A multiple-goal reinforcement learning method for complex vehicle overtaking maneuvers

Title: A multiple-goal reinforcement learning method for complex vehicle overtaking maneuvers
Authors: Ngai, DCK; Yung, NHC
Keywords: Artificial intelligence; learning control systems
Issue Date: 2011
Publisher: IEEE. The journal's web site is located at http://www.ewh.ieee.org/tc/its/trans.html
Citation: IEEE Transactions on Intelligent Transportation Systems, 2011, v. 12, n. 2, p. 509-522
Abstract: In this paper, we present a learning method to solve the vehicle overtaking problem, which demands a multitude of abilities from the agent to tackle multiple criteria. To handle this problem, we propose to adopt a multiple-goal reinforcement learning (MGRL) framework as the basis of our solution. By considering seven different goals, either Q-learning (QL) or double-action QL is employed to determine action decisions, based on whether the other vehicles interact with the agent for that particular goal. Furthermore, a fusion function is proposed according to the importance of each goal before arriving at an overall but consistent action decision. This offers a powerful approach for dealing with demanding situations such as overtaking, particularly when a number of other vehicles are within the proximity of the agent and are traveling at different and varying speeds. A large number of overtaking cases have been simulated to demonstrate its effectiveness. From the results, it can be concluded that the proposed method is capable of the following: 1) making correct action decisions for overtaking; 2) avoiding collisions with other vehicles; 3) reaching the target in reasonable time; 4) keeping an almost steady speed; and 5) maintaining an almost steady heading angle. In addition, it should be noted that the proposed method performs lane keeping well when not overtaking and lane changing effectively when overtaking is in progress. © 2011 IEEE.
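The abstract describes the core mechanism in enough detail to sketch: one Q-learner per goal, each updated independently against its own reward, plus a fusion function that weights per-goal action preferences into a single consistent decision. The Python sketch below is a minimal illustration of that idea, not the paper's implementation; the action set, goal names, weights, and state encoding are placeholders, and the double-action QL variant used for goals involving interacting vehicles is omitted.

```python
import random
from collections import defaultdict

# Illustrative discrete action set; the paper's action space is not
# reproduced here.
ACTIONS = ["accelerate", "decelerate", "steer_left", "steer_right", "keep"]

class GoalLearner:
    """One Q-learning agent responsible for a single goal."""

    def __init__(self, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # discount factor

    def update(self, s, a, r, s_next):
        """Standard one-step Q-learning backup for this goal's reward r."""
        best_next = max(self.q[(s_next, a2)] for a2 in ACTIONS)
        td_target = r + self.gamma * best_next
        self.q[(s, a)] += self.alpha * (td_target - self.q[(s, a)])

    def preferences(self, s):
        """This goal's Q-value for every action in state s."""
        return {a: self.q[(s, a)] for a in ACTIONS}

def fuse(learners, weights, s):
    """Fusion step: combine per-goal preferences by goal importance.

    A weighted sum of Q-values is one simple fusion rule; the paper's
    actual fusion function may differ.
    """
    scores = {a: 0.0 for a in ACTIONS}
    for goal, learner in learners.items():
        for a, q in learner.preferences(s).items():
            scores[a] += weights[goal] * q
    return max(scores, key=scores.get)

def select_action(learners, weights, s, epsilon=0.1):
    """Epsilon-greedy exploration around the fused greedy action."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return fuse(learners, weights, s)

# Hypothetical goal set loosely following the capabilities listed in
# the abstract; the paper's seven goals and weights are not named there.
goals = ["collision_avoidance", "reach_target", "steady_speed",
         "steady_heading", "lane_keeping", "lane_changing", "overtake"]
learners = {g: GoalLearner() for g in goals}
weights = {g: 1.0 / len(goals) for g in goals}
```

In a driving loop, each step would call select_action, apply the chosen maneuver in the simulator, and then call update on every goal's learner with that goal's own reward signal.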
Persistent Identifier: http://hdl.handle.net/10722/137287
ISSN: 1524-9050
2021 Impact Factor: 9.551
2020 SCImago Journal Rankings: 1.591
ISI Accession Number ID: WOS:000291315100020
Funding Agency | Grant Number
Research Grants Council of the Hong Kong Special Administrative Region, China | HKU7194/06E
University of Hong Kong | -
Funding Information:

Manuscript received March 1, 2008; revised November 24, 2008, September 17, 2009, June 11, 2010, October 12, 2010, and November 2, 2010; accepted December 12, 2010. Date of publication February 7, 2011; date of current version June 6, 2011. This work was supported in part by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China, under Project HKU7194/06E and in part by the Postgraduate Studentship of the University of Hong Kong. The Associate Editor for this paper was M. Brackstone.

DC Field | Value | Language
dc.contributor.author | Ngai, DCK | en_HK
dc.contributor.author | Yung, NHC | en_HK
dc.date.accessioned | 2011-08-26T14:22:38Z | -
dc.date.available | 2011-08-26T14:22:38Z | -
dc.date.issued | 2011 | en_HK
dc.identifier.citation | IEEE Transactions on Intelligent Transportation Systems, 2011, v. 12, n. 2, p. 509-522 | en_HK
dc.identifier.issn | 1524-9050 | en_HK
dc.identifier.uri | http://hdl.handle.net/10722/137287 | -
dc.description.abstract | In this paper, we present a learning method to solve the vehicle overtaking problem, which demands a multitude of abilities from the agent to tackle multiple criteria. To handle this problem, we propose to adopt a multiple-goal reinforcement learning (MGRL) framework as the basis of our solution. By considering seven different goals, either Q-learning (QL) or double-action QL is employed to determine action decisions, based on whether the other vehicles interact with the agent for that particular goal. Furthermore, a fusion function is proposed according to the importance of each goal before arriving at an overall but consistent action decision. This offers a powerful approach for dealing with demanding situations such as overtaking, particularly when a number of other vehicles are within the proximity of the agent and are traveling at different and varying speeds. A large number of overtaking cases have been simulated to demonstrate its effectiveness. From the results, it can be concluded that the proposed method is capable of the following: 1) making correct action decisions for overtaking; 2) avoiding collisions with other vehicles; 3) reaching the target in reasonable time; 4) keeping an almost steady speed; and 5) maintaining an almost steady heading angle. In addition, it should be noted that the proposed method performs lane keeping well when not overtaking and lane changing effectively when overtaking is in progress. © 2011 IEEE. | en_HK
dc.language | eng | en_US
dc.publisher | IEEE. The journal's web site is located at http://www.ewh.ieee.org/tc/its/trans.html | en_HK
dc.relation.ispartof | IEEE Transactions on Intelligent Transportation Systems | en_HK
dc.subject | Artificial intelligence | en_HK
dc.subject | Learning control systems | en_HK
dc.title | A multiple-goal reinforcement learning method for complex vehicle overtaking maneuvers | en_HK
dc.type | Article | en_HK
dc.identifier.openurl | http://library.hku.hk:4550/resserv?sid=HKU:IR&issn=1524-9050&volume=12&issue=2&spage=509&epage=522&date=2011&atitle=A+multiple-goal+reinforcement+learning+method+for+complex+vehicle+overtaking+maneuvers | en_US
dc.identifier.email | Yung, NHC: nyung@eee.hku.hk | en_HK
dc.identifier.authority | Yung, NHC=rp00226 | en_HK
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TITS.2011.2106158 | en_HK
dc.identifier.scopus | eid_2-s2.0-79958101813 | en_HK
dc.identifier.hkuros | 190963 | en_US
dc.relation.references | http://www.scopus.com/mlt/select.url?eid=2-s2.0-79958101813&selection=ref&src=s&origin=recordpage | en_HK
dc.identifier.volume | 12 | en_HK
dc.identifier.issue | 2 | en_HK
dc.identifier.spage | 509 | en_HK
dc.identifier.epage | 522 | en_HK
dc.identifier.isi | WOS:000291315100020 | -
dc.publisher.place | United States | en_HK
dc.identifier.scopusauthorid | Ngai, DCK=9332358900 | en_HK
dc.identifier.scopusauthorid | Yung, NHC=7003473369 | en_HK
dc.identifier.issnl | 1524-9050 | -
