Article: Adaptive state space partitioning for reinforcement learning

Title: Adaptive state space partitioning for reinforcement learning
Authors: Lee, ISK; Lau, HYK
Keywords: Navigation; Nearest neighbor quantizer; Peg-in-hole; Reinforcement learning; State space partitioning
Issue Date: 2004
Publisher: Elsevier Ltd. The Journal's web site is located at http://www.elsevier.com/locate/engappai
Citation: Engineering Applications of Artificial Intelligence, 2004, v. 17 n. 6, p. 577-588
Abstract: The convergence property of reinforcement learning has been extensively investigated in the field of machine learning; however, its applications to real-world problems are still constrained by its computational complexity. A novel algorithm is presented that improves the applicability and efficacy of reinforcement learning via adaptive state space partitioning. The proposed temporal difference learning with adaptive vector quantization (TD-AVQ) is an online algorithm and assumes no a priori knowledge of the learning task or environment. It uses the information generated by the reinforcement learning algorithm itself, so no additional computation is required to decide how to partition a particular state space. A series of simulations demonstrates the practical value and performance of the proposed algorithms in solving robot motion planning problems. © 2004 Elsevier Ltd. All rights reserved.
Persistent Identifier: http://hdl.handle.net/10722/74293
ISSN: 0952-1976
2015 Impact Factor: 2.368
2015 SCImago Journal Rankings: 1.371
ISI Accession Number ID: WOS:000224909500002
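The abstract describes temporal difference learning over a state space that is partitioned adaptively by a nearest-neighbor quantizer. As a rough illustration only, not the paper's TD-AVQ algorithm, the sketch below grows a nearest-neighbor codebook on demand (a new prototype is added whenever no existing one lies within a distance threshold) and runs tabular TD(0) over the resulting cells. The threshold, learning rate, discount factor, and toy 1-D random-walk environment are all assumptions made for the example.

```python
import numpy as np

class NNQuantizer:
    """Maps continuous states to discrete cell indices via the nearest
    prototype; grows the codebook when no prototype is within `radius`."""
    def __init__(self, radius):
        self.radius = radius
        self.prototypes = []

    def index(self, state):
        s = np.atleast_1d(np.asarray(state, dtype=float))
        if self.prototypes:
            dists = [np.linalg.norm(s - p) for p in self.prototypes]
            i = int(np.argmin(dists))
            if dists[i] <= self.radius:
                return i
        self.prototypes.append(s)      # refine the partition adaptively
        return len(self.prototypes) - 1

def td0_value_learning(episodes=200, radius=0.15, alpha=0.2, gamma=0.95, seed=0):
    """TD(0) on a toy 1-D random walk on [0, 1]; reward 1 at the right edge.
    The value table is keyed by quantizer cell, so the state discretization
    and the learning run share the same online loop."""
    rng = np.random.default_rng(seed)
    quant = NNQuantizer(radius)
    values = {}                        # one value estimate per partition cell
    for _ in range(episodes):
        x = 0.5
        for _ in range(100):
            i = quant.index(x)
            x2 = float(np.clip(x + rng.uniform(-0.1, 0.1), 0.0, 1.0))
            done = x2 >= 1.0
            r = 1.0 if done else 0.0
            j = quant.index(x2)
            target = r + (0.0 if done else gamma * values.get(j, 0.0))
            values[i] = values.get(i, 0.0) + alpha * (target - values.get(i, 0.0))
            x = x2
            if done:
                break
    return quant, values
```

Because prototypes are only added when the agent actually visits a region no existing cell covers, the partition is driven by the learning process itself rather than fixed in advance, which is the general idea the abstract attributes to TD-AVQ.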

 

DC Field: Value [Language]
dc.contributor.author: Lee, ISK [en_HK]
dc.contributor.author: Lau, HYK [en_HK]
dc.date.accessioned: 2010-09-06T06:59:51Z
dc.date.available: 2010-09-06T06:59:51Z
dc.date.issued: 2004 [en_HK]
dc.identifier.citation: Engineering Applications of Artificial Intelligence, 2004, v. 17 n. 6, p. 577-588 [en_HK]
dc.identifier.issn: 0952-1976 [en_HK]
dc.identifier.uri: http://hdl.handle.net/10722/74293
dc.description.abstract: The convergence property of reinforcement learning has been extensively investigated in the field of machine learning; however, its applications to real-world problems are still constrained by its computational complexity. A novel algorithm is presented that improves the applicability and efficacy of reinforcement learning via adaptive state space partitioning. The proposed temporal difference learning with adaptive vector quantization (TD-AVQ) is an online algorithm and assumes no a priori knowledge of the learning task or environment. It uses the information generated by the reinforcement learning algorithm itself, so no additional computation is required to decide how to partition a particular state space. A series of simulations demonstrates the practical value and performance of the proposed algorithms in solving robot motion planning problems. © 2004 Elsevier Ltd. All rights reserved. [en_HK]
dc.language: eng [en_HK]
dc.publisher: Elsevier Ltd. The Journal's web site is located at http://www.elsevier.com/locate/engappai [en_HK]
dc.relation.ispartof: Engineering Applications of Artificial Intelligence [en_HK]
dc.subject: Navigation [en_HK]
dc.subject: Nearest neighbor quantizer [en_HK]
dc.subject: Peg-in-hole [en_HK]
dc.subject: Reinforcement learning [en_HK]
dc.subject: State space partitioning [en_HK]
dc.title: Adaptive state space partitioning for reinforcement learning [en_HK]
dc.type: Article [en_HK]
dc.identifier.openurl: http://library.hku.hk:4550/resserv?sid=HKU:IR&issn=0952-1976&volume=17&issue=6&spage=289&epage=312&date=2005&atitle=Adaptive+state+space+partitioning+for+reinforcement+learning [en_HK]
dc.identifier.email: Lau, HYK:hyklau@hkucc.hku.hk [en_HK]
dc.identifier.authority: Lau, HYK=rp00137 [en_HK]
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1016/j.engappai.2004.08.005 [en_HK]
dc.identifier.scopus: eid_2-s2.0-5444230072 [en_HK]
dc.identifier.hkuros: 103233 [en_HK]
dc.relation.references: http://www.scopus.com/mlt/select.url?eid=2-s2.0-5444230072&selection=ref&src=s&origin=recordpage [en_HK]
dc.identifier.volume: 17 [en_HK]
dc.identifier.issue: 6 [en_HK]
dc.identifier.spage: 577 [en_HK]
dc.identifier.epage: 588 [en_HK]
dc.identifier.isi: WOS:000224909500002
dc.publisher.place: United Kingdom [en_HK]
dc.identifier.scopusauthorid: Lee, ISK=26663339300 [en_HK]
dc.identifier.scopusauthorid: Lau, HYK=7201497761 [en_HK]
dc.identifier.citeulike: 1723595
