
Article: A fuzzy controller with supervised learning assisted reinforcement learning algorithm for obstacle avoidance

Title: A fuzzy controller with supervised learning assisted reinforcement learning algorithm for obstacle avoidance
Authors: Ye, C; Yung, NHC; Wang, D
Keywords: Fuzzy system; Obstacle avoidance; Reinforcement learning; Supervised learning; Virtual environment (VE)
Issue Date: 2003
Publisher: IEEE
Citation: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2003, v. 33 n. 1, p. 17-27
Abstract: A fuzzy logic system offers an efficient approach to obstacle avoidance. However, it is difficult to maintain the correctness, consistency, and completeness of a fuzzy rule base constructed and tuned by a human expert. Reinforcement learning can learn the fuzzy rules automatically, but it incurs a heavy learning phase and may produce an insufficiently learned rule base due to the curse of dimensionality. In this paper, we propose a neural fuzzy system with mixed coarse and fine learning phases. In the first phase, supervised learning is used to determine the membership functions for the input and output variables simultaneously. After sufficient training, fine learning is applied, which employs a reinforcement learning algorithm to fine-tune the membership functions of the output variables. To ensure sufficient learning, a new learning method based on a modified Sutton and Barto model is proposed to strengthen exploration. Through this two-step tuning approach, the mobile robot is able to perform collision-free navigation. To address the difficulty of acquiring a large amount of highly consistent training data for supervised learning, we develop a virtual environment (VE) simulator that provides both desktop virtual environment (DVE) and immersive virtual environment (IVE) visualization. By having a skilled human operator drive a mobile robot in the virtual environment (DVE/IVE), training data are readily obtained and used to train the neural fuzzy system.
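The two-phase scheme in the abstract, coarse supervised learning from operator demonstrations followed by reinforcement-driven fine tuning of the output side, can be sketched in miniature. This is not the authors' implementation: the one-input zero-order Takagi-Sugeno controller, the demonstration target, and the reward function below are all invented for illustration, and simple reward hill climbing stands in for the paper's modified Sutton and Barto model.

```python
import numpy as np

def gaussian(x, c, s):
    # Membership grade of input x in a Gaussian fuzzy set (center c, width s).
    return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

class NeuroFuzzy:
    """Zero-order Takagi-Sugeno fuzzy system with one input and n_rules rules.
    The crisp output is the firing-strength-weighted mean of the rule outputs w."""
    def __init__(self, n_rules=7):
        self.c = np.linspace(0.0, 1.0, n_rules)  # input membership centers
        self.s = np.full(n_rules, 0.12)          # input membership widths
        self.w = np.zeros(n_rules)               # output centers (rule consequents)

    def forward(self, x):
        mu = gaussian(x, self.c, self.s)
        phi = mu / (mu.sum() + 1e-12)            # normalized firing strengths
        return float(phi @ self.w), phi

    def fit_supervised(self, X, Y, lr=0.5, epochs=300):
        # Phase 1 (coarse): gradient descent on squared error against
        # operator demonstrations, tuning the rule consequents.
        for _ in range(epochs):
            for x, y in zip(X, Y):
                y_hat, phi = self.forward(x)
                self.w += lr * (y - y_hat) * phi

    def fine_tune(self, reward_fn, steps=200, sigma=0.05, seed=0):
        # Phase 2 (fine): crude reward-driven hill climbing. Perturb the output
        # centers and keep the perturbation only if the scalar reward improves.
        rng = np.random.default_rng(seed)
        best = reward_fn(self)
        for _ in range(steps):
            trial = self.w + rng.normal(0.0, sigma, self.w.shape)
            old, self.w = self.w, trial
            if reward_fn(self) >= best:
                best = reward_fn(self)
            else:
                self.w = old                     # revert a worsening perturbation
        return best

# Invented demonstration target: steer harder as the obstacle gets closer
# (x = normalized obstacle distance, y = normalized steering command).
target = lambda x: 1.0 - x ** 2
X = np.linspace(0.0, 1.0, 40)
Y = target(X)

ctrl = NeuroFuzzy()
ctrl.fit_supervised(X, Y)                        # coarse phase

# Invented reward: negative mean squared deviation from the desired profile.
def reward(model):
    preds = np.array([model.forward(x)[0] for x in X])
    return -float(np.mean((preds - Y) ** 2))

r0 = reward(ctrl)
r1 = ctrl.fine_tune(reward)                      # fine phase; r1 >= r0 by construction
```

The paper instead tunes membership functions of a full multi-input controller and derives its reward from collision-free navigation in the VE simulator; the sketch only shows why splitting training into a supervised coarse phase and a reinforcement fine phase avoids starting the reinforcement learner from scratch.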
Persistent Identifier: http://hdl.handle.net/10722/42924
ISSN: 1083-4419
2014 Impact Factor: 6.22
2015 SCImago Journal Rankings: 3.921
ISI Accession Number ID: WOS:000180639100002

 

DC Field | Value | Language
dc.contributor.author | Ye, C | en_HK
dc.contributor.author | Yung, NHC | en_HK
dc.contributor.author | Wang, D | en_HK
dc.date.accessioned | 2007-03-23T04:34:50Z | -
dc.date.available | 2007-03-23T04:34:50Z | -
dc.date.issued | 2003 | en_HK
dc.identifier.citation | IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2003, v. 33 n. 1, p. 17-27 | en_HK
dc.identifier.issn | 1083-4419 | en_HK
dc.identifier.uri | http://hdl.handle.net/10722/42924 | -
dc.description.abstract | A fuzzy logic system offers an efficient approach to obstacle avoidance. However, it is difficult to maintain the correctness, consistency, and completeness of a fuzzy rule base constructed and tuned by a human expert. Reinforcement learning can learn the fuzzy rules automatically, but it incurs a heavy learning phase and may produce an insufficiently learned rule base due to the curse of dimensionality. In this paper, we propose a neural fuzzy system with mixed coarse and fine learning phases. In the first phase, supervised learning is used to determine the membership functions for the input and output variables simultaneously. After sufficient training, fine learning is applied, which employs a reinforcement learning algorithm to fine-tune the membership functions of the output variables. To ensure sufficient learning, a new learning method based on a modified Sutton and Barto model is proposed to strengthen exploration. Through this two-step tuning approach, the mobile robot is able to perform collision-free navigation. To address the difficulty of acquiring a large amount of highly consistent training data for supervised learning, we develop a virtual environment (VE) simulator that provides both desktop virtual environment (DVE) and immersive virtual environment (IVE) visualization. By having a skilled human operator drive a mobile robot in the virtual environment (DVE/IVE), training data are readily obtained and used to train the neural fuzzy system. | en_HK
dc.format.extent | 1020341 bytes | -
dc.format.extent | 5183 bytes | -
dc.format.mimetype | application/pdf | -
dc.format.mimetype | text/plain | -
dc.language | eng | en_HK
dc.publisher | IEEE | en_HK
dc.relation.ispartof | IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics | en_HK
dc.rights | Creative Commons: Attribution 3.0 Hong Kong License | -
dc.rights | ©2003 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. | en_HK
dc.subject | Fuzzy system | en_HK
dc.subject | Obstacle avoidance | en_HK
dc.subject | Reinforcement learning | en_HK
dc.subject | Supervised learning | en_HK
dc.subject | Virtual environment (VE) | en_HK
dc.title | A fuzzy controller with supervised learning assisted reinforcement learning algorithm for obstacle avoidance | en_HK
dc.type | Article | en_HK
dc.identifier.openurl | http://library.hku.hk:4550/resserv?sid=HKU:IR&issn=1083-4419&volume=33&spage=17&epage=27&date=2003&atitle=A+fuzzy+controller+with+supervised+learning+assisted+reinforcement+learning+algorithm+for+obstacle+avoidance | en_HK
dc.identifier.email | Yung, NHC:nyung@eee.hku.hk | en_HK
dc.identifier.authority | Yung, NHC=rp00226 | en_HK
dc.description.nature | published_or_final_version | en_HK
dc.identifier.doi | 10.1109/TSMCB.2003.808179 | en_HK
dc.identifier.scopus | eid_2-s2.0-0037278069 | en_HK
dc.identifier.hkuros | 81205 | -
dc.relation.references | http://www.scopus.com/mlt/select.url?eid=2-s2.0-0037278069&selection=ref&src=s&origin=recordpage | en_HK
dc.identifier.volume | 33 | en_HK
dc.identifier.issue | 1 | en_HK
dc.identifier.spage | 17 | en_HK
dc.identifier.epage | 27 | en_HK
dc.identifier.isi | WOS:000180639100002 | -
dc.publisher.place | United States | en_HK
dc.identifier.scopusauthorid | Ye, C=7202201245 | en_HK
dc.identifier.scopusauthorid | Yung, NHC=7003473369 | en_HK
dc.identifier.scopusauthorid | Wang, D=7407077210 | en_HK
