
Article: An intelligent mobile vehicle navigator based on fuzzy logic and reinforcement learning

Title: An intelligent mobile vehicle navigator based on fuzzy logic and reinforcement learning
Authors: Yung, NHC; Ye, C
Keywords: Behavior fusion; Fuzzy logic; Goal seeking; Neural network; Obstacle avoidance; Reinforcement learning
Issue Date: 1999
Publisher: IEEE
Citation: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 1999, v. 29, n. 2, p. 314-321
Abstract: In this paper, an alternative training approach to the EEM-based training method is presented and a fuzzy reactive navigation architecture is described. The new training method learns 270 times faster and incurs only 4% of the learning cost of the EEM method. It also offers very reliable convergence of learning, a very high proportion of learned rules (98.8%), and high adaptability. Using the rule base learned from the new method, the proposed fuzzy reactive navigator fuses the obstacle avoidance behavior and the goal seeking behavior to determine its control actions, where adaptability is achieved with the aid of an environment evaluator. A comparison of this navigator using the rule bases obtained from the new training method and the EEM method shows that the new navigator guarantees a solution and that its solution is more acceptable. © 1999 IEEE.
Persistent Identifier: http://hdl.handle.net/10722/42817
ISSN: 1083-4419
2014 Impact Factor: 6.22
2015 SCImago Journal Rankings: 3.921
ISI Accession Number ID: WOS:000079319900019
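The behavior-fusion idea in the abstract — blending an obstacle-avoidance command and a goal-seeking command, with the blend driven by an environment evaluator — can be illustrated with a minimal sketch. All function names, signatures, and numbers below are hypothetical; the paper's actual fuzzy rule base and evaluator are not reproduced here.

```python
# Minimal behavior-fusion sketch: two reactive behaviors each propose a
# steering command, and a weight (standing in for the paper's environment
# evaluator output) blends them into one control action.

def goal_seeking(heading_to_goal: float) -> float:
    """Propose a steering command that turns toward the goal."""
    return heading_to_goal

def obstacle_avoidance(obstacle_bearing: float) -> float:
    """Propose a steering command that turns away from the nearest obstacle."""
    return -obstacle_bearing

def fuse(heading_to_goal: float, obstacle_bearing: float, danger: float) -> float:
    """Blend the two behaviors. `danger` in [0, 1] plays the role of the
    environment evaluator's output (1 = obstacle very close)."""
    w = max(0.0, min(1.0, danger))
    return (w * obstacle_avoidance(obstacle_bearing)
            + (1.0 - w) * goal_seeking(heading_to_goal))

# With no danger the vehicle steers toward the goal; with maximum danger
# it steers purely away from the obstacle.
print(fuse(0.5, 0.2, 0.0))  # pure goal seeking
print(fuse(0.5, 0.2, 1.0))  # pure obstacle avoidance
```

In the paper itself the blending is done by a fuzzy rule base learned via reinforcement learning rather than a fixed linear weight; this sketch only shows the fusion structure.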

DC Field | Value | Language
dc.contributor.author | Yung, NHC | en_HK
dc.contributor.author | Ye, C | en_HK
dc.date.accessioned | 2007-03-23T04:32:45Z | -
dc.date.available | 2007-03-23T04:32:45Z | -
dc.date.issued | 1999 | en_HK
dc.identifier.citation | IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 1999, v. 29, n. 2, p. 314-321 | en_HK
dc.identifier.issn | 1083-4419 | en_HK
dc.identifier.uri | http://hdl.handle.net/10722/42817 | -
dc.description.abstract | In this paper, an alternative training approach to the EEM-based training method is presented and a fuzzy reactive navigation architecture is described. The new training method is 270 times faster in learning speed; and is only 4% of the learning cost of the EEM method. It also has very reliable convergence of learning; very high number of learned rules (98.8%); and high adaptability. Using the rule base learned from the new method, the proposed fuzzy reactive navigator fuses the obstacle avoidance behavior and goal seeking behavior to determine its control actions, where adaptability is achieved with the aid of an environment evaluator. A comparison of this navigator using the rule bases obtained from the new training method and the EEM method, shows that the new navigator guarantees a solution and its solution is more acceptable. © 1999 IEEE. | en_HK
dc.format.extent | 549810 bytes | -
dc.format.extent | 5183 bytes | -
dc.format.mimetype | application/pdf | -
dc.format.mimetype | text/plain | -
dc.language | eng | en_HK
dc.publisher | IEEE | en_HK
dc.relation.ispartof | IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics | en_HK
dc.rights | Creative Commons: Attribution 3.0 Hong Kong License | -
dc.rights | ©1999 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. | en_HK
dc.subject | Behavior fusion | en_HK
dc.subject | Fuzzy logic | en_HK
dc.subject | Goal seeking | en_HK
dc.subject | Neural network | en_HK
dc.subject | Obstacle avoidance | en_HK
dc.subject | Reinforcement learning | en_HK
dc.subject | Vehicle navigation | en_HK
dc.title | An intelligent mobile vehicle navigator based on fuzzy logic and reinforcement learning | en_HK
dc.type | Article | en_HK
dc.identifier.openurl | http://library.hku.hk:4550/resserv?sid=HKU:IR&issn=1083-4419&volume=29&issue=2&spage=314&epage=321&date=1999&atitle=An+intelligent+mobile+vehicle+navigator+based+on+fuzzy+logic+and+reinforcement+learning | en_HK
dc.identifier.email | Yung, NHC:nyung@eee.hku.hk | en_HK
dc.identifier.authority | Yung, NHC=rp00226 | en_HK
dc.description.nature | published_or_final_version | en_HK
dc.identifier.doi | 10.1109/3477.752807 | en_HK
dc.identifier.scopus | eid_2-s2.0-0033115244 | en_HK
dc.identifier.hkuros | 45788 | -
dc.relation.references | http://www.scopus.com/mlt/select.url?eid=2-s2.0-0033115244&selection=ref&src=s&origin=recordpage | en_HK
dc.identifier.volume | 29 | en_HK
dc.identifier.issue | 2 | en_HK
dc.identifier.spage | 314 | en_HK
dc.identifier.epage | 321 | en_HK
dc.identifier.isi | WOS:000079319900019 | -
dc.publisher.place | United States | en_HK
dc.identifier.scopusauthorid | Yung, NHC=7003473369 | en_HK
dc.identifier.scopusauthorid | Ye, C=7202201245 | en_HK
