Article: Convolutional Neural Network-Based Visual Servoing for Eye-to-Hand Manipulator

Title: Convolutional Neural Network-Based Visual Servoing for Eye-to-Hand Manipulator
Authors: Tokuda, Fuyuki; Arai, Shogo; Kosuge, Kazuhiro
Keywords: Neural network; Manipulator; Visual servoing
Issue Date: 2021
Citation: IEEE Access, 2021, v. 9, p. 91820-91835
Abstract: We propose a convolutional neural network (CNN)-based visual servoing scheme for precise positioning of an eye-to-hand manipulator, in which the control input of the robot is computed directly from images by a neural network. Specifically, we propose the Difference of Encoded Features driven Interaction matrix Network (DEFINet), a new CNN for eye-to-hand visual servoing. DEFINet estimates the relative pose between the desired and current end-effector poses from desired and current images captured by an eye-to-hand camera. DEFINet comprises two branches of the same CNN that share weights and encode the target and current images, an architecture inspired by Siamese networks. Regressing the relative pose from the difference of the encoded target and current image features gives visual servoing with DEFINet high positioning accuracy. The training dataset is generated from sample data collected by moving the manipulator randomly in task space. The performance of the proposed scheme is evaluated through numerical simulation and through experiments with a six-DOF industrial manipulator in a real environment. Both the simulation and the experimental results demonstrate the effectiveness of the proposed method.
Persistent Identifier: http://hdl.handle.net/10722/303042
ISI Accession Number ID: WOS:000673810800001
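
The abstract describes a Siamese-style regression network: two weight-shared CNN branches encode the desired and current camera images, and the relative end-effector pose is regressed from the difference of the encoded features. The sketch below illustrates that idea in PyTorch; the backbone, feature width, head layout, and six-parameter pose output are assumptions for illustration, not the paper's actual DEFINet configuration.

```python
# Minimal sketch of the Siamese-style architecture described in the abstract.
# All details (layer sizes, pooling, pose parameterization) are assumptions;
# the paper's actual DEFINet configuration is not given in this record.
import torch
import torch.nn as nn


class DEFINetSketch(nn.Module):
    """Two weight-shared CNN branches encode the desired and current images;
    the relative end-effector pose is regressed from the feature difference."""

    def __init__(self, pose_dim: int = 6):
        super().__init__()
        # Shared encoder: the same CNN (same weights) is applied to both
        # images, which is what makes the two branches "Siamese".
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regression head: maps the difference of encoded features to a
        # relative pose (here 6 DOF: translation plus rotation parameters).
        self.head = nn.Sequential(
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, pose_dim),
        )

    def forward(self, desired_img: torch.Tensor,
                current_img: torch.Tensor) -> torch.Tensor:
        f_desired = self.encoder(desired_img)   # shared weights for both images
        f_current = self.encoder(current_img)
        # Regress the relative pose from the difference of encoded features.
        return self.head(f_desired - f_current)


if __name__ == "__main__":
    model = DEFINetSketch()
    desired = torch.randn(1, 3, 224, 224)  # desired (target) camera image
    current = torch.randn(1, 3, 224, 224)  # current camera image
    relative_pose = model(desired, current)
    print(relative_pose.shape)  # torch.Size([1, 6])
```

In a closed servo loop, the predicted relative pose would act as the error signal driving the robot's motion command until the current image matches the desired one; the actual control law and the interaction-matrix formulation behind DEFINet's name are detailed in the full paper.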

 

DC Field | Value | Language
dc.contributor.author | Tokuda, Fuyuki | -
dc.contributor.author | Arai, Shogo | -
dc.contributor.author | Kosuge, Kazuhiro | -
dc.date.accessioned | 2021-09-07T08:43:05Z | -
dc.date.available | 2021-09-07T08:43:05Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | IEEE Access, 2021, v. 9, p. 91820-91835 | -
dc.identifier.uri | http://hdl.handle.net/10722/303042 | -
dc.description.abstract | We propose a convolutional neural network (CNN)-based visual servoing scheme for precise positioning of an eye-to-hand manipulator, in which the control input of the robot is computed directly from images by a neural network. Specifically, we propose the Difference of Encoded Features driven Interaction matrix Network (DEFINet), a new CNN for eye-to-hand visual servoing. DEFINet estimates the relative pose between the desired and current end-effector poses from desired and current images captured by an eye-to-hand camera. DEFINet comprises two branches of the same CNN that share weights and encode the target and current images, an architecture inspired by Siamese networks. Regressing the relative pose from the difference of the encoded target and current image features gives visual servoing with DEFINet high positioning accuracy. The training dataset is generated from sample data collected by moving the manipulator randomly in task space. The performance of the proposed scheme is evaluated through numerical simulation and through experiments with a six-DOF industrial manipulator in a real environment. Both the simulation and the experimental results demonstrate the effectiveness of the proposed method. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Access | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | Neural network | -
dc.subject | Manipulator | -
dc.subject | Visual servoing | -
dc.title | Convolutional Neural Network-Based Visual Servoing for Eye-to-Hand Manipulator | -
dc.type | Article | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.1109/ACCESS.2021.3091737 | -
dc.identifier.scopus | eid_2-s2.0-85112171423 | -
dc.identifier.hkuros | 328272 | -
dc.identifier.volume | 9 | -
dc.identifier.spage | 91820 | -
dc.identifier.epage | 91835 | -
dc.identifier.eissn | 2169-3536 | -
dc.identifier.isi | WOS:000673810800001 | -
