Conference Paper: Deep Learning Scooping Motion Using Bilateral Teleoperations

Title: Deep Learning Scooping Motion Using Bilateral Teleoperations
Authors: Ochi, Hitoe; Wan, Weiwei; Yang, Yajue; Yamanobe, Natsuki; Pan, Jia; Harada, Kensuke
Issue Date: 2019
Citation: 3rd IEEE International Conference on Advanced Robotics and Mechatronics (ICARM), Singapore, 18-20 July 2018. In 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM), 2019, p. 118-123
Abstract: We present a bilateral teleoperation system for task learning and robot motion generation. Our system comprises a bilateral teleoperation platform and deep learning software. The deep learning software uses human demonstrations performed on the bilateral teleoperation platform to collect visual images and robot encoder values, and leverages these paired datasets to learn the inter-modal correspondence between visual images and robot motion. Specifically, it combines Deep Convolutional Auto-Encoders (DCAE) over image regions with a Recurrent Neural Network with Long Short-Term Memory units (LSTM-RNN) over robot motor angles to learn the motions taught by human teleoperation. The learned models are then used to predict new motion trajectories for similar tasks. Experimental results show that our system can adaptively generate motion for similar scooping tasks. Detailed analysis of failure cases from the experiments is performed, and insights into what the system can and cannot do are summarized.
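The abstract's pipeline can be sketched as follows. This is a minimal NumPy illustration of the *shape* of the approach only, not the paper's implementation: all dimensions, weight initializations, and function names are illustrative assumptions, and the DCAE is simplified to a single linear encoder/decoder so the sketch stays dependency-free. The idea preserved is that an auto-encoder compresses each image region to a low-dimensional feature, and an LSTM over concatenated [image feature, motor angles] predicts the next joint angles.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative dimensions (assumptions, not from the paper) ---
IMG_DIM = 64 * 64      # flattened grayscale image region
FEAT_DIM = 32          # auto-encoder bottleneck size
N_JOINTS = 6           # robot motor angles
HIDDEN = 64            # LSTM hidden size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Auto-encoder stand-in: one linear encoder/decoder pair. The paper uses
# a Deep Convolutional Auto-Encoder (DCAE); a linear bottleneck keeps the
# sketch self-contained while showing the image -> feature compression.
W_enc = rng.normal(0, 0.01, (FEAT_DIM, IMG_DIM))
W_dec = rng.normal(0, 0.01, (IMG_DIM, FEAT_DIM))

def encode(img):
    return np.tanh(W_enc @ img)

def decode(feat):
    return W_dec @ feat  # reconstruction target during auto-encoder training

# One LSTM cell over the concatenated [image feature, motor angles] input.
IN_DIM = FEAT_DIM + N_JOINTS
W = rng.normal(0, 0.01, (4 * HIDDEN, IN_DIM + HIDDEN))  # gates: i, f, o, g
b = np.zeros(4 * HIDDEN)
W_out = rng.normal(0, 0.01, (N_JOINTS, HIDDEN))         # hidden -> next angles

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    i, f, o = (sigmoid(z[k * HIDDEN:(k + 1) * HIDDEN]) for k in range(3))
    g = np.tanh(z[3 * HIDDEN:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def predict_trajectory(images, angles0, steps):
    """Roll the model forward: each step consumes the current image
    feature and motor angles and predicts the next joint angles."""
    h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
    angles = angles0
    traj = []
    for t in range(steps):
        x = np.concatenate([encode(images[t]), angles])
        h, c = lstm_step(x, h, c)
        angles = W_out @ h
        traj.append(angles)
    return np.array(traj)

images = rng.random((5, IMG_DIM))  # dummy image-region sequence
traj = predict_trajectory(images, np.zeros(N_JOINTS), steps=5)
print(traj.shape)  # (5, 6): five predicted sets of motor angles
```

In the paper the two components are trained on the demonstration data (DCAE on image reconstruction, LSTM-RNN on motor-angle sequences); the random weights above merely stand in for trained ones to show the data flow.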
Persistent Identifier: http://hdl.handle.net/10722/308897
ISI Accession Number ID: WOS:000458327200021


DC Field / Value
dc.contributor.author: Ochi, Hitoe
dc.contributor.author: Wan, Weiwei
dc.contributor.author: Yang, Yajue
dc.contributor.author: Yamanobe, Natsuki
dc.contributor.author: Pan, Jia
dc.contributor.author: Harada, Kensuke
dc.date.accessioned: 2021-12-08T07:50:21Z
dc.date.available: 2021-12-08T07:50:21Z
dc.date.issued: 2019
dc.identifier.citation: 3rd IEEE International Conference on Advanced Robotics and Mechatronics (ICARM), Singapore, 18-20 July 2018. In 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM), 2019, p. 118-123
dc.identifier.uri: http://hdl.handle.net/10722/308897
dc.description.abstract: We present a bilateral teleoperation system for task learning and robot motion generation. Our system comprises a bilateral teleoperation platform and deep learning software. The deep learning software uses human demonstrations performed on the bilateral teleoperation platform to collect visual images and robot encoder values, and leverages these paired datasets to learn the inter-modal correspondence between visual images and robot motion. Specifically, it combines Deep Convolutional Auto-Encoders (DCAE) over image regions with a Recurrent Neural Network with Long Short-Term Memory units (LSTM-RNN) over robot motor angles to learn the motions taught by human teleoperation. The learned models are then used to predict new motion trajectories for similar tasks. Experimental results show that our system can adaptively generate motion for similar scooping tasks. Detailed analysis of failure cases from the experiments is performed, and insights into what the system can and cannot do are summarized.
dc.language: eng
dc.relation.ispartof: 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM)
dc.title: Deep Learning Scooping Motion Using Bilateral Teleoperations
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICARM.2018.8610813
dc.identifier.scopus: eid_2-s2.0-85061488661
dc.identifier.spage: 118
dc.identifier.epage: 123
dc.identifier.isi: WOS:000458327200021
