Article: Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation

Title: Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation
Authors: Jia, Ruixing; Yang, Lei; Cao, Ying; Or, Calvin Kalun; Wang, Wenping; Pan, Jia
Keywords: automatic camera placement; human-robot interaction; learning from demonstrations; teleoperation
Issue Date: 25-Sep-2024
Publisher: Association for Computing Machinery
Citation: ACM Transactions on Human-Robot Interaction, 2024, v. 13, n. 3
Abstract: Teleoperation systems find many applications, from earlier search-and-rescue missions to more recent daily tasks. It is widely acknowledged that using external sensors can decouple the view of the remote scene from the motion of the robot arm during manipulation, facilitating the control task. However, this design requires the coordination of multiple operators, or may exhaust a single operator who must control both the manipulator arm and the external sensors. To address this challenge, our work introduces a viewpoint prediction model, the first data-driven approach that autonomously adjusts the viewpoint of a dynamic camera to assist in telemanipulation tasks. The model is parameterized by a deep neural network and trained on a set of human demonstrations. We propose a contrastive learning scheme that leverages viewpoints in a camera trajectory as contrastive data for network training. We demonstrate the effectiveness of the proposed viewpoint prediction model by integrating it into a real-world robotic system for telemanipulation. User studies reveal that our model outperforms several camera control methods in terms of control experience and reduces the perceived task load compared to manual camera control. As an assistive module of a telemanipulation system, our method significantly reduces task completion time for users who choose to adopt its recommendation.
Persistent Identifier: http://hdl.handle.net/10722/361878
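
The abstract describes a contrastive learning scheme that treats viewpoints along a demonstrated camera trajectory as contrastive data. The paper's exact formulation is not reproduced in this record; below is a minimal, hypothetical sketch of one common way such a scheme is instantiated (an InfoNCE-style loss in PyTorch), assuming the human-demonstrated viewpoint serves as the positive and other viewpoints sampled from the trajectory serve as negatives. All names and tensor shapes are illustrative, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def viewpoint_contrastive_loss(anchor_emb, pos_emb, neg_embs, temperature=0.1):
        # Hypothetical sketch, not the paper's actual loss.
        # anchor_emb: (B, D) embedding of the current manipulation/scene state
        # pos_emb:    (B, D) embedding of the human-demonstrated viewpoint (positive)
        # neg_embs:   (B, K, D) embeddings of K other trajectory viewpoints (negatives)
        anchor = F.normalize(anchor_emb, dim=-1)
        pos = F.normalize(pos_emb, dim=-1)
        negs = F.normalize(neg_embs, dim=-1)

        # Temperature-scaled cosine similarities, as in InfoNCE.
        pos_logit = (anchor * pos).sum(dim=-1, keepdim=True) / temperature   # (B, 1)
        neg_logits = torch.einsum('bd,bkd->bk', anchor, negs) / temperature  # (B, K)

        # Cross-entropy with the positive always in column 0.
        logits = torch.cat([pos_logit, neg_logits], dim=1)                   # (B, 1+K)
        labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
        return F.cross_entropy(logits, labels)

Under this formulation, minimizing the loss pulls the state embedding toward the demonstrated viewpoint and pushes it away from the other viewpoints in the same trajectory, so the trained network can rank candidate camera viewpoints at test time.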

 

DC Field | Value | Language
dc.contributor.author | Jia, Ruixing | -
dc.contributor.author | Yang, Lei | -
dc.contributor.author | Cao, Ying | -
dc.contributor.author | Or, Calvin Kalun | -
dc.contributor.author | Wang, Wenping | -
dc.contributor.author | Pan, Jia | -
dc.date.accessioned | 2025-09-17T00:31:31Z | -
dc.date.available | 2025-09-17T00:31:31Z | -
dc.date.issued | 2024-09-25 | -
dc.identifier.citation | ACM Transactions on Human-Robot Interaction, 2024, v. 13, n. 3 | -
dc.identifier.uri | http://hdl.handle.net/10722/361878 | -
dc.description.abstract | Teleoperation systems find many applications, from earlier search-and-rescue missions to more recent daily tasks. It is widely acknowledged that using external sensors can decouple the view of the remote scene from the motion of the robot arm during manipulation, facilitating the control task. However, this design requires the coordination of multiple operators, or may exhaust a single operator who must control both the manipulator arm and the external sensors. To address this challenge, our work introduces a viewpoint prediction model, the first data-driven approach that autonomously adjusts the viewpoint of a dynamic camera to assist in telemanipulation tasks. The model is parameterized by a deep neural network and trained on a set of human demonstrations. We propose a contrastive learning scheme that leverages viewpoints in a camera trajectory as contrastive data for network training. We demonstrate the effectiveness of the proposed viewpoint prediction model by integrating it into a real-world robotic system for telemanipulation. User studies reveal that our model outperforms several camera control methods in terms of control experience and reduces the perceived task load compared to manual camera control. As an assistive module of a telemanipulation system, our method significantly reduces task completion time for users who choose to adopt its recommendation. | -
dc.language | eng | -
dc.publisher | Association for Computing Machinery | -
dc.relation.ispartof | ACM Transactions on Human-Robot Interaction | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | automatic camera placement | -
dc.subject | human-robot interaction | -
dc.subject | learning from demonstrations | -
dc.subject | teleoperation | -
dc.title | Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation | -
dc.type | Article | -
dc.identifier.doi | 10.1145/3660348 | -
dc.identifier.scopus | eid_2-s2.0-85205231891 | -
dc.identifier.volume | 13 | -
dc.identifier.issue | 3 | -
dc.identifier.eissn | 2573-9522 | -
dc.identifier.issnl | 2573-9522 | -
