Links for fulltext (may require subscription):
- Publisher Website: https://doi.org/10.1145/3660348
- Scopus: eid_2-s2.0-85205231891
Citations:
- Scopus: 0
Article: Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation
| Title | Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation |
|---|---|
| Authors | Jia, Ruixing; Yang, Lei; Cao, Ying; Or, Calvin Kalun; Wang, Wenping; Pan, Jia |
| Keywords | automatic camera placement; human-robot interaction; learning from demonstrations; teleoperation |
| Issue Date | 25-Sep-2024 |
| Publisher | Association for Computing Machinery |
| Citation | ACM Transactions on Human-Robot Interaction, 2024, v. 13, n. 3 |
| Abstract | Teleoperation systems find many applications, from earlier search-and-rescue missions to more recent daily tasks. It is widely acknowledged that using external sensors can decouple the view of the remote scene from the motion of the robot arm during manipulation, facilitating the control task. However, this design requires the coordination of multiple operators, or it may exhaust a single operator who must control both the manipulator arm and the external sensors. To address this challenge, our work introduces a viewpoint prediction model, the first data-driven approach that autonomously adjusts the viewpoint of a dynamic camera to assist in telemanipulation tasks. The model is parameterized by a deep neural network and trained on a set of human demonstrations. We propose a contrastive learning scheme that leverages viewpoints in a camera trajectory as contrastive data for network training. We demonstrate the effectiveness of the proposed viewpoint prediction model by integrating it into a real-world robotic system for telemanipulation. User studies reveal that our model outperforms several camera control methods in terms of control experience and reduces the perceived task load compared to manual camera control. As an assistive module of a telemanipulation system, our method significantly reduces task completion time for users who choose to adopt its recommendations. |
| Persistent Identifier | http://hdl.handle.net/10722/361878 |
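The abstract describes a contrastive learning scheme that treats viewpoints along a camera trajectory as contrastive data, with the demonstrated viewpoint serving as the target. The paper's exact loss is not reproduced here, so the following is a minimal, hypothetical InfoNCE-style sketch in NumPy; all names, the cosine-similarity scoring, and the choice of trajectory viewpoints as negatives are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce_loss(state_emb, viewpoint_embs, positive_idx, temperature=0.1):
    """InfoNCE-style loss: the demonstrated viewpoint (positive_idx) should
    score higher against the scene-state embedding than the other viewpoints
    drawn from the same camera trajectory (negatives)."""
    # Cosine similarity between the state embedding and each candidate viewpoint.
    s = state_emb / np.linalg.norm(state_emb)
    v = viewpoint_embs / np.linalg.norm(viewpoint_embs, axis=1, keepdims=True)
    logits = v @ s / temperature
    # Softmax cross-entropy with the demonstrated viewpoint as the target class.
    logits = logits - logits.max()  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[positive_idx]

# Toy example: 1 demonstrated viewpoint plus 7 negatives from the same trajectory.
state = rng.normal(size=16)
views = rng.normal(size=(8, 16))
views[0] = state + 0.05 * rng.normal(size=16)  # index 0 closely matches the state
loss = info_nce_loss(state, views, positive_idx=0)
print(round(float(loss), 4))
```

The loss is low when the network ranks the demonstrated viewpoint above the trajectory negatives, which is the training signal the abstract's contrastive scheme plausibly exploits.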
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Jia, Ruixing | - |
| dc.contributor.author | Yang, Lei | - |
| dc.contributor.author | Cao, Ying | - |
| dc.contributor.author | Or, Calvin Kalun | - |
| dc.contributor.author | Wang, Wenping | - |
| dc.contributor.author | Pan, Jia | - |
| dc.date.accessioned | 2025-09-17T00:31:31Z | - |
| dc.date.available | 2025-09-17T00:31:31Z | - |
| dc.date.issued | 2024-09-25 | - |
| dc.identifier.citation | ACM Transactions on Human-Robot Interaction, 2024, v. 13, n. 3 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/361878 | - |
| dc.description.abstract | Teleoperation systems find many applications, from earlier search-and-rescue missions to more recent daily tasks. It is widely acknowledged that using external sensors can decouple the view of the remote scene from the motion of the robot arm during manipulation, facilitating the control task. However, this design requires the coordination of multiple operators, or it may exhaust a single operator who must control both the manipulator arm and the external sensors. To address this challenge, our work introduces a viewpoint prediction model, the first data-driven approach that autonomously adjusts the viewpoint of a dynamic camera to assist in telemanipulation tasks. The model is parameterized by a deep neural network and trained on a set of human demonstrations. We propose a contrastive learning scheme that leverages viewpoints in a camera trajectory as contrastive data for network training. We demonstrate the effectiveness of the proposed viewpoint prediction model by integrating it into a real-world robotic system for telemanipulation. User studies reveal that our model outperforms several camera control methods in terms of control experience and reduces the perceived task load compared to manual camera control. As an assistive module of a telemanipulation system, our method significantly reduces task completion time for users who choose to adopt its recommendations. | - |
| dc.language | eng | - |
| dc.publisher | Association for Computing Machinery | - |
| dc.relation.ispartof | ACM Transactions on Human-Robot Interaction | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | automatic camera placement | - |
| dc.subject | Human-robot interaction | - |
| dc.subject | learning from demonstrations | - |
| dc.subject | teleoperation | - |
| dc.title | Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1145/3660348 | - |
| dc.identifier.scopus | eid_2-s2.0-85205231891 | - |
| dc.identifier.volume | 13 | - |
| dc.identifier.issue | 3 | - |
| dc.identifier.eissn | 2573-9522 | - |
| dc.identifier.issnl | 2573-9522 | - |
