Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1109/CVPR52729.2023.00851
- Web of Science: WOS:001062522101011
Citations:
- Web of Science: 0

Appears in Collections:
Conference Paper: Command-driven Articulated Object Understanding and Manipulation
| Title | Command-driven Articulated Object Understanding and Manipulation |
|---|---|
| Authors | Chu, Ruihang; Liu, Zhengzhe; Ye, Xiaoqing; Tan, Xiao; Qi, Xiaojuan; Fu, Chi-Wing; Jia, Jiaya |
| Issue Date | 22-Aug-2023 |
| Abstract | We present Cart, a new approach to articulated object manipulation driven by human commands. Beyond existing work that focuses on inferring articulation structures, we further support manipulating articulated shapes to align them with simple command templates. The key idea of Cart is to utilize the predicted object structure to connect visual observations with user commands for effective manipulation. This is achieved by encoding command messages for motion prediction and by a test-time adaptation that adjusts the amount of movement from command supervision alone. Across a rich variety of object categories, Cart accurately manipulates object shapes and outperforms state-of-the-art approaches in understanding the inherent articulation structures. It also generalizes well to unseen object categories and real-world objects. We hope Cart can open new directions for instructing machines to operate articulated objects. Code is available at https://github.com/dvlab-research/Cart. |
| Persistent Identifier | http://hdl.handle.net/10722/333839 |
| ISI Accession Number ID | WOS:001062522101011 |

| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Chu, Ruihang | - |
| dc.contributor.author | Liu, Zhengzhe | - |
| dc.contributor.author | Ye, Xiaoqing | - |
| dc.contributor.author | Tan, Xiao | - |
| dc.contributor.author | Qi, Xiaojuan | - |
| dc.contributor.author | Fu, Chi-Wing | - |
| dc.contributor.author | Jia, Jiaya | - |
| dc.date.accessioned | 2023-10-06T08:39:30Z | - |
| dc.date.available | 2023-10-06T08:39:30Z | - |
| dc.date.issued | 2023-08-22 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/333839 | - |
| dc.description.abstract | We present Cart, a new approach to articulated object manipulation driven by human commands. Beyond existing work that focuses on inferring articulation structures, we further support manipulating articulated shapes to align them with simple command templates. The key idea of Cart is to utilize the predicted object structure to connect visual observations with user commands for effective manipulation. This is achieved by encoding command messages for motion prediction and by a test-time adaptation that adjusts the amount of movement from command supervision alone. Across a rich variety of object categories, Cart accurately manipulates object shapes and outperforms state-of-the-art approaches in understanding the inherent articulation structures. It also generalizes well to unseen object categories and real-world objects. We hope Cart can open new directions for instructing machines to operate articulated objects. Code is available at https://github.com/dvlab-research/Cart. | - |
| dc.language | eng | - |
| dc.relation.ispartof | 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (17/06/2023-24/06/2023, Vancouver, BC, Canada) | - |
| dc.title | Command-driven Articulated Object Understanding and Manipulation | - |
| dc.type | Conference_Paper | - |
| dc.identifier.doi | 10.1109/CVPR52729.2023.00851 | - |
| dc.identifier.isi | WOS:001062522101011 | - |
