
Postgraduate thesis: Rigid-soft interactive learning for robotic manipulation

Title: Rigid-soft interactive learning for robotic manipulation
Authors: Yang, Linhan [楊林瀚]
Issue Date: 2024
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Yang, L. [楊林瀚]. (2024). Rigid-soft interactive learning for robotic manipulation. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: Recent years have witnessed significant advances in robotic manipulation through the adoption of machine learning methods. Unlike domains such as computer vision and natural language processing, robotic manipulation involves complex physical interactions that pose substantial challenges for developing scalable, generalizable control policies. In this thesis, we explore the understanding and representation learning of these interactions across various robotic manipulation scenarios. We classify these interactions into two categories: internal interactions, between the manipulator (gripper or robot) and the objects, and external interactions, involving the objects/robots and their external environments.

Focusing on internal interactions, we first investigate a grasp prediction task. We vary variables such as gripper stiffness (rigid or soft fingers) and grasp type (power or precision), which implicitly encodes interaction data within our dataset. Our experiments reveal that this configuration greatly improves training speed and grasping performance. These interactions can also be represented explicitly as force and torque data, obtained by equipping the finger surfaces with multi-channel optical fibers. Using this local interaction data, we develop an interactive grasp policy: the proprioceptive capabilities of the fingers enable them to conform to object contact regions, ensuring a stable grasp. We then extend our research to dexterous in-hand manipulation, specifically rotating two spheres within the hand by 180 degrees. During this task, interactions between the objects and the hand are continuously broken and re-formed. We use a four-fingered hand equipped with a tactile sensor array to gather comprehensive interaction data, and we introduce TacGNN, a generalized graph-based model of tactile information across various shapes, which allows us to achieve in-hand manipulation using proprioceptive tactile sensing alone.

Turning to external interactions between objects/robots and their environments, we begin with rigid-rigid interaction in a loco-manipulation problem. We merge interaction data from both locomotion and manipulation into a unified graph-based representation; a shared control policy is trained in simulation and transferred directly to the real world in a zero-shot manner. Finally, we investigate rigid-soft interactions through a fabric manipulation task involving deformable objects. We develop a graph-based, environment-aware representation of fabric that integrates environmental data and encodes interaction data, enabling each fabric segment to detect and respond to environmental contact. Employing this strategy, we successfully execute a goal-conditioned manipulation task, placing the fabric in a specified configuration within complex scenarios on the first attempt.
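The abstract describes TacGNN only at a high level: tactile readings are treated as a graph so that one model generalizes across sensor layouts and object shapes. The sketch below illustrates that general idea in a minimal, hypothetical form; the k-nearest-neighbour graph construction, the mean-aggregation layer, the layer sizes, and all names are assumptions made for illustration, not the thesis's actual implementation.

```python
# Hypothetical sketch: encode a tactile taxel array as a graph and run one
# round of message passing, in the spirit of the abstract's TacGNN.
# All design choices here (k-NN edges, mean aggregation, dimensions) are
# illustrative assumptions, not the thesis's implementation.
import torch
import torch.nn as nn

def knn_edges(positions: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Connect each taxel to its k nearest neighbours."""
    dists = torch.cdist(positions, positions)      # (N, N) pairwise distances
    dists.fill_diagonal_(float("inf"))             # exclude self-loops
    nbrs = dists.topk(k, largest=False).indices    # (N, k) neighbour indices
    src = torch.arange(positions.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])    # (2, N*k) edge list

class TactileGNNLayer(nn.Module):
    """One mean-aggregation message-passing layer over the taxel graph."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.msg = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        src, dst = edges
        m = self.msg(torch.cat([x[src], x[dst]], dim=-1))  # one message per edge
        out = torch.zeros(x.size(0), m.size(-1), device=x.device)
        out.index_add_(0, dst, m)                          # sum messages at each node
        deg = torch.bincount(dst, minlength=x.size(0)).clamp(min=1)
        return torch.relu(out / deg.unsqueeze(-1))         # mean over neighbours

# Usage: 64 taxels, each with a 3-D position and one pressure reading.
pos = torch.rand(64, 3)
feat = torch.cat([pos, torch.rand(64, 1)], dim=-1)         # (64, 4) node features
edges = knn_edges(pos, k=4)
h = TactileGNNLayer(in_dim=4, out_dim=32)(feat, edges)     # (64, 32) embeddings
```

Because the graph is built from taxel positions rather than a fixed image grid, the same layer applies unchanged to tactile arrays of different sizes and layouts, which is the property the abstract attributes to a generalized tactile model.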
Degree: Doctor of Philosophy
Subjects: Robotics; Machine learning
Dept/Program: Computer Science
Persistent Identifier: http://hdl.handle.net/10722/350344


DC Field: Value
dc.contributor.author: Yang, Linhan
dc.contributor.author: 楊林瀚
dc.date.accessioned: 2024-10-23T09:46:20Z
dc.date.available: 2024-10-23T09:46:20Z
dc.date.issued: 2024
dc.identifier.citation: Yang, L. [楊林瀚]. (2024). Rigid-soft interactive learning for robotic manipulation. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
dc.identifier.uri: http://hdl.handle.net/10722/350344
dc.language: eng
dc.publisher: The University of Hong Kong (Pokfulam, Hong Kong)
dc.relation.ispartof: HKU Theses Online (HKUTO)
dc.rights: The author retains all proprietary rights (such as patent rights) and the right to use in future works.
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject.lcsh: Robotics
dc.subject.lcsh: Machine learning
dc.title: Rigid-soft interactive learning for robotic manipulation
dc.type: PG_Thesis
dc.description.thesisname: Doctor of Philosophy
dc.description.thesislevel: Doctoral
dc.description.thesisdiscipline: Computer Science
dc.description.nature: published_or_final_version
dc.date.hkucongregation: 2024
dc.identifier.mmsid: 991044861893603414
