
Postgraduate thesis: Visual servoing path-planning for generalized cameras and objects

Title: Visual servoing path-planning for generalized cameras and objects
Authors: Shen, Tiantian (沈添天)
Advisor(s): Chesi, G; Hung, YS
Issue Date: 2013
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Shen, T. [沈添天]. (2013). Visual servoing path-planning for generalized cameras and objects. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. Retrieved from http://dx.doi.org/10.5353/th_b5089987
Abstract: Visual servoing (VS) is an automatic control technique that uses vision feedback to control robot motion. Eye-in-hand VS systems, with the vision sensor mounted directly on the robot end-effector, have received significant attention, in particular for the task of steering the vision sensor (usually a camera) from its current position to a desired one identified by image features shown in advance. The servo uses the difference between the current and the desired views (provided a priori) of some objects to generate real-time driving signals. This approach is also known as the “teach-by-showing” method. To accomplish such a task, many constraints and limits must be satisfied, such as the camera field of view (FOV), robot joint limits, and collision and occlusion avoidance. Path-planning techniques, as one branch of high-level control strategies, are explored in this thesis to enforce these constraints in VS tasks for different types of cameras and objects.

First, a VS path-planning strategy is proposed for a class of cameras that includes conventional perspective cameras, fisheye cameras, and catadioptric systems. These cameras are described by a unified mathematical model, and the strategy consists of designing image trajectories that allow the camera to reach the desired position while satisfying the camera FOV limit and end-effector collision avoidance. To this end, the proposed strategy projects the available image features onto a virtual plane and computes a feasible camera trajectory through polynomial programming. The computed image trajectory is then tracked by an image-based visual servoing (IBVS) controller. Experimental results with a fisheye camera mounted on a 6-degree-of-freedom (6-DoF) robot arm illustrate the proposed strategy.

Second, this thesis proposes a path-planning strategy for visual servoing with image moments, in which the observed features are not restricted to points. Image moments of solid objects such as circles and spheres are more intuitive features than the point features that dominate VS applications. The problem consists of planning a trajectory that ensures convergence of the robot end-effector to the desired position while satisfying workspace (Cartesian-space) constraints on the end-effector and visibility constraints on these solid objects, in particular collision and occlusion avoidance. A solution based on polynomial parametrization is proposed and validated by simulation and experimental results.

Third, constrained optimization is combined with robot teach-by-demonstration to simultaneously address the visibility constraint, joint limits, and whole-arm collisions for robust vision-based control of a robot manipulator. User demonstration data generates safe regions for robot motion with respect to joint limits and potential whole-arm collisions. Constrained optimization uses these safe regions to generate new feasible trajectories, under the visibility constraint, that achieve the desired view of the target (e.g., a pre-grasping location) at new, undemonstrated locations. To fulfill these requirements, camera trajectories that traverse a set of selected control points are modeled and optimized using either quintic Hermite splines or polynomials with C2 continuity. Experiments with a 7-DoF articulated arm validate the proposed method.
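The recurring idea across the three contributions is to parametrize the camera trajectory with smooth polynomials and to verify visibility constraints along it before execution. The Python sketch below (numpy only) is a rough, self-contained illustration of that idea under a plain pinhole model; it is not the thesis's unified camera model, its polynomial programming formulation, or its IBVS controller, and all function names, intrinsics, and poses are assumptions made up for this example.

import numpy as np

def so3_log(R):
    """Axis-angle vector of a rotation matrix (Rodrigues log map)."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-8:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle * w / (2.0 * np.sin(angle))

def so3_exp(w):
    """Rotation matrix from an axis-angle vector (Rodrigues exp map)."""
    angle = np.linalg.norm(w)
    if angle < 1e-8:
        return np.eye(3)
    k = w / angle
    K = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

def camera_pose(s, R0, t0, R1, t1):
    """Pose along the path for s in [0, 1]: smooth-step polynomial in
    translation (zero end velocities), geodesic interpolation in rotation."""
    h = 3.0 * s**2 - 2.0 * s**3          # h(0) = 0, h(1) = 1, h'(0) = h'(1) = 0
    t = (1.0 - h) * t0 + h * t1
    R = R0 @ so3_exp(h * so3_log(R0.T @ R1))
    return R, t

def in_fov(point_w, R, t, K, width, height):
    """Project a world point with a pinhole model (R, t = camera-to-world
    rotation and camera centre) and test it against the image bounds."""
    p_c = R.T @ (point_w - t)            # world frame -> camera frame
    if p_c[2] <= 0.0:                    # point behind the camera
        return False
    u, v, w = K @ p_c
    u, v = u / w, v / w
    return 0.0 <= u <= width and 0.0 <= v <= height

# Made-up geometry: check the FOV constraint at 50 samples along the path.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R_start, t_start = np.eye(3), np.array([0.0, 0.0, -1.0])
R_goal, t_goal = so3_exp(np.array([0.0, -0.35, 0.0])), np.array([0.4, 0.1, -0.6])
target = np.array([0.0, 0.0, 0.5])       # feature point in the world frame

ok = all(in_fov(target, *camera_pose(s, R_start, t_start, R_goal, t_goal),
                K, 640, 480) for s in np.linspace(0.0, 1.0, 50))
print("FOV constraint satisfied along the path:", ok)   # True for this geometry

Note that the sketch only conveys the geometry of the visibility test on a fixed, given path; in the thesis the trajectory itself is the unknown, and the FOV, workspace, and collision constraints are imposed while computing its polynomial coefficients.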
Degree: Doctor of Philosophy
Subject: Servomechanisms; Robots - Control systems; Computer vision
Dept/Program: Electrical and Electronic Engineering
Persistent Identifier: http://hdl.handle.net/10722/192842
HKU Library Item ID: b5089987

 

DC Field: Value
dc.contributor.advisor: Chesi, G
dc.contributor.advisor: Hung, YS
dc.contributor.author: Shen, Tiantian
dc.contributor.author: 沈添天
dc.date.accessioned: 2013-11-24T02:01:07Z
dc.date.available: 2013-11-24T02:01:07Z
dc.date.issued: 2013
dc.identifier.citation: Shen, T. [沈添天]. (2013). Visual servoing path-planning for generalized cameras and objects. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. Retrieved from http://dx.doi.org/10.5353/th_b5089987
dc.identifier.uri: http://hdl.handle.net/10722/192842
dc.language: eng
dc.publisher: The University of Hong Kong (Pokfulam, Hong Kong)
dc.relation.ispartof: HKU Theses Online (HKUTO)
dc.rights: The author retains all proprietary rights (such as patent rights) and the right to use in future works.
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.source.uri: http://hub.hku.hk/bib/B50899879
dc.subject.lcsh: Servomechanisms.
dc.subject.lcsh: Robots - Control systems.
dc.subject.lcsh: Computer vision.
dc.title: Visual servoing path-planning for generalized cameras and objects
dc.type: PG_Thesis
dc.identifier.hkul: b5089987
dc.description.thesisname: Doctor of Philosophy
dc.description.thesislevel: Doctoral
dc.description.thesisdiscipline: Electrical and Electronic Engineering
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.5353/th_b5089987
dc.date.hkucongregation: 2013
dc.identifier.mmsid: 991035825979703414
