Article: Target object identification and location based on multi-sensor fusion

Title: Target object identification and location based on multi-sensor fusion
Authors: Jiang, Yong; Wang, Hong Guang; Xi, Ning
Keywords: Camera and laser range finder; Object identification and location; Multi-sensor fusion; Mobile manipulations
Issue Date: 2013
Citation: International Journal of Automation and Smart Technology, 2013, v. 3, n. 1, p. 57-65
Abstract: For a mobile robot in an unknown environment, identifying a target object and locating it autonomously is a very challenging task. In this paper, a novel multi-sensor fusion method based on a camera and a laser range finder (LRF) for mobile manipulation is proposed. Although a camera acquires large quantities of information, it does not directly provide 3D data about the environment; moreover, camera image processing is complex and easily affected by changes in ambient light. Given the LRF's ability to measure the 3D coordinates of the environment directly and its robustness to outside disturbances, and the camera's strength in capturing rich color information, the two sensors are combined to exploit their respective advantages, yielding more accurate measurements while simplifying information processing. A homogeneous transformation model of the system was built to overlay the camera image with the measurement point cloud of the pitching LRF and to reconstruct a 3D image that includes per-pixel depth information. Then, by combining color features from the camera image with shape features from the LRF measurement data, autonomous identification and location of the target object is achieved. To extract the shape features of the object, a two-step method is introduced, and a sliced point cloud algorithm is proposed for the preliminary classification of the LRF measurement data. The effectiveness of the proposed method is validated by experimental testing and analysis carried out on a mobile manipulator platform. The experimental results show that with this method the robot can not only identify a target object autonomously, but also determine whether the object can be manipulated and acquire a proper grasping location. © 2013 International Journal of Automation and Smart Technology.
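
The overlay step the abstract describes, fusing the pitching LRF's point cloud with the camera image through a homogeneous transformation, can be illustrated with a short sketch. The code below is a minimal illustration under assumed values, not the paper's actual model: the intrinsic matrix K, the transform names T_cam_base and T_base_lrf0, the choice of pitch axis, and the image size are all assumptions made for the example.

    import numpy as np

    # Assumed camera intrinsics; real values come from calibration.
    K = np.array([[525.0,   0.0, 319.5],
                  [  0.0, 525.0, 239.5],
                  [  0.0,   0.0,   1.0]])

    def lrf_to_camera(points_lrf, pitch_rad, T_cam_base, T_base_lrf0):
        """Map pitching-LRF points into the camera frame.

        points_lrf : (N, 3) points in the LRF frame at the current tilt.
        pitch_rad  : current angle of the LRF pitching unit.
        T_cam_base, T_base_lrf0 : 4x4 homogeneous transforms from
        calibration (assumed names; the paper derives its own model).
        """
        c, s = np.cos(pitch_rad), np.sin(pitch_rad)
        # Rotation of the pitching unit about the x-axis (assumed axis).
        T_pitch = np.array([[1, 0,  0, 0],
                            [0, c, -s, 0],
                            [0, s,  c, 0],
                            [0, 0,  0, 1]])
        T = T_cam_base @ T_base_lrf0 @ T_pitch   # full chain: LRF -> camera
        pts_h = np.c_[points_lrf, np.ones(len(points_lrf))]
        return (T @ pts_h.T).T[:, :3]

    def project_depth(points_cam, shape=(480, 640)):
        """Project camera-frame points through K, keeping per-pixel depth."""
        depth = np.full(shape, np.inf)
        for x, y, z in points_cam:
            if z <= 0:
                continue                          # point behind the camera
            u = int(round(K[0, 0] * x / z + K[0, 2]))
            v = int(round(K[1, 1] * y / z + K[1, 2]))
            if 0 <= v < shape[0] and 0 <= u < shape[1]:
                depth[v, u] = min(depth[v, u], z)  # keep the nearest return
        return depth

With a per-pixel depth map in hand, color features from the camera image and shape features from the LRF data can be associated pixel by pixel, which is the pairing the identification step in the abstract relies on.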
Persistent Identifier: http://hdl.handle.net/10722/213407
DOI: 10.5875/ausmt.v3i1.171
Scopus EID: 2-s2.0-84899151932
eISSN: 2223-9766

 

