File Download
There are no files associated with this item.
Links for fulltext (may require subscription):
- Publisher Website: https://doi.org/10.5875/ausmt.v3i1.171
- Scopus: eid_2-s2.0-84899151932
Citations:
- Scopus: 0
Article: Target object identification and location based on multi-sensor fusion
Title | Target object identification and location based on multi-sensor fusion |
---|---|
Authors | Jiang, Yong; Wang, Hong Guang; Xi, Ning |
Keywords | Camera and laser range finder; Object identification and location; Multi-sensor fusion; Mobile manipulations |
Issue Date | 2013 |
Citation | International Journal of Automation and Smart Technology, 2013, v. 3, n. 1, p. 57-65 |
Abstract | In an unknown environment, enabling a mobile robot to identify a target object and locate it autonomously is a challenging endeavor. In this paper, a novel multi-sensor fusion method based on a camera and a laser range finder (LRF) for mobile manipulation is proposed. Although a camera can acquire large quantities of information, it does not directly provide 3D data about the environment; moreover, camera image processing is complex and easily affected by changes in ambient light. Given the LRF's ability to directly measure 3D coordinates of the environment and its robustness to outside influences, and the camera's strength in acquiring rich color information, the two sensors are combined to exploit their complementary advantages, yielding more accurate measurements and simpler information processing. A homogeneous transformation model of the system is built to overlay the camera image with the measurement point cloud of the pitching LRF and to reconstruct a 3D image that includes per-pixel depth information. Then, by combining color features from the camera image with shape features from the LRF measurement data, the target object can be identified and located autonomously. To extract the object's shape features, a two-step method is introduced, and a sliced point cloud algorithm is proposed for the preliminary classification of the LRF measurement data. The effectiveness of the proposed method is validated by experimental testing and analysis carried out on a mobile manipulator platform. The experimental results show that with this method the robot can not only identify a target object autonomously, but also determine whether it can be manipulated and acquire a proper grasping location. © 2013 International Journal of Automation and Smart Technology. (Illustrative sketches of the overlay and slicing steps described here follow the record tables below.) |
Persistent Identifier | http://hdl.handle.net/10722/213407 |
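The overlay step described in the abstract amounts to projecting the pitching LRF's 3D points into the camera frame through a homogeneous transformation and the camera's pinhole model. The paper's own transformation model is not reproduced in this record, so the following is only a minimal sketch assuming a pre-calibrated extrinsic transform `T_cam_lrf` and intrinsic matrix `K` (both hypothetical placeholders, not the authors' calibration):

```python
import numpy as np

def project_lrf_points(points_lrf, T_cam_lrf, K, image_shape):
    """Overlay LRF measurement points onto the camera image plane.

    points_lrf : (N, 3) array of 3D points in the LRF frame.
    T_cam_lrf  : (4, 4) homogeneous transform from the LRF frame to the
                 camera frame (assumed known from an offline calibration).
    K          : (3, 3) pinhole intrinsic matrix of the camera.
    image_shape: (height, width) of the camera image.

    Returns a (height, width) depth map, NaN where no point projects.
    """
    h, w = image_shape
    depth = np.full((h, w), np.nan)

    # Move the points into the camera frame via homogeneous coordinates.
    pts_h = np.hstack([points_lrf, np.ones((len(points_lrf), 1))])
    pts_cam = (T_cam_lrf @ pts_h.T).T[:, :3]

    # Discard points behind the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Pinhole projection: pixel = K @ (X/Z, Y/Z, 1).
    uv = (K @ (pts_cam / pts_cam[:, 2:3]).T).T
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    # Keep projections that land inside the image and record their depth.
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[inside], u[inside]] = pts_cam[inside, 2]
    return depth
```

When several points land on the same pixel, this sketch keeps whichever is written last; a fuller implementation would keep the nearest point, and for a pitching LRF would first bring each scan line into a common frame using the pitch angle at which it was captured.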
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jiang, Yong | - |
dc.contributor.author | Wang, Hong Guang | - |
dc.contributor.author | Xi, Ning | - |
dc.date.accessioned | 2015-07-28T04:07:11Z | - |
dc.date.available | 2015-07-28T04:07:11Z | - |
dc.date.issued | 2013 | - |
dc.identifier.citation | International Journal of Automation and Smart Technology, 2013, v. 3, n. 1, p. 57-65 | - |
dc.identifier.uri | http://hdl.handle.net/10722/213407 | - |
dc.description.abstract | In an unknown environment, enabling a mobile robot to identify a target object and locate it autonomously is a challenging endeavor. In this paper, a novel multi-sensor fusion method based on a camera and a laser range finder (LRF) for mobile manipulation is proposed. Although a camera can acquire large quantities of information, it does not directly provide 3D data about the environment; moreover, camera image processing is complex and easily affected by changes in ambient light. Given the LRF's ability to directly measure 3D coordinates of the environment and its robustness to outside influences, and the camera's strength in acquiring rich color information, the two sensors are combined to exploit their complementary advantages, yielding more accurate measurements and simpler information processing. A homogeneous transformation model of the system is built to overlay the camera image with the measurement point cloud of the pitching LRF and to reconstruct a 3D image that includes per-pixel depth information. Then, by combining color features from the camera image with shape features from the LRF measurement data, the target object can be identified and located autonomously. To extract the object's shape features, a two-step method is introduced, and a sliced point cloud algorithm is proposed for the preliminary classification of the LRF measurement data. The effectiveness of the proposed method is validated by experimental testing and analysis carried out on a mobile manipulator platform. The experimental results show that with this method the robot can not only identify a target object autonomously, but also determine whether it can be manipulated and acquire a proper grasping location. © 2013 International Journal of Automation and Smart Technology. | -
dc.language | eng | - |
dc.relation.ispartof | International Journal of Automation and Smart Technology | - |
dc.subject | Camera and laser range finder | - |
dc.subject | Object identification and location | - |
dc.subject | Multi-sensor fusion | - |
dc.subject | Mobile manipulations | - |
dc.title | Target object identification and location based on multi-sensor fusion | - |
dc.type | Article | - |
dc.description.nature | link_to_OA_fulltext | - |
dc.identifier.doi | 10.5875/ausmt.v3i1.171 | - |
dc.identifier.scopus | eid_2-s2.0-84899151932 | - |
dc.identifier.volume | 3 | - |
dc.identifier.issue | 1 | - |
dc.identifier.spage | 57 | - |
dc.identifier.epage | 65 | - |
dc.identifier.eissn | 2223-9766 | - |
dc.identifier.issnl | 2223-9766 | - |
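The record does not reproduce the paper's sliced point cloud algorithm, so the sketch below only illustrates the general idea named in the abstract: partition the LRF points into horizontal slices and derive a simple per-slice shape feature for preliminary classification. The slice thickness and the width feature are arbitrary choices for this illustration, not values taken from the paper:

```python
import numpy as np

def slice_point_cloud(points, slice_height=0.02):
    """Partition a point cloud into horizontal slices along z.

    points       : (N, 3) array of (x, y, z) coordinates in metres.
    slice_height : slice thickness; 2 cm is an arbitrary value for this sketch.

    Returns {slice index: points falling inside that slice}.
    """
    z = points[:, 2]
    idx = np.floor((z - z.min()) / slice_height).astype(int)
    return {i: points[idx == i] for i in np.unique(idx)}

def slice_widths(slices):
    """A crude per-slice shape feature: the larger of the x and y extents."""
    return {i: float(max(np.ptp(pts[:, 0]), np.ptp(pts[:, 1])))
            for i, pts in slices.items() if len(pts) > 1}
```

A stack of slices with near-constant widths would suggest a box-like object, while tapering widths would suggest a bottle or cone; the paper's actual two-step extraction and classification criteria are not available from this record.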