Links for fulltext (May Require Subscription)
- Publisher Website: 10.1007/978-3-031-19839-7_38
- Scopus: eid_2-s2.0-85142709374
- WOS: WOS:000903760400038
- Appears in Collections:
Conference Paper: Multimodal Transformer for Automatic 3D Annotation and Object Detection
Field | Value
---|---
Title | Multimodal Transformer for Automatic 3D Annotation and Object Detection |
Authors | Liu, Chang; Qian, Xiaoyan; Huang, Binxiao; Qi, Xiaojuan; Lam, Edmund; Tan, Siew-Chong; Wong, Ngai |
Keywords | 3D Autolabeler; 3D Object detection; Multimodal vision; Self-attention; Self-supervision; Transformer |
Issue Date | 23-Oct-2022 |
Publisher | Springer |
Abstract | Despite a growing number of datasets being collected for training 3D object detection models, significant human effort is still required to annotate 3D boxes on LiDAR scans. To automate the annotation and facilitate the production of various customized datasets, we propose an end-to-end multimodal transformer (MTrans) autolabeler, which leverages both LiDAR scans and images to generate precise 3D box annotations from weak 2D bounding boxes. To alleviate the pervasive sparsity problem that hinders existing autolabelers, MTrans densifies the sparse point clouds by generating new 3D points based on 2D image information. With a multi-task design, MTrans segments the foreground/background, densifies LiDAR point clouds, and regresses 3D boxes simultaneously. Experimental results verify the effectiveness of the MTrans for improving the quality of the generated labels. By enriching the sparse point clouds, our method achieves 4.48% and 4.03% better 3D AP on KITTI moderate and hard samples, respectively, versus the state-of-the-art autolabeler. MTrans can also be extended to improve the accuracy for 3D object detection, resulting in a remarkable 89.45% AP on KITTI hard samples. Codes are at https://github.com/Cliu2/MTrans. |
Persistent Identifier | http://hdl.handle.net/10722/339168 |
ISBN | 9783031198380 |
ISSN | 0302-9743 |
2023 SCImago Journal Rankings | 0.606 |
ISI Accession Number ID | WOS:000903760400038 |
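The densification step described in the abstract generates new 3D points from 2D image information to enrich sparse LiDAR point clouds. A minimal sketch of the underlying back-projection, assuming a standard pinhole camera with known intrinsics `K` and per-pixel depth estimates (function and variable names here are illustrative, not taken from the MTrans codebase):

```python
import numpy as np

def lift_pixels_to_3d(pixels, depths, K):
    """Back-project 2D pixels with (predicted) depths into 3D camera
    coordinates via the pinhole model: X = depth * K^{-1} [u, v, 1]^T.

    pixels: (N, 2) array of (u, v) coordinates
    depths: (N,) array of depth values along the optical axis
    K:      (3, 3) camera intrinsics matrix
    """
    ones = np.ones((pixels.shape[0], 1))
    homog = np.hstack([pixels, ones])          # (N, 3) homogeneous pixels
    rays = (np.linalg.inv(K) @ homog.T).T      # rays at unit depth
    return rays * depths[:, None]              # scale each ray by its depth

# Example: focal length 500 px, principal point (320, 240)
K = np.array([[500.,   0., 320.],
              [  0., 500., 240.],
              [  0.,   0.,   1.]])
pts = lift_pixels_to_3d(np.array([[320., 240.]]), np.array([10.0]), K)
# the pixel at the principal point maps onto the optical axis: (0, 0, depth)
```

In an autolabeling pipeline like the one described, the depths would come from the model's predictions rather than a sensor, and the lifted points would be merged with the original LiDAR scan before box regression.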
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Chang | - |
dc.contributor.author | Qian, Xiaoyan | - |
dc.contributor.author | Huang, Binxiao | - |
dc.contributor.author | Qi, Xiaojuan | - |
dc.contributor.author | Lam, Edmund | - |
dc.contributor.author | Tan, Siew-Chong | - |
dc.contributor.author | Wong, Ngai | - |
dc.date.accessioned | 2024-03-11T10:34:24Z | - |
dc.date.available | 2024-03-11T10:34:24Z | - |
dc.date.issued | 2022-10-23 | - |
dc.identifier.isbn | 9783031198380 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/10722/339168 | - |
dc.description.abstract | Despite a growing number of datasets being collected for training 3D object detection models, significant human effort is still required to annotate 3D boxes on LiDAR scans. To automate the annotation and facilitate the production of various customized datasets, we propose an end-to-end multimodal transformer (MTrans) autolabeler, which leverages both LiDAR scans and images to generate precise 3D box annotations from weak 2D bounding boxes. To alleviate the pervasive sparsity problem that hinders existing autolabelers, MTrans densifies the sparse point clouds by generating new 3D points based on 2D image information. With a multi-task design, MTrans segments the foreground/background, densifies LiDAR point clouds, and regresses 3D boxes simultaneously. Experimental results verify the effectiveness of the MTrans for improving the quality of the generated labels. By enriching the sparse point clouds, our method achieves 4.48% and 4.03% better 3D AP on KITTI moderate and hard samples, respectively, versus the state-of-the-art autolabeler. MTrans can also be extended to improve the accuracy for 3D object detection, resulting in a remarkable 89.45% AP on KITTI hard samples. Codes are at https://github.com/Cliu2/MTrans. | -
dc.language | eng | - |
dc.publisher | Springer | - |
dc.relation.ispartof | Lecture Notes in Computer Science | - |
dc.subject | 3d Autolabeler | - |
dc.subject | 3d Object detection | - |
dc.subject | Multimodal vision | - |
dc.subject | Self-attention | - |
dc.subject | Self-supervision | - |
dc.subject | Transformer | - |
dc.title | Multimodal Transformer for Automatic 3D Annotation and Object Detection | - |
dc.type | Conference_Paper | - |
dc.description.nature | published_or_final_version | - |
dc.identifier.doi | 10.1007/978-3-031-19839-7_38 | - |
dc.identifier.scopus | eid_2-s2.0-85142709374 | - |
dc.identifier.volume | 13698 LNCS | - |
dc.identifier.issue | 13698 | - |
dc.identifier.spage | 657 | - |
dc.identifier.epage | 673 | - |
dc.identifier.eissn | 1611-3349 | - |
dc.identifier.isi | WOS:000903760400038 | - |
dc.identifier.eisbn | 9783031198397 | - |
dc.identifier.issnl | 0302-9743 | - |