
Conference Paper: Projecting your view attentively: Monocular road scene layout estimation via cross-view transformation

Title: Projecting your view attentively: Monocular road scene layout estimation via cross-view transformation
Authors: Yang, WX; Li, Q; Liu, W; Yu, YL; Ma, Y; He, SF; Pan, J
Issue Date: 2021
Citation: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Conference, 19-25 June 2021, p. 15536-15545
Abstract: HD map reconstruction is crucial for autonomous driving. LiDAR-based methods are limited by expensive sensors and time-consuming computation. Camera-based methods usually need to perform road segmentation and view transformation separately, which often causes distortion and missing content. To push the limits of the technology, we present a novel framework that reconstructs a local map, formed by the road layout and vehicle occupancy in the bird's-eye view, given only a front-view monocular image. In particular, we propose a cross-view transformation module, which takes the constraint of cycle consistency between views into account and makes full use of their correlation to strengthen view transformation and scene understanding. Considering the relationship between vehicles and roads, we also design a context-aware discriminator to further refine the results. Experiments on public benchmarks show that our method achieves state-of-the-art performance on both road layout estimation and vehicle occupancy estimation; on the latter task, our model outperforms all competitors by a large margin. Furthermore, our model runs at 35 FPS on a single GPU, making it efficient and applicable to real-time panoramic HD map reconstruction.
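The cycle-consistency constraint mentioned in the abstract can be illustrated with a minimal sketch: features are projected from the front view to the bird's-eye view and back, and the round trip is penalized so the learned transformation stays invertible. The module name (CrossViewTransform), the MLP projections, and the feature shapes below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossViewTransform(nn.Module):
    # Hypothetical sketch of a cross-view transformation with a cycle-consistency term.
    def __init__(self, channels=128):
        super().__init__()
        # Simple MLPs over flattened spatial features stand in for the learned view projections.
        self.front_to_bev = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(), nn.Linear(channels, channels))
        self.bev_to_front = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(), nn.Linear(channels, channels))

    def forward(self, front_feat):
        # front_feat: (B, N, C) flattened front-view features
        bev_feat = self.front_to_bev(front_feat)       # project to the bird's-eye view
        front_rec = self.bev_to_front(bev_feat)        # project back for the cycle constraint
        cycle_loss = F.l1_loss(front_rec, front_feat)  # penalize the round-trip error
        return bev_feat, cycle_loss

# Example (assumed shapes): bev, loss = CrossViewTransform()(torch.randn(2, 64 * 64, 128))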
Description: Paper Session Eleven - paper ID: 4588
Persistent Identifier: http://hdl.handle.net/10722/300711

 

DC Field: Value
dc.contributor.author: Yang, WX
dc.contributor.author: Li, Q
dc.contributor.author: Liu, W
dc.contributor.author: Yu, YL
dc.contributor.author: Ma, Y
dc.contributor.author: He, SF
dc.contributor.author: Pan, J
dc.date.accessioned: 2021-06-18T14:55:57Z
dc.date.available: 2021-06-18T14:55:57Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Conference, 19-25 June 2021, p. 15536-15545
dc.identifier.uri: http://hdl.handle.net/10722/300711
dc.description: Paper Session Eleven - paper ID: 4588
dc.description.abstract: HD map reconstruction is crucial for autonomous driving. LiDAR-based methods are limited by expensive sensors and time-consuming computation. Camera-based methods usually need to perform road segmentation and view transformation separately, which often causes distortion and missing content. To push the limits of the technology, we present a novel framework that reconstructs a local map, formed by the road layout and vehicle occupancy in the bird's-eye view, given only a front-view monocular image. In particular, we propose a cross-view transformation module, which takes the constraint of cycle consistency between views into account and makes full use of their correlation to strengthen view transformation and scene understanding. Considering the relationship between vehicles and roads, we also design a context-aware discriminator to further refine the results. Experiments on public benchmarks show that our method achieves state-of-the-art performance on both road layout estimation and vehicle occupancy estimation; on the latter task, our model outperforms all competitors by a large margin. Furthermore, our model runs at 35 FPS on a single GPU, making it efficient and applicable to real-time panoramic HD map reconstruction.
dc.language: eng
dc.relation.ispartof: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
dc.title: Projecting your view attentively: Monocular road scene layout estimation via cross-view transformation
dc.type: Conference_Paper
dc.identifier.email: Pan, J: jpan@cs.hku.hk
dc.identifier.authority: Pan, J=rp01984
dc.identifier.hkuros: 323048
dc.identifier.spage: 15536
dc.identifier.epage: 15545
