File Download
There are no files associated with this item.
Links for fulltext (May Require Subscription):
- Publisher Website (DOI): 10.1109/CVPR46437.2021.00191
- Scopus: eid_2-s2.0-85120578530
Citations:
- Scopus: 0
Appears in Collections:
- Conference Paper: Exploring Intermediate Representation for Monocular Vehicle Pose Estimation
Title | Exploring Intermediate Representation for Monocular Vehicle Pose Estimation |
---|---|
Authors | Li, Shichao; Yan, Zengqiang; Li, Hongyang; Cheng, Kwang Ting |
Issue Date | 2021 |
Citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2021, p. 1873-1883 |
Abstract | We present a new learning-based framework to recover vehicle pose in SO(3) from a single RGB image. In contrast to previous works that map local appearance to observation angles, we explore a progressive approach by extracting meaningful Intermediate Geometrical Representations (IGRs) to estimate egocentric vehicle orientation. This approach features a deep model that transforms perceived intensities to IGRs, which are mapped to a 3D representation encoding object orientation in the camera coordinate system. Core problems are what IGRs to use and how to learn them more effectively. We answer the former question by designing IGRs based on an interpolated cuboid that derives from primitive 3D annotation readily. The latter question motivates us to incorporate geometry knowledge with a new loss function based on a projective invariant. This loss function allows unlabeled data to be used in the training stage to improve representation learning. Without additional labels, our system outperforms previous monocular RGB-based methods for joint vehicle detection and pose estimation on the KITTI benchmark, achieving performance even comparable to stereo methods. Code and pre-trained models are available at this HTTPS URL. |
Persistent Identifier | http://hdl.handle.net/10722/351435 |
ISSN | 1063-6919 (2023 SCImago Journal Rankings: 10.331) |
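The abstract mentions a loss function built on a projective invariant that lets unlabeled images supervise the intermediate representation. Below is a minimal sketch of how such a term could look, assuming the invariant is the classical cross-ratio of four collinear points and that keypoints are interpolated at equal spacing along each cuboid edge; the function names, index quadruples, and 4/3 target ratio are illustrative assumptions, not the paper's released code.

```python
# Hedged sketch: cross-ratio (projective-invariant) consistency loss for
# interpolated cuboid keypoints. Names and the equal-spacing assumption are
# illustrative; see the authors' repository for the actual implementation.
import torch

def cross_ratio(p1, p2, p3, p4, eps=1e-8):
    """Cross-ratio of four (near-)collinear 2D points; inputs have shape (..., 2)."""
    d13 = torch.norm(p3 - p1, dim=-1)
    d24 = torch.norm(p4 - p2, dim=-1)
    d23 = torch.norm(p3 - p2, dim=-1)
    d14 = torch.norm(p4 - p1, dim=-1)
    return (d13 * d24) / (d23 * d14 + eps)

def cross_ratio_loss(pred_kpts, edge_quadruples, target=4.0 / 3.0):
    """Penalize predicted 2D keypoints whose cross-ratio deviates from the value
    implied by equally spaced 3D interpolation (positions 0,1,2,3 give 4/3).

    pred_kpts:       (N, K, 2) predicted image coordinates for N instances
    edge_quadruples: iterable of 4-tuples of keypoint indices lying on one edge
    """
    losses = []
    for a, b, c, d in edge_quadruples:
        cr = cross_ratio(pred_kpts[:, a], pred_kpts[:, b],
                         pred_kpts[:, c], pred_kpts[:, d])
        losses.append((cr - target).abs())
    return torch.stack(losses, dim=0).mean()
```

Because the cross-ratio of the corresponding 3D points is fixed by the interpolation scheme itself, the target value requires no pose annotation, which is what would allow unlabeled images to contribute to training as the abstract describes.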
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, Shichao | - |
dc.contributor.author | Yan, Zengqiang | - |
dc.contributor.author | Li, Hongyang | - |
dc.contributor.author | Cheng, Kwang Ting | - |
dc.date.accessioned | 2024-11-20T03:56:16Z | - |
dc.date.available | 2024-11-20T03:56:16Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2021, p. 1873-1883 | - |
dc.identifier.issn | 1063-6919 | - |
dc.identifier.uri | http://hdl.handle.net/10722/351435 | - |
dc.description.abstract | We present a new learning-based framework to recover vehicle pose in SO(3) from a single RGB image. In contrast to previous works that map local appearance to observation angles, we explore a progressive approach by extracting meaningful Intermediate Geometrical Representations (IGRs) to estimate egocentric vehicle orientation. This approach features a deep model that transforms perceived intensities to IGRs, which are mapped to a 3D representation encoding object orientation in the camera coordinate system. Core problems are what IGRs to use and how to learn them more effectively. We answer the former question by designing IGRs based on an interpolated cuboid that derives from primitive 3D annotation readily. The latter question motivates us to incorporate geometry knowledge with a new loss function based on a projective invariant. This loss function allows unlabeled data to be used in the training stage to improve representation learning. Without additional labels, our system outperforms previous monocular RGB-based methods for joint vehicle detection and pose estimation on the KITTI benchmark, achieving performance even comparable to stereo methods. Code and pre-trained models are available at this HTTPS URL. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | - |
dc.title | Exploring Intermediate Representation for Monocular Vehicle Pose Estimation | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CVPR46437.2021.00191 | - |
dc.identifier.scopus | eid_2-s2.0-85120578530 | - |
dc.identifier.spage | 1873 | - |
dc.identifier.epage | 1883 | - |