Conference Paper: Accurate depth estimation from a hybrid event-RGB stereo setup

Title: Accurate depth estimation from a hybrid event-RGB stereo setup
Authors: Zuo, Yi Fan; Cui, Li; Peng, Xin; Xu, Yanyu; Gao, Shenghua; Wang, Xia; Kneip, Laurent
Issue Date: 2021
Citation: IEEE International Conference on Intelligent Robots and Systems, 2021, p. 6833-6840
Abstract: Event-based visual perception is becoming increasingly popular owing to interesting sensor characteristics that enable the handling of difficult conditions such as highly dynamic motion or challenging illumination. The largely complementary nature of event cameras, however, means that the best results are still achieved when the sensor is paired with a regular frame-based sensor. The present work aims at answering a simple question: Assuming that both cameras do not share a common optical center, is it possible to exploit the hybrid stereo setup's baseline to perform accurate stereo depth estimation? We present a learning-based solution to this problem, leveraging modern spatio-temporal input representations as well as a novel hybrid pyramid attention module. Results on real data demonstrate competitive performance against pure frame-based stereo alternatives, as well as the ability to maintain the advantageous properties of event-based sensors.
Persistent Identifier: http://hdl.handle.net/10722/345167
ISSN: 2153-0858
2023 SCImago Journal Rankings: 1.094

 

DC Field / Value
dc.contributor.author: Zuo, Yi Fan
dc.contributor.author: Cui, Li
dc.contributor.author: Peng, Xin
dc.contributor.author: Xu, Yanyu
dc.contributor.author: Gao, Shenghua
dc.contributor.author: Wang, Xia
dc.contributor.author: Kneip, Laurent
dc.date.accessioned: 2024-08-15T09:25:39Z
dc.date.available: 2024-08-15T09:25:39Z
dc.date.issued: 2021
dc.identifier.citation: IEEE International Conference on Intelligent Robots and Systems, 2021, p. 6833-6840
dc.identifier.issn: 2153-0858
dc.identifier.uri: http://hdl.handle.net/10722/345167
dc.description.abstract: Event-based visual perception is becoming increasingly popular owing to interesting sensor characteristics that enable the handling of difficult conditions such as highly dynamic motion or challenging illumination. The largely complementary nature of event cameras, however, means that the best results are still achieved when the sensor is paired with a regular frame-based sensor. The present work aims at answering a simple question: Assuming that both cameras do not share a common optical center, is it possible to exploit the hybrid stereo setup's baseline to perform accurate stereo depth estimation? We present a learning-based solution to this problem, leveraging modern spatio-temporal input representations as well as a novel hybrid pyramid attention module. Results on real data demonstrate competitive performance against pure frame-based stereo alternatives, as well as the ability to maintain the advantageous properties of event-based sensors.
dc.language: eng
dc.relation.ispartof: IEEE International Conference on Intelligent Robots and Systems
dc.title: Accurate depth estimation from a hybrid event-RGB stereo setup
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/IROS51168.2021.9635834
dc.identifier.scopus: eid_2-s2.0-85124371985
dc.identifier.spage: 6833
dc.identifier.epage: 6840
dc.identifier.eissn: 2153-0866
