
postgraduate thesis: Geometric methods for event-based vision : from system calibration to motion estimation

Title: Geometric methods for event-based vision : from system calibration to motion estimation
Authors: Xing, Wanli [邢万里]
Issue Date: 2025
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Xing, W. [邢万里]. (2025). Geometric methods for event-based vision : from system calibration to motion estimation. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: Event cameras, an emerging paradigm in visual sensing, show significant potential in robotic perception due to advantages such as high temporal resolution, high dynamic range, and low data redundancy. However, their asynchronous, sparse data stream challenges traditional frame-based vision algorithms, especially for tasks requiring precise geometric information, such as sensor calibration and motion estimation. Existing methods often fail to fully exploit the inherent geometric properties of event data, particularly the correspondence between events and moving edges in the scene, leading to limitations in robustness and accuracy in challenging scenarios such as wide-baseline configurations, target-free calibration, and high-dynamic motion.

This thesis bridges this gap by developing and validating novel geometric methods that explicitly harness the geometric constraints within event streams to address key problems in event-based vision systems. The core idea is to model and optimize directly on the moving edge contours traced by events. The main contributions span three areas:

1. EventSync: a software-based synchronization method founded on explicit epipolar geometry constraints. It enables hardware-free synchronization and relative pose calibration for wide-baseline event cameras observing a common moving object, overcoming the limitations of methods that rely on event pattern similarity.

2. ELCalib: a target-free pipeline for automatic extrinsic calibration between event cameras and LiDAR. It establishes direct geometric correspondences between 3D geometric/reflectivity edges in LiDAR point clouds and the dynamic 2D edge patterns captured by the event camera during motion, solving the cross-modality calibration challenge without specialized targets.

3. EROAM: a real-time rotational odometry and mapping system built on a continuous spherical event representation and a novel Event Spherical Iterative Closest Point (ES-ICP) registration algorithm. By optimizing directly in geometric space, it avoids the constant-velocity assumption and pixel-quantization issues inherent in contrast-maximization methods, significantly improving robustness and real-time performance under high-dynamic rotations.

Collectively, these contributions demonstrate the efficacy of a geometry-centric approach to fundamental calibration and motion estimation problems in event-based vision. By deeply leveraging the intrinsic geometric structures within event data, this thesis provides new insights and techniques for building more capable and reliable event-based perception systems.
Degree: Doctor of Philosophy
Subject: Computer vision
Dept/Program: Computer Science
Persistent Identifier: http://hdl.handle.net/10722/364004


DC Field | Value | Language
dc.contributor.author | Xing, Wanli | -
dc.contributor.author | 邢万里 | -
dc.date.accessioned | 2025-10-20T02:56:28Z | -
dc.date.available | 2025-10-20T02:56:28Z | -
dc.date.issued | 2025 | -
dc.identifier.citation | Xing, W. [邢万里]. (2025). Geometric methods for event-based vision : from system calibration to motion estimation. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. | -
dc.identifier.uri | http://hdl.handle.net/10722/364004 | -
dc.description.abstract | (full abstract as above) | en
dc.language | eng | -
dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | -
dc.relation.ispartof | HKU Theses Online (HKUTO) | -
dc.rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject.lcsh | Computer vision | -
dc.title | Geometric methods for event-based vision : from system calibration to motion estimation | -
dc.type | PG_Thesis | -
dc.description.thesisname | Doctor of Philosophy | -
dc.description.thesislevel | Doctoral | -
dc.description.thesisdiscipline | Computer Science | -
dc.description.nature | published_or_final_version | -
dc.date.hkucongregation | 2025 | -
dc.identifier.mmsid | 991045117250303414 | -
