Appears in Collections: postgraduate thesis: Geometric methods for event-based vision : from system calibration to motion estimation
| Title | Geometric methods for event-based vision : from system calibration to motion estimation |
|---|---|
| Authors | Xing, Wanli (邢万里) |
| Issue Date | 2025 |
| Publisher | The University of Hong Kong (Pokfulam, Hong Kong) |
| Citation | Xing, W. [邢万里]. (2025). Geometric methods for event-based vision : from system calibration to motion estimation. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. |
| Abstract | Event cameras, an emerging paradigm in visual sensing, show significant potential in robotic perception due to advantages like high temporal resolution, high dynamic range, and low data redundancy. However, their unique asynchronous, sparse data stream challenges traditional frame-based vision algorithms, especially for tasks requiring precise geometric information, such as sensor calibration and motion estimation. Existing methods often fail to fully exploit the inherent geometric properties within event data, particularly the correspondence between events and moving edges in the scene, leading to limitations in robustness and accuracy in challenging scenarios like wide-baseline configurations, target-free calibration, or high-dynamic motion. This thesis aims to bridge this gap by developing and validating novel geometric methods that explicitly harness the geometric constraints within event streams to address key problems in event-based vision systems. The core idea is to directly model and optimize based on the moving edge contours traced by events. The main contributions span three key areas. EventSync: a software-based synchronization method founded on explicit epipolar geometry constraints. It enables hardware-free synchronization and relative pose calibration for wide-baseline event cameras by observing a common moving object, overcoming the limitations of methods that rely on event pattern similarity. ELCalib: a target-free pipeline for automatic extrinsic calibration between event cameras and LiDAR. It establishes direct geometric correspondences between 3D geometric/reflectivity edges from LiDAR point clouds and the dynamic 2D edge patterns captured by the event camera during motion, solving the cross-modality calibration challenge without specialized targets. EROAM: a real-time rotational odometry and mapping system using a continuous spherical event representation and a novel Event Spherical Iterative Closest Point (ES-ICP) registration algorithm. By optimizing directly in geometric space, it avoids the constant-velocity assumption and pixel-quantization issues inherent in contrast-maximization methods, significantly improving robustness and real-time performance under high-dynamic rotations. Collectively, these contributions demonstrate the efficacy of a geometry-centric approach for solving fundamental calibration and motion estimation problems in event-based vision. By deeply leveraging the intrinsic geometric structures within event data, this thesis provides new insights and techniques for building more capable and reliable event-based perception systems. |
| Degree | Doctor of Philosophy |
| Subject | Computer vision |
| Dept/Program | Computer Science |
| Persistent Identifier | http://hdl.handle.net/10722/364004 |
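The EventSync contribution in the abstract rests on the classical epipolar constraint x₂ᵀ F x₁ = 0 between two views of the same moving object. The following toy sketch is not the thesis's implementation: it assumes the fundamental matrix `F` is already known and that the object's trajectory is sampled at integer frame indices, whereas EventSync jointly recovers the relative pose and operates on event streams. The function names `epipolar_residuals` and `sync_offset` are invented for illustration.

```python
import numpy as np

def epipolar_residuals(F, x1, x2):
    """Algebraic epipolar residual x2^T F x1 for corresponding 2D points."""
    x1h = np.hstack([x1, np.ones((len(x1), 1))])  # homogeneous coordinates
    x2h = np.hstack([x2, np.ones((len(x2), 1))])
    # sum_j sum_k x2h[i,j] * F[j,k] * x1h[i,k]  ==  x2_i^T F x1_i
    return np.einsum('ij,jk,ik->i', x2h, F, x1h)

def sync_offset(F, pts1, pts2, max_shift):
    """Pick the integer shift s (pairing pts1[i] with pts2[i+s]) that
    minimizes the mean absolute epipolar residual of the paired points."""
    best_s, best_r = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = pts1[:len(pts1) - s], pts2[s:]
        else:
            a, b = pts1[-s:], pts2[:len(pts2) + s]
        n = min(len(a), len(b))
        r = np.mean(np.abs(epipolar_residuals(F, a[:n], b[:n])))
        if r < best_r:
            best_s, best_r = s, r
    return best_s
```

For a rectified pair, F = [[0,0,0],[0,0,-1],[0,1,0]] reduces the residual to y₁ − y₂, so delaying one camera's trajectory by a few samples makes `sync_offset` recover exactly that delay; the point is only that an epipolar residual can score candidate time offsets without any hardware trigger.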
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Xing, Wanli | - |
| dc.contributor.author | 邢万里 | - |
| dc.date.accessioned | 2025-10-20T02:56:28Z | - |
| dc.date.available | 2025-10-20T02:56:28Z | - |
| dc.date.issued | 2025 | - |
| dc.identifier.citation | Xing, W. [邢万里]. (2025). Geometric methods for event-based vision : from system calibration to motion estimation. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. | - |
| dc.identifier.uri | http://hdl.handle.net/10722/364004 | - |
| dc.description.abstract | Event cameras, an emerging paradigm in visual sensing, show significant potential in robotic perception due to advantages like high temporal resolution, high dynamic range, and low data redundancy. However, their unique asynchronous, sparse data stream challenges traditional frame-based vision algorithms, especially for tasks requiring precise geometric information, such as sensor calibration and motion estimation. Existing methods often fail to fully exploit the inherent geometric properties within event data—particularly the correspondence between events and moving edges in the scene—leading to limitations in robustness and accuracy, especially in challenging scenarios like wide-baseline configurations, target-free calibration, or high-dynamic motion. This thesis aims to bridge this gap by developing and validating novel geometric methods that explicitly harness the geometric constraints within event streams to address key problems in event-based vision systems. The core idea is to directly model and optimize based on the moving edge contours traced by events. The main contributions include three key areas: EventSync: A software-based synchronization method founded on explicit epipolar geometry constraints. It enables hardware-free synchronization and relative pose calibration for wide-baseline event cameras by observing a common moving object, overcoming limitations of methods relying on event pattern similarity. ELCalib: A target-free pipeline for automatic extrinsic calibration between event cameras and LiDAR. It establishes direct geometric correspondences between 3D geometric/reflectivity edges from LiDAR point clouds and the dynamic 2D edge patterns captured by the event camera during motion, solving the cross-modality calibration challenge without specialized targets. EROAM: A real-time rotational odometry and mapping system using a continuous spherical event representation and a novel Event Spherical Iterative Closest Point (ES-ICP) registration algorithm. By optimizing directly in geometric space, it avoids the constant velocity assumption and pixel quantization issues inherent in contrast maximization methods, significantly improving robustness and real-time performance under high-dynamic rotations. Collectively, these contributions demonstrate the efficacy of a geometry-centric approach for solving fundamental calibration and motion estimation problems in event-based vision. By deeply leveraging the intrinsic geometric structures within event data, this thesis provides new insights and techniques for building more capable and reliable event-based perception systems. | en |
| dc.language | eng | - |
| dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | - |
| dc.relation.ispartof | HKU Theses Online (HKUTO) | - |
| dc.rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject.lcsh | Computer vision | - |
| dc.title | Geometric methods for event-based vision : from system calibration to motion estimation | - |
| dc.type | PG_Thesis | - |
| dc.description.thesisname | Doctor of Philosophy | - |
| dc.description.thesislevel | Doctoral | - |
| dc.description.thesisdiscipline | Computer Science | - |
| dc.description.nature | published_or_final_version | - |
| dc.date.hkucongregation | 2025 | - |
| dc.identifier.mmsid | 991045117250303414 | - |
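The EROAM entry above hinges on two geometric steps: back-projecting event pixels onto a unit sphere and registering successive spherical point sets by a pure rotation. Here is a minimal sketch of those two steps only. It is illustrative, not the thesis's ES-ICP: it assumes known point correspondences and a one-shot Kabsch/SVD solve, whereas ES-ICP iterates matching with a point-to-line metric on a continuous spherical representation. The intrinsics `K` and both function names are invented for the example.

```python
import numpy as np

def pixels_to_sphere(px, K):
    """Back-project pixel coordinates to unit bearing vectors, a discrete
    stand-in for the thesis's continuous spherical event representation."""
    rays = np.hstack([px, np.ones((len(px), 1))]) @ np.linalg.inv(K).T
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)

def rotation_from_correspondences(P, Q):
    """Least-squares rotation R with q_i ~ R p_i for paired rows of P, Q
    (Kabsch algorithm via SVD, with a determinant fix against reflections)."""
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

Rotating a cloud of bearing vectors by a known R and handing both clouds to `rotation_from_correspondences` recovers R to machine precision; the registration is solved directly in geometric space, with no constant-velocity assumption and no accumulation onto a quantized pixel grid, which is the property the abstract attributes to ES-ICP.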
