postgraduate thesis: Spatio-temporal reconstruction of multiple dynamic objects in LiDAR streams
| Title | Spatio-temporal reconstruction of multiple dynamic objects in LiDAR streams |
|---|---|
| Authors | He, Rui (何锐) |
| Advisors | Zhang, F; Lam, J |
| Issue Date | 2025 |
| Publisher | The University of Hong Kong (Pokfulam, Hong Kong) |
| Citation | He, R. [何锐]. (2025). Spatio-temporal reconstruction of multiple dynamic objects in LiDAR streams. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. |
| Abstract | Mobile robots such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) are increasingly deployed in hazardous or complex environments, including disaster zones, construction sites, and autonomous driving. Equipped with advanced sensors, particularly LiDAR, these robots can achieve high-precision 3D environmental understanding. LiDAR sensors provide dense, long-range, and accurate point clouds, enabling robust mapping and perception. Despite these advancements, dynamic object reconstruction in real-world environments remains challenging. Accurately detecting, tracking, and reconstructing moving objects such as vehicles and pedestrians is complicated by occlusions, sparse measurements, and temporal variation. While learning-based methods have achieved strong results, they often struggle to generalize across different LiDAR types and require extensive training, making real-time deployment difficult. This thesis introduces 4DRecon, a real-time, model-based framework for dynamic object reconstruction from sequential LiDAR data. Our method integrates motion-aware dynamic point detection via M-detector and robust state estimation using an Error-State Iterated Kalman Filter (ESIKF), enabling joint tracking and 3D shape refinement of multiple moving objects over time. 4DRecon processes raw LiDAR streams without requiring training data, ensuring generalizability across various sensor types and environments. Evaluations on Waymo and KITTI datasets demonstrate that our method achieves up to 25× faster runtime than state-of-the-art model-based methods (e.g., LIDAR-SOT), while maintaining a reconstruction accuracy of 20 cm. By avoiding data-driven dependencies and enabling high-speed, accurate, and generalizable reconstruction, 4DRecon provides a robust solution for real-time perception in dynamic and cluttered environments. |
| Degree | Master of Philosophy |
| Subject | Optical radar; Kalman filtering; Pattern recognition systems |
| Dept/Program | Mechanical Engineering |
| Persistent Identifier | http://hdl.handle.net/10722/367451 |
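
The abstract couples M-detector-based dynamic-point detection with an Error-State Iterated Kalman Filter (ESIKF) that jointly tracks and refines each moving object. As a rough illustration of that filtering step only, below is a minimal NumPy sketch of one predict/iterated-update cycle for a single tracked object. The 2D constant-velocity state, the point-centroid measurement model, the noise values, and all function names are assumptions made for this sketch, not the implementation described in the thesis.

```python
# Hypothetical sketch of a predict / iterated-update cycle for one tracked
# object, loosely mirroring the ESIKF role named in the abstract. State,
# motion model, and measurement model are illustrative simplifications.
import numpy as np

STATE_DIM = 4  # [px, py, vx, vy]: 2D position and velocity

def predict(x, P, dt, q=0.5):
    """Constant-velocity prediction of the object state and covariance."""
    F = np.eye(STATE_DIM)
    F[0, 2] = F[1, 3] = dt
    Q = q * np.diag([dt**3 / 3.0, dt**3 / 3.0, dt, dt])  # crude process noise
    return F @ x, F @ P @ F.T + Q

def iterated_update(x, P, points, r=0.04, iters=3):
    """Iterated measurement update: the centroid of the LiDAR points
    assigned to the object observes the position part of the state.
    With this linear measurement the error-state and full-state forms
    coincide and the loop converges in one step; the relinearize-and-repeat
    structure is what mirrors an iterated Kalman filter update."""
    z = points.mean(axis=0)                        # observed centroid (2D)
    H = np.zeros((2, STATE_DIM))
    H[0, 0] = H[1, 1] = 1.0
    R = r * np.eye(2)
    x_op = x.copy()                                # linearization point
    for _ in range(iters):
        # IEKF residual about the operating point (reduces to z - H @ x here)
        res = z - H @ x_op - H @ (x - x_op)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_new = x + K @ res                        # inject correction into prior
        if np.linalg.norm(x_new - x_op) < 1e-6:
            x_op = x_new
            break
        x_op = x_new
    P = (np.eye(STATE_DIM) - K @ H) @ P
    return x_op, P

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.zeros(STATE_DIM)
    P = np.eye(STATE_DIM)
    for k in range(1, 6):                          # five 10 Hz LiDAR frames
        x, P = predict(x, P, dt=0.1)
        true_center = np.array([0.1 * k, 0.0])     # object moving at 1 m/s in x
        pts = true_center + 0.05 * rng.standard_normal((50, 2))
        x, P = iterated_update(x, P, pts)
    print("estimated [px, py, vx, vy]:", np.round(x, 3))
```

In a pipeline like the one the abstract outlines, the points passed to the update would be the dynamic points that M-detector assigns to each tracked object in the current LiDAR frame, and the refined pose would in turn anchor the accumulated shape of that object.
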
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | Zhang, F | - |
| dc.contributor.advisor | Lam, J | - |
| dc.contributor.author | He, Rui | - |
| dc.contributor.author | 何锐 | - |
| dc.date.accessioned | 2025-12-11T06:42:12Z | - |
| dc.date.available | 2025-12-11T06:42:12Z | - |
| dc.date.issued | 2025 | - |
| dc.identifier.citation | He, R. [何锐]. (2025). Spatio-temporal reconstruction of multiple dynamic objects in LiDAR streams. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. | - |
| dc.identifier.uri | http://hdl.handle.net/10722/367451 | - |
| dc.description.abstract | Mobile robots such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) are increasingly deployed in hazardous or complex environments, including disaster zones, construction sites, and autonomous driving. Equipped with advanced sensors, particularly LiDAR, these robots can achieve high-precision 3D environmental understanding. LiDAR sensors provide dense, long-range, and accurate point clouds, enabling robust mapping and perception. Despite these advancements, dynamic object reconstruction in real-world environments remains challenging. Accurately detecting, tracking, and reconstructing moving objects such as vehicles and pedestrians is complicated by occlusions, sparse measurements, and temporal variation. While learning-based methods have achieved strong results, they often struggle to generalize across different LiDAR types and require extensive training, making real-time deployment difficult. This thesis introduces 4DRecon, a real-time, model-based framework for dynamic object reconstruction from sequential LiDAR data. Our method integrates motion-aware dynamic point detection via M-detector and robust state estimation using an Error-State Iterated Kalman Filter (ESIKF), enabling joint tracking and 3D shape refinement of multiple moving objects over time. 4DRecon processes raw LiDAR streams without requiring training data, ensuring generalizability across various sensor types and environments. Evaluations on Waymo and KITTI datasets demonstrate that our method achieves up to 25× faster runtime than state-of-the-art model-based methods (e.g., LIDAR-SOT), while maintaining a reconstruction accuracy of 20 cm. By avoiding data-driven dependencies and enabling high-speed, accurate, and generalizable reconstruction, 4DRecon provides a robust solution for real-time perception in dynamic and cluttered environments. | - |
| dc.language | eng | - |
| dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | - |
| dc.relation.ispartof | HKU Theses Online (HKUTO) | - |
| dc.rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject.lcsh | Optical radar | - |
| dc.subject.lcsh | Kalman filtering | - |
| dc.subject.lcsh | Pattern recognition systems | - |
| dc.title | Spatio-temporal reconstruction of multiple dynamic objects in LiDAR streams | - |
| dc.type | PG_Thesis | - |
| dc.description.thesisname | Master of Philosophy | - |
| dc.description.thesislevel | Master | - |
| dc.description.thesisdiscipline | Mechanical Engineering | - |
| dc.description.nature | published_or_final_version | - |
| dc.date.hkucongregation | 2025 | - |
| dc.identifier.mmsid | 991045147147903414 | - |
