
Postgraduate thesis: Spatio-temporal reconstruction of multiple dynamic objects in LiDAR streams

Title: Spatio-temporal reconstruction of multiple dynamic objects in LiDAR streams
Authors: He, Rui (何锐)
Advisors: Zhang, F; Lam, J
Issue Date: 2025
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: He, R. [何锐]. (2025). Spatio-temporal reconstruction of multiple dynamic objects in LiDAR streams. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: Mobile robots such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) are increasingly deployed in hazardous or complex environments, including disaster zones, construction sites, and autonomous driving. Equipped with advanced sensors, particularly LiDAR, these robots can achieve high-precision 3D environmental understanding. LiDAR sensors provide dense, long-range, and accurate point clouds, enabling robust mapping and perception. Despite these advancements, dynamic object reconstruction in real-world environments remains challenging. Accurately detecting, tracking, and reconstructing moving objects such as vehicles and pedestrians is complicated by occlusions, sparse measurements, and temporal variation. While learning-based methods have achieved strong results, they often struggle to generalize across different LiDAR types and require extensive training, making real-time deployment difficult. This thesis introduces 4DRecon, a real-time, model-based framework for dynamic object reconstruction from sequential LiDAR data. Our method integrates motion-aware dynamic point detection via M-detector and robust state estimation using an Error-State Iterated Kalman Filter (ESIKF), enabling joint tracking and 3D shape refinement of multiple moving objects over time. 4DRecon processes raw LiDAR streams without requiring training data, ensuring generalizability across various sensor types and environments. Evaluations on the Waymo and KITTI datasets demonstrate that our method achieves up to 25× faster runtime than state-of-the-art model-based methods (e.g., LIDAR-SOT), while maintaining a reconstruction accuracy of 20 cm. By avoiding data-driven dependencies and enabling high-speed, accurate, and generalizable reconstruction, 4DRecon provides a robust solution for real-time perception in dynamic and cluttered environments.
Degree: Master of Philosophy
Subject: Optical radar; Kalman filtering; Pattern recognition systems
Dept/Program: Mechanical Engineering
Persistent Identifier: http://hdl.handle.net/10722/367451
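The abstract above describes a pipeline that detects dynamic points (via M-detector), tracks each moving object with an Error-State Iterated Kalman Filter, and refines its 3D shape over time. The Python sketch below is not the thesis implementation; it is a minimal, hypothetical stand-in that only illustrates the predict/update structure of per-object tracking, using a plain constant-velocity Kalman filter on object centroids. The class and function names (CentroidTracker, track_object) and all noise parameters are assumptions made for illustration.

```python
import numpy as np

# Hypothetical, simplified stand-in for the per-object state estimator the
# abstract describes. The thesis uses an Error-State Iterated Kalman Filter;
# here a plain constant-velocity Kalman filter on the object centroid is used
# purely to show the predict/update loop over sequential LiDAR frames.

class CentroidTracker:
    def __init__(self, init_pos, dt=0.1):
        # State: [x, y, z, vx, vy, vz]
        self.x = np.hstack([init_pos, np.zeros(3)])
        self.P = np.eye(6)                                  # state covariance
        self.F = np.eye(6)                                  # constant-velocity model
        self.F[:3, 3:] = np.eye(3) * dt
        self.Q = np.eye(6) * 0.01                           # process noise (assumed)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # observe position only
        self.R = np.eye(3) * 0.05                           # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        # z: measured centroid of the dynamic points assigned to this object
        y = z - self.H @ self.x                             # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P


def track_object(frames):
    """frames: list of (N_i, 3) arrays of dynamic points for one object."""
    tracker = CentroidTracker(init_pos=frames[0].mean(axis=0))
    trajectory = []
    for pts in frames[1:]:
        tracker.predict()
        tracker.update(pts.mean(axis=0))                    # centroid as the measurement
        trajectory.append(tracker.x[:3].copy())
    return np.array(trajectory)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic object moving along x, observed as noisy point clusters
    frames = [rng.normal([0.1 * k, 0.0, 0.0], 0.05, size=(50, 3)) for k in range(20)]
    print(track_object(frames)[-1])                         # estimate at the last frame
```

In the actual system an error-state formulation would linearize around the current estimate and iterate the update step, and the measurement would come from registering the object's accumulated point model against the new scan rather than from a simple centroid; the sketch only conveys the overall tracking structure.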

 

DC Field: Value

dc.contributor.advisor: Zhang, F
dc.contributor.advisor: Lam, J
dc.contributor.author: He, Rui
dc.contributor.author: 何锐
dc.date.accessioned: 2025-12-11T06:42:12Z
dc.date.available: 2025-12-11T06:42:12Z
dc.date.issued: 2025
dc.identifier.citation: He, R. [何锐]. (2025). Spatio-temporal reconstruction of multiple dynamic objects in LiDAR streams. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
dc.identifier.uri: http://hdl.handle.net/10722/367451
dc.description.abstract: Mobile robots such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) are increasingly deployed in hazardous or complex environments, including disaster zones, construction sites, and autonomous driving. Equipped with advanced sensors, particularly LiDAR, these robots can achieve high-precision 3D environmental understanding. LiDAR sensors provide dense, long-range, and accurate point clouds, enabling robust mapping and perception. Despite these advancements, dynamic object reconstruction in real-world environments remains challenging. Accurately detecting, tracking, and reconstructing moving objects such as vehicles and pedestrians is complicated by occlusions, sparse measurements, and temporal variation. While learning-based methods have achieved strong results, they often struggle to generalize across different LiDAR types and require extensive training, making real-time deployment difficult. This thesis introduces 4DRecon, a real-time, model-based framework for dynamic object reconstruction from sequential LiDAR data. Our method integrates motion-aware dynamic point detection via M-detector and robust state estimation using an Error-State Iterated Kalman Filter (ESIKF), enabling joint tracking and 3D shape refinement of multiple moving objects over time. 4DRecon processes raw LiDAR streams without requiring training data, ensuring generalizability across various sensor types and environments. Evaluations on the Waymo and KITTI datasets demonstrate that our method achieves up to 25× faster runtime than state-of-the-art model-based methods (e.g., LIDAR-SOT), while maintaining a reconstruction accuracy of 20 cm. By avoiding data-driven dependencies and enabling high-speed, accurate, and generalizable reconstruction, 4DRecon provides a robust solution for real-time perception in dynamic and cluttered environments.
dc.language: eng
dc.publisher: The University of Hong Kong (Pokfulam, Hong Kong)
dc.relation.ispartof: HKU Theses Online (HKUTO)
dc.rights: The author retains all proprietary rights (such as patent rights) and the right to use in future works.
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject.lcsh: Optical radar
dc.subject.lcsh: Kalman filtering
dc.subject.lcsh: Pattern recognition systems
dc.title: Spatio-temporal reconstruction of multiple dynamic objects in LiDAR streams
dc.type: PG_Thesis
dc.description.thesisname: Master of Philosophy
dc.description.thesislevel: Master
dc.description.thesisdiscipline: Mechanical Engineering
dc.description.nature: published_or_final_version
dc.date.hkucongregation: 2025
dc.identifier.mmsid: 991045147147903414
