Links for fulltext
(May Require Subscription)
- Publisher Website: https://doi.org/10.1109/TIM.2025.3551585
- Scopus: eid_2-s2.0-105001702327

Citations:
- Scopus: 0
Article: LVI-GS: Tightly Coupled LiDAR–Visual–Inertial SLAM Using 3-D Gaussian Splatting
| Title | LVI-GS: Tightly Coupled LiDAR–Visual–Inertial SLAM Using 3-D Gaussian Splatting |
|---|---|
| Authors | Zhao, Huibin; Guan, Weipeng; Lu, Peng |
| Keywords | 3-D Gaussian splatting (3DGS); 3-D reconstruction; light detection and ranging (LiDAR); robotics; sensor fusion; simultaneous localization and mapping (SLAM) |
| Issue Date | 14-Mar-2025 |
| Publisher | IEEE |
| Citation | IEEE Transactions on Instrumentation and Measurement, 2025, v. 74 |
| Abstract | Three-dimensional Gaussian splatting (3DGS) has shown its ability in rapid rendering and high-fidelity mapping. In this article, we introduce a tightly coupled LiDAR-visual–inertial SLAM using 3-D Gaussian splatting (LVI-GS), which leverages the complementary characteristics of light detection and ranging (LiDAR) and image sensors to capture both geometric structures and visual details of 3-D scenes. To this end, the 3-D Gaussians are initialized from colorized LiDAR points and optimized using differentiable rendering. To achieve high-fidelity mapping, we introduce a pyramid-based training approach to effectively learn multilevel features and incorporate depth loss derived from LiDAR measurements to improve geometric feature perception. Through well-designed strategies for Gaussian map expansion, keyframe selection, thread management, and custom compute unified device architecture (CUDA) acceleration, our framework achieves real-time photorealistic mapping. Numerical experiments are performed to evaluate the superior performance of our method compared with state-of-the-art 3-D reconstruction systems. Videos of the evaluations can be found on our website: https://kwanwaipang.github.io/LVI-GS/. |
| Persistent Identifier | http://hdl.handle.net/10722/362606 |
| ISSN | 0018-9456 (2023 Impact Factor: 5.6; 2023 SCImago Journal Rankings: 1.536) |
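
The abstract above notes that the 3-D Gaussians are supervised with a pyramid-based training scheme and a depth loss derived from LiDAR measurements. As a rough, hedged sketch only (not the authors' implementation; the function name, loss weights, masking convention, and pyramid construction below are assumptions), a multi-level photometric-plus-depth objective of that general kind could look like this in PyTorch:

```python
# Hypothetical sketch of a pyramid photometric + LiDAR depth loss for 3DGS
# map optimization, loosely following the abstract; NOT the LVI-GS code.
import torch
import torch.nn.functional as F

def pyramid_rgbd_loss(rendered_rgb, rendered_depth, gt_rgb, lidar_depth,
                      levels=3, depth_weight=0.5):
    """Combine an L1 photometric loss and a masked L1 depth loss over an
    image pyramid. All tensors are (B, C, H, W); lidar_depth uses 0 for
    pixels with no LiDAR return (assumption)."""
    total = 0.0
    for lvl in range(levels):
        if lvl > 0:
            scale = 1.0 / (2 ** lvl)
            rgb_r = F.interpolate(rendered_rgb, scale_factor=scale,
                                  mode="bilinear", align_corners=False)
            rgb_g = F.interpolate(gt_rgb, scale_factor=scale,
                                  mode="bilinear", align_corners=False)
            d_r = F.interpolate(rendered_depth, scale_factor=scale, mode="nearest")
            d_g = F.interpolate(lidar_depth, scale_factor=scale, mode="nearest")
        else:
            rgb_r, rgb_g, d_r, d_g = rendered_rgb, gt_rgb, rendered_depth, lidar_depth

        photometric = (rgb_r - rgb_g).abs().mean()
        mask = d_g > 0  # only supervise pixels that have a LiDAR depth
        depth = (d_r - d_g).abs()[mask].mean() if mask.any() else d_r.new_tensor(0.0)
        total = total + photometric + depth_weight * depth
    return total / levels
```

In practice, the per-level weighting and the handling of pixels without LiDAR returns would depend on the sensor setup and the rasterizer used; the paper itself should be consulted for the actual loss design.
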
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Zhao, Huibin | - |
| dc.contributor.author | Guan, Weipeng | - |
| dc.contributor.author | Lu, Peng | - |
| dc.date.accessioned | 2025-09-26T00:36:25Z | - |
| dc.date.available | 2025-09-26T00:36:25Z | - |
| dc.date.issued | 2025-03-14 | - |
| dc.identifier.citation | IEEE Transactions on Instrumentation and Measurement, 2025, v. 74 | - |
| dc.identifier.issn | 0018-9456 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/362606 | - |
| dc.description.abstract | Three-dimensional Gaussian splatting (3DGS) has shown its ability in rapid rendering and high-fidelity mapping. In this article, we introduce a tightly coupled LiDAR-visual–inertial SLAM using 3-D Gaussian splatting (LVI-GS), which leverages the complementary characteristics of light detection and ranging (LiDAR) and image sensors to capture both geometric structures and visual details of 3-D scenes. To this end, the 3-D Gaussians are initialized from colorized LiDAR points and optimized using differentiable rendering. To achieve high-fidelity mapping, we introduce a pyramid-based training approach to effectively learn multilevel features and incorporate depth loss derived from LiDAR measurements to improve geometric feature perception. Through well-designed strategies for Gaussian map expansion, keyframe selection, thread management, and custom compute unified device architecture (CUDA) acceleration, our framework achieves real-time photorealistic mapping. Numerical experiments are performed to evaluate the superior performance of our method compared with state-of-the-art 3-D reconstruction systems. Videos of the evaluations can be found on our website: https://kwanwaipang.github.io/LVI-GS/. | - |
| dc.language | eng | - |
| dc.publisher | IEEE | - |
| dc.relation.ispartof | IEEE Transactions on Instrumentation and Measurement | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | 3-D Gaussian splatting (3DGS) | - |
| dc.subject | 3-D reconstruction | - |
| dc.subject | light detection and ranging (LiDAR) | - |
| dc.subject | robotics | - |
| dc.subject | sensor fusion | - |
| dc.subject | simultaneous localization and mapping (SLAM) | - |
| dc.title | LVI-GS: Tightly Coupled LiDAR–Visual–Inertial SLAM Using 3-D Gaussian Splatting | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/TIM.2025.3551585 | - |
| dc.identifier.scopus | eid_2-s2.0-105001702327 | - |
| dc.identifier.volume | 74 | - |
| dc.identifier.eissn | 1557-9662 | - |
| dc.identifier.issnl | 0018-9456 | - |
