Article: GlocalFuse-Depth: Fusing transformers and CNNs for all-day self-supervised monocular depth estimation
Title | GlocalFuse-Depth: Fusing transformers and CNNs for all-day self-supervised monocular depth estimation |
---|---|
Authors | Zhang, Zezheng; Chan, Ryan KY; Wong, Kenneth KY |
Keywords | All-day image; Feature fusion; Monocular depth estimation; Self-supervised |
Issue Date | 7-Feb-2024 |
Publisher | Elsevier |
Citation | Neurocomputing, 2024, v. 569 |
Abstract | In recent years, self-supervised monocular depth estimation has drawn much attention since it requires no depth annotations and achieves remarkable results on standard benchmarks. However, most existing methods focus only on either daytime or nighttime images, and their performance degrades on the other domain because of the large gap between daytime and nighttime images. To address this problem, we propose a two-branch network named GlocalFuse-Depth for self-supervised depth estimation of all-day images. The daytime and nighttime images of an input image pair are fed into the two branches, a CNN branch and a Transformer branch, respectively, so that both local details and global dependencies can be effectively captured. In addition, a novel fusion module is proposed to fuse multi-dimensional features from the two branches. Extensive experiments demonstrate that GlocalFuse-Depth achieves state-of-the-art results for all-day images of the Oxford RobotCar dataset, confirming the superiority of our method. (A minimal architectural sketch follows this table.) |
Persistent Identifier | http://hdl.handle.net/10722/348518 |
ISSN | 0925-2312 (2023 Impact Factor: 5.5; 2023 SCImago Journal Rankings: 1.815) |
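The abstract above outlines a two-branch design: a CNN branch for local detail, a Transformer branch for global dependencies, and a fusion module that combines their features before predicting depth. The PyTorch sketch below illustrates that general idea only; it is not the authors' GlocalFuse-Depth implementation, and every module, layer size, patch size, and the simple concatenation-based fusion are assumptions made for illustration.

```python
# Minimal, illustrative sketch of a two-branch depth network in PyTorch.
# NOT the authors' GlocalFuse-Depth code: layer sizes, patch size, and the
# fusion strategy are assumptions, shown only to convey the idea of fusing a
# CNN branch (local detail) with a Transformer branch (global context).
import torch
import torch.nn as nn


class CNNBranch(nn.Module):
    """Small convolutional encoder capturing local detail (assumed structure)."""
    def __init__(self, out_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)  # (B, out_ch, H/4, W/4)


class TransformerBranch(nn.Module):
    """Patch embedding + Transformer encoder capturing global dependencies (assumed)."""
    def __init__(self, out_ch=64, patch=4):
        super().__init__()
        self.embed = nn.Conv2d(3, out_ch, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=out_ch, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        tokens = self.embed(x)                   # (B, C, H/4, W/4)
        b, c, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)  # (B, H*W/16, C)
        seq = self.encoder(seq)
        return seq.transpose(1, 2).reshape(b, c, h, w)


class FusionDepthNet(nn.Module):
    """Fuse both branches' features and predict a dense (normalized) depth map."""
    def __init__(self, ch=64):
        super().__init__()
        self.cnn = CNNBranch(ch)
        self.transformer = TransformerBranch(ch)
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=1)  # naive stand-in for the paper's fusion module
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, 1, 3, padding=1),
            nn.Sigmoid(),  # normalized disparity/depth in [0, 1]
        )

    def forward(self, day_img, night_img):
        # Per the abstract, the daytime and nighttime images of a pair go to the
        # CNN and Transformer branches respectively.
        f_local = self.cnn(day_img)
        f_global = self.transformer(night_img)
        fused = self.fuse(torch.cat([f_local, f_global], dim=1))
        return self.head(fused)


if __name__ == "__main__":
    model = FusionDepthNet()
    day = torch.randn(1, 3, 64, 128)
    night = torch.randn(1, 3, 64, 128)
    print(model(day, night).shape)  # torch.Size([1, 1, 64, 128])
```

The paper describes its fusion module as a novel component that fuses multi-dimensional features; the 1x1-convolution fusion above is merely the simplest way to wire the two branches together for a runnable example.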
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Zezheng | - |
dc.contributor.author | Chan, Ryan KY | - |
dc.contributor.author | Wong, Kenneth KY | - |
dc.date.accessioned | 2024-10-10T00:31:16Z | - |
dc.date.available | 2024-10-10T00:31:16Z | - |
dc.date.issued | 2024-02-07 | - |
dc.identifier.citation | Neurocomputing, 2024, v. 569 | - |
dc.identifier.issn | 0925-2312 | - |
dc.identifier.uri | http://hdl.handle.net/10722/348518 | - |
dc.description.abstract | In recent years, self-supervised monocular depth estimation has drawn much attention since it requires no depth annotations and achieves remarkable results on standard benchmarks. However, most existing methods focus only on either daytime or nighttime images, and their performance degrades on the other domain because of the large gap between daytime and nighttime images. To address this problem, we propose a two-branch network named GlocalFuse-Depth for self-supervised depth estimation of all-day images. The daytime and nighttime images of an input image pair are fed into the two branches, a CNN branch and a Transformer branch, respectively, so that both local details and global dependencies can be effectively captured. In addition, a novel fusion module is proposed to fuse multi-dimensional features from the two branches. Extensive experiments demonstrate that GlocalFuse-Depth achieves state-of-the-art results for all-day images of the Oxford RobotCar dataset, confirming the superiority of our method. | -
dc.language | eng | - |
dc.publisher | Elsevier | - |
dc.relation.ispartof | Neurocomputing | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | All-day image | - |
dc.subject | Feature fusion | - |
dc.subject | Monocular depth estimation | - |
dc.subject | Self-supervised | - |
dc.title | GlocalFuse-Depth: Fusing transformers and CNNs for all-day self-supervised monocular depth estimation | - |
dc.type | Article | - |
dc.identifier.doi | 10.1016/j.neucom.2023.127122 | - |
dc.identifier.scopus | eid_2-s2.0-85180376525 | - |
dc.identifier.volume | 569 | - |
dc.identifier.eissn | 1872-8286 | - |
dc.identifier.issnl | 0925-2312 | - |