Conference Paper: DDP: Diffusion model for dense visual prediction
Title | DDP: Diffusion model for dense visual prediction |
---|---|
Authors | Ji, Yuanfeng; Chen, Zhe; Xie, Enze; Hong, Lanqing; Liu, Xihui; Liu, Zhaoqiang; Lu, Tong; Li, Zhenguo; Luo, Ping |
Issue Date | 2-Oct-2023 |
Abstract | We propose a simple, efficient, yet powerful framework for dense visual predictions based on the conditional diffusion pipeline. Our approach follows a "noise-to-map" generative paradigm for prediction by progressively removing noise from a random Gaussian distribution, guided by the image. The method, called DDP, efficiently extends the denoising diffusion process into the modern perception pipeline. Without task-specific design and architecture customization, DDP generalizes easily to most dense prediction tasks, e.g., semantic segmentation and depth estimation. In addition, DDP shows attractive properties such as dynamic inference and uncertainty awareness, in contrast to previous single-step discriminative methods. We show top results on three representative tasks with six diverse benchmarks. Without tricks, DDP achieves state-of-the-art or competitive performance on each task compared to specialist counterparts, for example semantic segmentation (83.9 mIoU on Cityscapes), BEV map segmentation (70.6 mIoU on nuScenes), and depth estimation (0.05 REL on KITTI). We hope that our approach will serve as a solid baseline and facilitate future research. |
Persistent Identifier | http://hdl.handle.net/10722/337773 |
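
The abstract's "noise-to-map" paradigm can be made concrete with a short sketch: sample a Gaussian tensor shaped like the prediction map, then iteratively denoise it conditioned on image features computed once per image. The module names (`ToyDDP`, the stand-in encoder/decoder), the cosine schedule, and the DDIM-style update below are illustrative assumptions under a simplified setup, not the authors' released implementation (which omits, among other things, timestep conditioning of the denoiser).

```python
# Minimal sketch of a "noise-to-map" conditional diffusion inference loop.
# All names and the schedule are hypothetical placeholders, not DDP's code.
import math
import torch
import torch.nn as nn

def alpha_bar(t: torch.Tensor) -> torch.Tensor:
    """Cosine noise schedule: alpha_bar(0) = 1 (clean), alpha_bar(1) = 0 (pure noise)."""
    return torch.cos(t * math.pi / 2) ** 2

class ToyDDP(nn.Module):
    def __init__(self, num_classes: int = 19, steps: int = 3):
        super().__init__()
        # Stand-in image encoder: runs once per image to produce the condition.
        self.encoder = nn.Conv2d(3, 32, 3, padding=1)
        # Stand-in map decoder: denoises the noisy map given image features.
        self.decoder = nn.Conv2d(32 + num_classes, num_classes, 3, padding=1)
        self.steps = steps  # "dynamic inference": trade steps for map quality

    @torch.no_grad()
    def forward(self, image: torch.Tensor) -> torch.Tensor:
        b, _, h, w = image.shape
        feats = self.encoder(image)                          # condition, computed once
        x = torch.randn(b, self.decoder.out_channels, h, w)  # start from Gaussian noise
        ts = torch.linspace(1.0, 0.0, self.steps + 1)        # t: 1 (noise) -> 0 (map)
        for _, t_next in zip(ts[:-1], ts[1:]):
            x0 = self.decoder(torch.cat([feats, x], dim=1))  # predict the clean map
            # DDIM-style step: re-noise the estimate down to the next noise level.
            a = alpha_bar(t_next)
            x = a.sqrt() * x0 + (1 - a).sqrt() * torch.randn_like(x0)
        return x0  # per-pixel logits; argmax yields the segmentation map

# Usage: seg = ToyDDP()(torch.randn(1, 3, 64, 64)).argmax(dim=1)
```

Because the image features are encoded once and only the lightweight decoder runs per step, a few denoising steps add little cost over a single-step discriminative head, which is how this family of methods stays efficient despite being iterative.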
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ji, Yuanfeng | - |
dc.contributor.author | Chen, Zhe | - |
dc.contributor.author | Xie, Enze | - |
dc.contributor.author | Hong, Lanqing | - |
dc.contributor.author | Liu, Xihui | - |
dc.contributor.author | Liu, Zhaoqiang | - |
dc.contributor.author | Lu, Tong | - |
dc.contributor.author | Li, Zhenguo | - |
dc.contributor.author | Luo, Ping | - |
dc.date.accessioned | 2024-03-11T10:23:47Z | - |
dc.date.available | 2024-03-11T10:23:47Z | - |
dc.date.issued | 2023-10-02 | - |
dc.identifier.uri | http://hdl.handle.net/10722/337773 | - |
dc.description.abstract | We propose a simple, efficient, yet powerful framework for dense visual predictions based on the conditional diffusion pipeline. Our approach follows a "noise-to-map" generative paradigm for prediction by progressively removing noise from a random Gaussian distribution, guided by the image. The method, called DDP, efficiently extends the denoising diffusion process into the modern perception pipeline. Without task-specific design and architecture customization, DDP generalizes easily to most dense prediction tasks, e.g., semantic segmentation and depth estimation. In addition, DDP shows attractive properties such as dynamic inference and uncertainty awareness, in contrast to previous single-step discriminative methods. We show top results on three representative tasks with six diverse benchmarks. Without tricks, DDP achieves state-of-the-art or competitive performance on each task compared to specialist counterparts, for example semantic segmentation (83.9 mIoU on Cityscapes), BEV map segmentation (70.6 mIoU on nuScenes), and depth estimation (0.05 REL on KITTI). We hope that our approach will serve as a solid baseline and facilitate future research. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE International Conference on Computer Vision 2023 (02/10/2023-06/10/2023, Paris) | - |
dc.title | DDP: Diffusion model for dense visual prediction | - |
dc.type | Conference_Paper | - |