Postgraduate thesis: Adaptive video defogging base on background modeling
Field | Value |
---|---|
Title | Adaptive video defogging base on background modeling |
Authors | Yuk, Shun-cho, Jacky (郁順祖) |
Issue Date | 2013 |
Publisher | The University of Hong Kong (Pokfulam, Hong Kong) |
Citation | Yuk, S. J. [郁順祖]. (2013). Adaptive video defogging base on background modeling. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. Retrieved from http://dx.doi.org/10.5353/th_b5185918 |
Abstract | The performance of intelligent video surveillance systems is always degraded under complicated scenarios, such as dynamically changing backgrounds and extremely bad weather. Dynamically changing backgrounds make foreground/background segmentation, which is often the first step in vision-based algorithms, unreliable. Bad weather, such as fog, not only degrades the visual quality of the monitored videos, but also seriously affects the accuracy of vision-based algorithms. In this thesis, a fast and robust texture-based background modeling technique is first presented for tackling the problem of foreground/background segmentation under dynamic backgrounds. An adaptive multi-modal framework is proposed that uses a novel texture feature, known as scale-invariant local states (SILS), to model an image pixel. A pattern-less probabilistic measurement (PLPM) is also derived to estimate the probability of a pixel being background from its SILS. Experimental results show that texture-based background modeling is more robust than illumination-based approaches under dynamic backgrounds and lighting changes. Furthermore, the proposed background modeling technique runs much faster than the existing state-of-the-art texture-based method without sacrificing output quality. Two fast adaptive defogging techniques, namely 1) foreground decremental preconditioned conjugate gradient (FDPCG) and 2) adaptive guided image filtering, are then introduced for removing fog effects from video scenes. These two methods allow the estimate of the background transmission to converge over consecutive video frames; the video sequences are then background-defogged using the background transmission map. Results show that foreground/background segmentation can be improved dramatically with such background-defogged video frames. With reliable foreground/background segmentation results, the foreground transmissions can then be recovered by the proposed 1) foreground incremental preconditioned conjugate gradient (FIPCG) or 2) on-demand guided image filtering. Experimental results show that the proposed methods can effectively improve the visual quality of surveillance videos under heavy fog and bad weather. Compared with state-of-the-art image defogging methods, the proposed methods are much more efficient. [Two illustrative code sketches follow this table.] |
Degree | Doctor of Philosophy |
Subject | Electronic surveillance; Image processing -- Digital techniques; Video surveillance |
Dept/Program | Computer Science |
Persistent Identifier | http://hdl.handle.net/10722/208174 |
HKU Library Item ID | b5185918 |
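
The abstract describes an adaptive multi-modal, texture-based background model built on scale-invariant local states (SILS) and a pattern-less probabilistic measurement (PLPM). Those features are not detailed in this record, so the sketch below only illustrates the general idea of a per-pixel set of texture modes with adaptive weights; it substitutes a generic census/LBP-style code for SILS, and every name and parameter (k, alpha, max_hamming, bg_weight) is an assumption made for illustration, not the thesis's method.

```python
# Illustrative sketch only: a generic multi-modal, texture-based background
# model using a census/LBP-style code per pixel. It does NOT implement the
# thesis's SILS feature or PLPM measurement, which this record does not detail.
import numpy as np


def census_code(gray):
    """Pack 8 neighbour-vs-centre comparisons into one byte per pixel."""
    g = gray.astype(np.int16)
    h, w = g.shape
    pad = np.pad(g, 1, mode="edge")
    code = np.zeros((h, w), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        code |= (neigh > g).astype(np.uint8) << bit
    return code


class MultiModalTextureBackground:
    """k texture modes per pixel, each with an exponentially updated weight.

    A pixel is labelled background when its current texture code lies within
    a small Hamming distance of a sufficiently weighted mode; otherwise the
    weakest mode is replaced, letting the model adapt to dynamic scenes.
    """

    def __init__(self, shape, k=3, alpha=0.05, max_hamming=2, bg_weight=0.2):
        self.alpha, self.max_hamming, self.bg_weight = alpha, max_hamming, bg_weight
        self.codes = np.zeros(shape + (k,), dtype=np.uint8)
        self.weights = np.zeros(shape + (k,), dtype=np.float32)

    def apply(self, gray):
        """Return a 0/255 foreground mask and update the model in place."""
        code = census_code(gray)
        # Hamming distance between the incoming code and every stored mode.
        diff = self.codes ^ code[..., None]
        dist = np.unpackbits(diff[..., None], axis=-1).sum(axis=-1)
        match = dist <= self.max_hamming
        is_bg = (match & (self.weights >= self.bg_weight)).any(axis=-1)

        # Decay all weights, then reinforce the best matching mode or
        # replace the weakest mode with the new code.
        self.weights *= 1.0 - self.alpha
        rows, cols = np.indices(gray.shape)
        best = dist.argmin(axis=-1)
        matched = match[rows, cols, best]
        target = np.where(matched, best, self.weights.argmin(axis=-1))
        self.codes[rows, cols, target] = np.where(
            matched, self.codes[rows, cols, target], code)
        self.weights[rows, cols, target] = np.where(
            matched, self.weights[rows, cols, target] + self.alpha, self.alpha)
        return np.where(is_bg, 0, 255).astype(np.uint8)
```

Fed with consecutive grayscale frames (for example, calling `apply(frame)` on a `MultiModalTextureBackground(frame.shape)` instance in a loop), such a model needs a short warm-up before any weight passes `bg_weight`; a comparison-based texture code of this kind is what makes texture-based models tolerant of gradual lighting changes, the robustness property the abstract attributes to its approach.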
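
The defogging contributions in the abstract revolve around per-pixel transmission maps and guided image filtering. As a point of reference only, the sketch below shows the standard atmospheric-scattering fog model, I(x) = J(x) t(x) + A (1 - t(x)), with a dark-channel-style transmission estimate refined by a plain guided filter; the thesis's FDPCG/FIPCG solvers, its adaptive and on-demand guided filtering, and its split into background and foreground transmissions are not reproduced, and the parameters (omega, t_min, radius) are assumed illustrative values.

```python
# Illustrative sketch only: single-image defogging with the standard fog model
#     I(x) = J(x) * t(x) + A * (1 - t(x)),
# where I is the observed frame, J the scene radiance, t the transmission map
# and A the airlight. This is NOT the thesis's FDPCG/FIPCG or adaptive guided
# filtering; it only shows how a transmission map is used to defog a frame.
import cv2
import numpy as np


def guided_filter(guide, src, radius=40, eps=1e-3):
    """Minimal grayscale guided filter built from box filters."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, -1, ksize)
    mean_g, mean_s = mean(guide), mean(src)
    cov = mean(guide * src) - mean_g * mean_s
    var = mean(guide * guide) - mean_g * mean_g
    a = cov / (var + eps)
    b = mean_s - a * mean_g
    return mean(a) * guide + mean(b)


def defog(frame_bgr, omega=0.95, t_min=0.1, radius=7):
    """Defog one BGR frame using a dark-channel-style transmission estimate."""
    img = frame_bgr.astype(np.float32) / 255.0

    # Dark channel: per-pixel channel minimum, eroded over a local window.
    kernel = np.ones((2 * radius + 1, 2 * radius + 1), np.uint8)
    dark = cv2.erode(img.min(axis=2), kernel)

    # Airlight A: mean colour of the brightest 0.1% of dark-channel pixels.
    idx = np.argsort(dark.reshape(-1))[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)

    # Raw transmission, then edge-preserving refinement with the guided filter.
    norm = img / np.maximum(A, 1e-6)
    t_raw = 1.0 - omega * cv2.erode(norm.min(axis=2), kernel)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    t = np.clip(guided_filter(gray, t_raw.astype(np.float32)), t_min, 1.0)

    # Invert the fog model: J = (I - A) / t + A.
    J = (img - A) / t[..., None] + A
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)
```

As the abstract describes, the thesis estimates the background transmission over consecutive video frames rather than from a single image, and handles foreground transmissions separately once segmentation is reliable; the per-frame routine above only illustrates the single-image case and the final model-inversion step that any such pipeline ends with.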
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yuk, Shun-cho, Jacky | - |
dc.contributor.author | 郁順祖 | - |
dc.date.accessioned | 2015-02-20T23:07:01Z | - |
dc.date.available | 2015-02-20T23:07:01Z | - |
dc.date.issued | 2013 | - |
dc.identifier.citation | Yuk, S. J. [郁順祖]. (2013). Adaptive video defogging base on background modeling. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. Retrieved from http://dx.doi.org/10.5353/th_b5185918 | - |
dc.identifier.uri | http://hdl.handle.net/10722/208174 | - |
dc.description.abstract | The performance of intelligent video surveillance systems is always degraded under complicated scenarios, such as dynamically changing backgrounds and extremely bad weather. Dynamically changing backgrounds make foreground/background segmentation, which is often the first step in vision-based algorithms, unreliable. Bad weather, such as fog, not only degrades the visual quality of the monitored videos, but also seriously affects the accuracy of vision-based algorithms. In this thesis, a fast and robust texture-based background modeling technique is first presented for tackling the problem of foreground/background segmentation under dynamic backgrounds. An adaptive multi-modal framework is proposed that uses a novel texture feature, known as scale-invariant local states (SILS), to model an image pixel. A pattern-less probabilistic measurement (PLPM) is also derived to estimate the probability of a pixel being background from its SILS. Experimental results show that texture-based background modeling is more robust than illumination-based approaches under dynamic backgrounds and lighting changes. Furthermore, the proposed background modeling technique runs much faster than the existing state-of-the-art texture-based method without sacrificing output quality. Two fast adaptive defogging techniques, namely 1) foreground decremental preconditioned conjugate gradient (FDPCG) and 2) adaptive guided image filtering, are then introduced for removing fog effects from video scenes. These two methods allow the estimate of the background transmission to converge over consecutive video frames; the video sequences are then background-defogged using the background transmission map. Results show that foreground/background segmentation can be improved dramatically with such background-defogged video frames. With reliable foreground/background segmentation results, the foreground transmissions can then be recovered by the proposed 1) foreground incremental preconditioned conjugate gradient (FIPCG) or 2) on-demand guided image filtering. Experimental results show that the proposed methods can effectively improve the visual quality of surveillance videos under heavy fog and bad weather. Compared with state-of-the-art image defogging methods, the proposed methods are much more efficient. | -
dc.language | eng | - |
dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | - |
dc.relation.ispartof | HKU Theses Online (HKUTO) | - |
dc.rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject.lcsh | Electronic surveillance | - |
dc.subject.lcsh | Image processing -- Digital techniques | -
dc.subject.lcsh | Video surveillance | - |
dc.title | Adaptive video defogging base on background modeling | - |
dc.type | PG_Thesis | - |
dc.identifier.hkul | b5185918 | - |
dc.description.thesisname | Doctor of Philosophy | - |
dc.description.thesislevel | Doctoral | - |
dc.description.thesisdiscipline | Computer Science | - |
dc.description.nature | published_or_final_version | - |
dc.identifier.doi | 10.5353/th_b5185918 | - |
dc.identifier.mmsid | 991036818229703414 | - |