
postgraduate thesis: Adaptive video defogging base on background modeling

Title: Adaptive video defogging base on background modeling
Authors: Yuk, Shun-cho, Jacky (郁順祖)
Issue Date: 2013
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Yuk, S. J. [郁順祖]. (2013). Adaptive video defogging base on background modeling. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. Retrieved from http://dx.doi.org/10.5353/th_b5185918
Abstract: The performance of intelligent video surveillance systems is always degraded in complicated scenarios such as dynamically changing backgrounds and extremely bad weather. Dynamically changing backgrounds make foreground/background segmentation, which is often the first step in vision-based algorithms, unreliable. Bad weather, such as fog, not only degrades the visual quality of the monitoring videos but also seriously affects the accuracy of vision-based algorithms. In this thesis, a fast and robust texture-based background modeling technique is first presented for tackling the problem of foreground/background segmentation under dynamic backgrounds. An adaptive multi-modal framework is proposed which uses a novel texture feature known as scale invariant local states (SILS) to model an image pixel. A pattern-less probabilistic measurement (PLPM) is also derived to estimate the probability of a pixel being background from its SILS. Experimental results show that texture-based background modeling is more robust than illumination-based approaches under dynamic backgrounds and lighting changes. Furthermore, the proposed background modeling technique runs much faster than the existing state-of-the-art texture-based method, without sacrificing output quality. Two fast adaptive defogging techniques, namely 1) foreground decremental preconditioned conjugate gradient (FDPCG) and 2) adaptive guided image filtering, are next introduced for removing fog effects from video scenes. These two methods allow the estimates of the background transmissions to converge over consecutive video frames; the video sequences are then background-defogged using the background transmission map. Results show that foreground/background segmentation improves dramatically with such background-defogged video frames.
With reliable foreground/background segmentation results, the foreground transmissions can then be recovered by the proposed 1) foreground incremental preconditioned conjugate gradient (FIPCG) or 2) on-demand guided image filtering. Experimental results show that the proposed methods effectively improve the visual quality of surveillance videos under heavy fog and bad weather. Compared with state-of-the-art image defogging methods, the proposed methods are shown to be much more efficient.
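The defogging step in the abstract rests on the standard atmospheric-scattering model, I = J*t + A*(1 - t), where I is the observed fogged intensity, J the true scene radiance, A the airlight, and t the transmission. The thesis's contribution is estimating t efficiently over video frames (FDPCG, guided filtering); that estimation is not reproduced here. The sketch below only shows the final recovery of J once a transmission map is already available, a minimal illustration rather than the thesis's method:

```python
import numpy as np

def defog(fogged, transmission, airlight, t_min=0.1):
    """Recover scene radiance J = (I - A) / max(t, t_min) + A.

    Clamping the transmission to t_min avoids amplifying noise where
    the fog is densest (t near zero).
    """
    t = np.clip(transmission, t_min, 1.0)
    return (fogged - airlight) / t + airlight
```

As a sanity check, synthesizing fog from a known radiance and transmission map and then applying `defog` recovers the original values exactly wherever t >= t_min, since (J*t + A*(1-t) - A)/t + A = J.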
Degree: Doctor of Philosophy
Subjects: Electronic surveillance; Image processing - Digital techniques; Video surveillance
Dept/Program: Computer Science
Persistent Identifier: http://hdl.handle.net/10722/208174
HKU Library Item ID: b5185918
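The background modeling described in the abstract follows the usual pattern of maintaining a per-pixel model and flagging pixels that deviate from it as foreground. The thesis's SILS/PLPM texture approach is not reproduced here; the sketch below is a deliberately simple running-average intensity model, just to illustrate the per-pixel model-then-segment structure that the texture-based method improves upon:

```python
import numpy as np

class RunningAverageBackground:
    """Toy per-pixel background model (illustration only, not SILS/PLPM).

    Keeps an exponentially weighted mean intensity per pixel and labels
    a pixel foreground when it deviates from the mean by more than a
    fixed threshold.
    """

    def __init__(self, alpha=0.05, threshold=30.0):
        self.alpha = alpha          # learning rate for the running mean
        self.threshold = threshold  # intensity deviation for "foreground"
        self.mean = None

    def apply(self, frame):
        """Return a boolean foreground mask and update the model."""
        frame = frame.astype(np.float64)
        if self.mean is None:
            self.mean = frame.copy()         # first frame seeds the model
        mask = np.abs(frame - self.mean) > self.threshold
        # Update only where the pixel looks like background, so moving
        # objects are not absorbed into the model.
        self.mean = np.where(
            mask, self.mean,
            (1.0 - self.alpha) * self.mean + self.alpha * frame)
        return mask
```

Illumination-based models like this one fail exactly where the abstract says: a lighting change shifts intensities everywhere and floods the mask with false foreground, which is the motivation for the thesis's texture-based features.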

 

DC Field / Value
dc.contributor.author: Yuk, Shun-cho, Jacky
dc.contributor.author: 郁順祖
dc.date.accessioned: 2015-02-20T23:07:01Z
dc.date.available: 2015-02-20T23:07:01Z
dc.date.issued: 2013
dc.identifier.citation: Yuk, S. J. [郁順祖]. (2013). Adaptive video defogging base on background modeling. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. Retrieved from http://dx.doi.org/10.5353/th_b5185918
dc.identifier.uri: http://hdl.handle.net/10722/208174
dc.description.abstract: The performance of intelligent video surveillance systems is always degraded in complicated scenarios such as dynamically changing backgrounds and extremely bad weather. Dynamically changing backgrounds make the foreground/background segmentation, which is often the first step in vision-based algorithms, unreliable. Bad weather, such as fog, not only degrades the visual quality of the monitoring videos but also seriously affects the accuracy of vision-based algorithms. In this thesis, a fast and robust texture-based background modeling technique is first presented for tackling the problem of foreground/background segmentation under dynamic backgrounds. An adaptive multi-modal framework is proposed which uses a novel texture feature known as scale invariant local states (SILS) to model an image pixel. A pattern-less probabilistic measurement (PLPM) is also derived to estimate the probability of a pixel being background from its SILS. Experimental results show that texture-based background modeling is more robust than illumination-based approaches under dynamic backgrounds and lighting changes. Furthermore, the proposed background modeling technique runs much faster than the existing state-of-the-art texture-based method, without sacrificing output quality. Two fast adaptive defogging techniques, namely 1) foreground decremental preconditioned conjugate gradient (FDPCG) and 2) adaptive guided image filtering, are next introduced for removing fog effects from video scenes. These two methods allow the estimates of the background transmissions to converge over consecutive video frames; the video sequences are then background-defogged using the background transmission map. Results show that foreground/background segmentation improves dramatically with such background-defogged video frames.
With reliable foreground/background segmentation results, the foreground transmissions can then be recovered by the proposed 1) foreground incremental preconditioned conjugate gradient (FIPCG) or 2) on-demand guided image filtering. Experimental results show that the proposed methods effectively improve the visual quality of surveillance videos under heavy fog and bad weather. Compared with state-of-the-art image defogging methods, the proposed methods are shown to be much more efficient.
dc.language: eng
dc.publisher: The University of Hong Kong (Pokfulam, Hong Kong)
dc.relation.ispartof: HKU Theses Online (HKUTO)
dc.rights: The author retains all proprietary rights (such as patent rights) and the right to use in future works.
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject.lcsh: Electronic surveillance
dc.subject.lcsh: Image processing - Digital techniques
dc.subject.lcsh: Video surveillance
dc.title: Adaptive video defogging base on background modeling
dc.type: PG_Thesis
dc.identifier.hkul: b5185918
dc.description.thesisname: Doctor of Philosophy
dc.description.thesislevel: Doctoral
dc.description.thesisdiscipline: Computer Science
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.5353/th_b5185918
dc.identifier.mmsid: 991036818229703414
