
Postgraduate thesis: Towards robust image recognition via deep generative classifiers

Title: Towards robust image recognition via deep generative classifiers
Authors: Wang, Xin (王昕)
Advisors: Yiu, SM
Issue Date: 2020
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Wang, X. [王昕]. (2020). Towards robust image recognition via deep generative classifiers. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: Recent years have witnessed the success of deep neural network models in image recognition. Yet at the same time, they are surprisingly vulnerable to malicious inputs, e.g. adversarial examples, corrupted examples, and out-of-distribution (OOD) samples. Previous work usually tries to address one of these threats and designs specific solutions that are not applicable to the others. Another important but largely neglected fact is that all previous work focuses only on discriminative classifiers. This can be explained by the fact that the progress of image recognition during the past several years is due entirely to discriminative models. Though generative models are believed to be more robust, and have demonstrated great success in the realistic synthesis of images, audio, etc., they perform poorly on classification tasks. In this thesis, we first explore why fully likelihood-based generative models fail in image classification. Second, we propose an end-to-end generative classifier, Supervised Deep Infomax (SDIM). SDIM models the generative process on the representations rather than on the raw image pixels, and is able to achieve the same level of accuracy as discriminative classifiers. With explicit class conditionals in hand, we can reject illegal inputs by setting thresholds. Our experiments on adversarial examples and OOD samples show promising results. Third, instead of training SDIM-based generative classifiers from scratch, we propose SDIM-logit, which takes the logits of any discriminative classifier as inputs and transforms it into a generative one. The training of SDIM-logit is very cheap compared to full training. Building on increasingly powerful well-trained discriminative classifiers, we see improved results on the detection of various malicious inputs.
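
To make the thresholded-rejection idea in the abstract concrete, the following is a minimal sketch of how explicit class conditionals enable rejection. It is not the thesis implementation: the function names, the percentile-based calibration rule, and the dummy scores are illustrative assumptions. A generative classifier provides per-class scores log p(x | y); the argmax class is accepted only if its score clears a per-class threshold calibrated on clean training data, otherwise the input is rejected as adversarial, corrupted, or out-of-distribution.

import numpy as np

def calibrate_thresholds(train_log_probs, train_labels, percentile=1.0):
    # Hypothetical calibration rule: for each class c, take a low percentile of
    # the true-class scores log p(x | y=c) over clean training examples of class c.
    num_classes = train_log_probs.shape[1]
    return np.array([
        np.percentile(train_log_probs[train_labels == c, c], percentile)
        for c in range(num_classes)
    ])

def classify_or_reject(log_probs, thresholds):
    # Accept the argmax class only if its class-conditional score clears the
    # threshold for that class; otherwise reject the input (return -1).
    pred = int(np.argmax(log_probs))
    return pred if log_probs[pred] >= thresholds[pred] else -1

# Illustration only: random scores standing in for a 10-class generative classifier.
rng = np.random.default_rng(0)
train_scores = rng.normal(size=(1000, 10))
train_labels = rng.integers(0, 10, size=1000)
thresholds = calibrate_thresholds(train_scores, train_labels)
print(classify_or_reject(rng.normal(size=10), thresholds))
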
Degree: Doctor of Philosophy
Subject: Optical pattern recognition; Computer vision; Image processing
Dept/Program: Computer Science
Persistent Identifier: http://hdl.handle.net/10722/301040

 

DC Field: Value
dc.contributor.advisor: Yiu, SM
dc.contributor.author: Wang, Xin
dc.contributor.author: 王昕
dc.date.accessioned: 2021-07-16T14:38:41Z
dc.date.available: 2021-07-16T14:38:41Z
dc.date.issued: 2020
dc.identifier.citation: Wang, X. [王昕]. (2020). Towards robust image recognition via deep generative classifiers. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
dc.identifier.uri: http://hdl.handle.net/10722/301040
dc.description.abstract: Recent years have witnessed the success of deep neural network models in image recognition. Yet at the same time, they are surprisingly vulnerable to malicious inputs, e.g. adversarial examples, corrupted examples, and out-of-distribution (OOD) samples. Previous work usually tries to address one of these threats and designs specific solutions that are not applicable to the others. Another important but largely neglected fact is that all previous work focuses only on discriminative classifiers. This can be explained by the fact that the progress of image recognition during the past several years is due entirely to discriminative models. Though generative models are believed to be more robust, and have demonstrated great success in the realistic synthesis of images, audio, etc., they perform poorly on classification tasks. In this thesis, we first explore why fully likelihood-based generative models fail in image classification. Second, we propose an end-to-end generative classifier, Supervised Deep Infomax (SDIM). SDIM models the generative process on the representations rather than on the raw image pixels, and is able to achieve the same level of accuracy as discriminative classifiers. With explicit class conditionals in hand, we can reject illegal inputs by setting thresholds. Our experiments on adversarial examples and OOD samples show promising results. Third, instead of training SDIM-based generative classifiers from scratch, we propose SDIM-logit, which takes the logits of any discriminative classifier as inputs and transforms it into a generative one. The training of SDIM-logit is very cheap compared to full training. Building on increasingly powerful well-trained discriminative classifiers, we see improved results on the detection of various malicious inputs.
dc.language: eng
dc.publisher: The University of Hong Kong (Pokfulam, Hong Kong)
dc.relation.ispartof: HKU Theses Online (HKUTO)
dc.rights: The author retains all proprietary rights (such as patent rights) and the right to use in future works.
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject.lcsh: Optical pattern recognition
dc.subject.lcsh: Computer vision
dc.subject.lcsh: Image processing
dc.title: Towards robust image recognition via deep generative classifiers
dc.type: PG_Thesis
dc.description.thesisname: Doctor of Philosophy
dc.description.thesislevel: Doctoral
dc.description.thesisdiscipline: Computer Science
dc.description.nature: published_or_final_version
dc.date.hkucongregation: 2021
dc.identifier.mmsid: 991044390191303414
