
Postgraduate thesis: Computational image reconstruction via data-driven and model-based algorithms

Title: Computational image reconstruction via data-driven and model-based algorithms
Authors: Zeng, Tianjiao (曾天娇)
Advisors: Lam, EYM; So, HKH
Issue Date: 2021
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Zeng, T. [曾天娇]. (2021). Computational image reconstruction via data-driven and model-based algorithms. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: Computational imaging (CI) refers to the joint design of physical imaging systems and computational methods that post-process imperfect recordings of a scene. This integrated paradigm aims to improve imaging performance or to extract implicit information from the captured raw measurements through computation. By introducing an extra computational step into the imaging pipeline, the burden imposed by unwieldy and expensive hardware, physical limits on optics, or stringent requirements on the experimental process can be alleviated and transferred to computation, yielding new instrument designs with reduced restrictions on size, weight, cost, fabrication, or acquisition environment. Image reconstruction has always been an essential task in CI, as the raw intensity images are usually not ready for direct perception, or are uninterpretable with no explicit fidelity to the imaged scene. In general, computational image reconstruction covers a series of processing problems such as denoising, deconvolution, and aberration correction. Traditional model-based algorithms, which rely on explicit knowledge of the image formation process, often suffer from artifacts brought about by hand-picked parameters, experimental errors, imprecise approximations of physical models, and so on. This dissertation addresses computational image reconstruction using learning-based algorithms as well as hybrid architectures that incorporate both model-based and data-driven methods.

To reconstruct clean images from measurements captured by coherent imaging, it is especially critical to suppress speckle noise, which is caused by the scattering of coherent light after it irradiates rough surfaces. By combining nonlocal self-similarity (NSS) filters with machine learning, in the form of convolutional neural network (CNN) denoisers, a modular framework is developed to reconstruct images corrupted by speckle. Besides noticeable improvements in experimental results, the plug-and-play design allows different models to be used directly without re-training, making the proposed framework more adaptable.

Model mismatch is another common problem that leads to inferior reconstruction results. In mask-based lensless imaging, the factors contributing to model mismatch are complicated and diverse, ranging from occlusion to sensor imperfections, and from large impinging angles to specular objects. To overcome this, a novel physics-informed deep learning architecture focused on correcting such errors is developed. The proposed hybrid reconstruction network combines unrolled model-based optimization, which enforces the system physics, with deep learning layers for model correction. Experimental results demonstrate the effectiveness and robustness of the proposed architecture.

As an emerging technique in deep learning, the capsule network is designed to overcome the information loss caused by pooling and the limited internal data representation of CNNs. It has shown promising results in several applications, such as digit recognition and image segmentation. We therefore investigate, for the first time, the use of a capsule network in digital holographic reconstruction. The proposed residual encoder-decoder capsule network, which we call RedCap, uses a novel windowed spatial dynamic routing algorithm and a residual capsule block. Compared with a CNN-based network, RedCap achieves much better experimental results with a 75% reduction in the number of parameters, indicating that it processes data more efficiently.
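To make the plug-and-play idea above concrete, here is a minimal sketch of a reconstruction loop in which a model-based ADMM solver delegates its regularization step to an interchangeable denoiser (an NSS filter or a pre-trained CNN). The forward operator, step sizes, and the stand-in box-filter denoiser are illustrative assumptions, not the implementation described in the thesis.

```python
# Hypothetical sketch (not the thesis code) of a plug-and-play reconstruction loop:
# a model-based ADMM solver whose prior/regularization step is an interchangeable
# denoiser -- an NSS filter or a pre-trained CNN can be dropped in without
# re-training the rest of the pipeline.
import numpy as np

def box_denoise(v, k=3):
    """Cheap local-mean filter standing in for a CNN or NSS denoiser."""
    pad = np.pad(v, k // 2, mode="edge")
    out = np.empty_like(v)
    for i in range(v.shape[0]):
        for j in range(v.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def pnp_admm(y, A, At, denoise, rho=0.5, n_iters=20, lr=0.1):
    """Estimate x from y = A(x) + noise, plugging `denoise` in as the prior step."""
    x = At(y)                              # back-projection as the initial estimate
    z, u = x.copy(), np.zeros_like(x)
    for _ in range(n_iters):
        # Data-fidelity step: a few gradient iterations on
        # ||A(x) - y||^2 + rho * ||x - (z - u)||^2 (avoids explicit matrix inverses).
        for _ in range(5):
            x = x - lr * (At(A(x) - y) + rho * (x - (z - u)))
        z = denoise(x + u)                 # prior step: any denoiser can be swapped in
        u = u + x - z                      # dual update
    return x

# Toy usage: identity forward model with multiplicative speckle on a square target.
rng = np.random.default_rng(0)
gt = np.zeros((64, 64)); gt[16:48, 16:48] = 1.0
y = gt * rng.gamma(shape=4.0, scale=0.25, size=gt.shape)   # unit-mean speckle
recon = pnp_admm(y, A=lambda v: v, At=lambda v: v, denoise=box_denoise)
print(recon.shape)
```

Similarly, the physics-informed hybrid network for lensless imaging can be pictured as an unrolled solver: each stage takes a gradient step on the known forward model and then applies a small learned layer to correct residual model mismatch. The layer sizes, number of stages, and the convolution-by-mask forward model below are assumptions for illustration only.

```python
# Assumed sketch of an unrolled, physics-informed reconstruction network:
# alternating model-based gradient steps (system physics) with learned
# convolutional layers that correct model-mismatch errors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledCorrector(nn.Module):
    def __init__(self, n_stages: int = 5):
        super().__init__()
        self.steps = nn.Parameter(torch.full((n_stages,), 0.1))   # learned step sizes
        self.correct = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
            for _ in range(n_stages)
        ])

    def forward(self, y, A, At):
        """y: (B,1,H,W) measurements; A/At: differentiable forward model and its adjoint."""
        x = At(y)                                   # physics-based initialization
        for step, layer in zip(self.steps, self.correct):
            x = x - step * At(A(x) - y)             # unrolled model-based gradient step
            x = x + layer(x)                        # data-driven model-mismatch correction
        return x

# Toy usage with a convolution-by-mask (PSF) forward model as a placeholder:
psf = torch.randn(1, 1, 7, 7) * 0.05
A = lambda v: F.conv2d(v, psf, padding=3)
At = lambda v: F.conv_transpose2d(v, psf, padding=3)
net = UnrolledCorrector()
recon = net(A(torch.rand(2, 1, 64, 64)), A, At)
print(recon.shape)   # torch.Size([2, 1, 64, 64])
```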
Degree: Doctor of Philosophy
Subjects: Image processing - Digital techniques; Image reconstruction
Dept/Program: Electrical and Electronic Engineering
Persistent Identifier: http://hdl.handle.net/10722/308592

 

DC Field | Value | Language
dc.contributor.advisor | Lam, EYM | -
dc.contributor.advisor | So, HKH | -
dc.contributor.author | Zeng, Tianjiao | -
dc.contributor.author | 曾天娇 | -
dc.date.accessioned | 2021-12-06T01:03:57Z | -
dc.date.available | 2021-12-06T01:03:57Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Zeng, T. [曾天娇]. (2021). Computational image reconstruction via data-driven and model-based algorithms. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. | -
dc.identifier.uri | http://hdl.handle.net/10722/308592 | -
dc.language | eng | -
dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | -
dc.relation.ispartof | HKU Theses Online (HKUTO) | -
dc.rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject.lcsh | Image processing - Digital techniques | -
dc.subject.lcsh | Image reconstruction | -
dc.title | Computational image reconstruction via data-driven and model-based algorithms | -
dc.type | PG_Thesis | -
dc.description.thesisname | Doctor of Philosophy | -
dc.description.thesislevel | Doctoral | -
dc.description.thesisdiscipline | Electrical and Electronic Engineering | -
dc.description.nature | published_or_final_version | -
dc.date.hkucongregation | 2021 | -
dc.identifier.mmsid | 991044448912203414 | -
