Appears in Collections: postgraduate thesis: Computational image reconstruction via data-driven and model-based algorithms
Title | Computational image reconstruction via data-driven and model-based algorithms |
---|---|
Authors | Zeng, Tianjiao [曾天娇] |
Advisors | Lam, EYM; So, HKH |
Issue Date | 2021 |
Publisher | The University of Hong Kong (Pokfulam, Hong Kong) |
Citation | Zeng, T. [曾天娇]. (2021). Computational image reconstruction via data-driven and model-based algorithms. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. |
Abstract | Computational imaging (CI) refers to the joint design of physical imaging systems and computational methods that post-process imperfect recordings of a scene. This integrated paradigm aims to improve imaging performance or to extract implicit information from the captured raw measurements via computation. By introducing an extra computational step into the imaging pipeline, the burden imposed by unwieldy and expensive hardware, physical limits on optics, or stringent experimental requirements can be alleviated and transferred to computation, yielding new instrument designs with fewer restrictions on size, weight, cost, fabrication, or acquisition environment.
Image reconstruction has always been an essential task in CI studies, as the raw intensity images are usually unsuitable for direct perception or are uninterpretable, bearing no explicit fidelity to the imaged scene. In general, computational image reconstruction encompasses a series of processing problems such as denoising, deconvolution, and aberration correction. Traditional model-based algorithms, which rely on explicit knowledge of the image formation process, often suffer from artifacts brought about by hand-picked parameters, experimental errors, imprecise approximations of the physical model, and so on. This dissertation addresses computational image reconstruction problems using learning-based algorithms as well as hybrid architectures that incorporate both model-based and data-driven methods.
To reconstruct clean images from measurements captured via coherent imaging, it is especially critical to suppress speckle noise, which is caused by the scattering of coherent light from rough surfaces. By combining nonlocal self-similarity (NSS) filters with convolutional neural network (CNN) denoisers, a modular framework is developed to reconstruct images corrupted by speckle. Besides noticeable improvements in experimental results, the plug-and-play design allows different denoising models to be used directly without re-training, making the proposed framework more adaptable.
Model mismatch is another common problem that leads to inferior image reconstruction. In mask-based lensless imaging, the factors contributing to model mismatch are complicated and diverse, ranging from occlusion to sensor imperfections and from large impinging angles to specular objects. To overcome this, a physics-informed deep learning architecture focused on correcting such errors is developed. The proposed hybrid reconstruction network combines unrolled model-based optimization, which applies the system physics, with deep learning layers for model correction. Experimental results demonstrate the effectiveness and robustness of the proposed architecture.
As an emerging technique in deep learning, the capsule network is designed to overcome the information loss caused by pooling operations and the limited internal data representation of CNNs. It has shown promising results in several applications, such as digit recognition and image segmentation. We therefore investigate, for the first time, the use of a capsule network for digital holographic reconstruction. The proposed residual encoder-decoder capsule network, which we call RedCap, uses a novel windowed spatial dynamic routing algorithm and a residual capsule block. Compared with a CNN-based network, RedCap achieves much better experimental results with a dramatic 75% reduction in the number of parameters, indicating that it processes data more efficiently.
|
Degree | Doctor of Philosophy |
Subject | Image processing - Digital techniques; Image reconstruction |
Dept/Program | Electrical and Electronic Engineering |
Persistent Identifier | http://hdl.handle.net/10722/308592 |
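
The plug-and-play framework summarized in the abstract alternates a data-fidelity update, derived from the imaging model, with a swappable denoiser such as an NSS filter or a CNN model. The NumPy sketch below illustrates the general idea with a simplified ADMM-style loop; the forward operator `A`, the fixed gradient step size, and the `denoise` callback are illustrative assumptions rather than the exact algorithm developed in the thesis.

```python
import numpy as np

def pnp_reconstruct(y, A, At, denoise, rho=0.5, step=0.1, n_iters=30):
    """Simplified plug-and-play ADMM-style loop (illustrative sketch only).

    y       : raw measurement
    A, At   : forward operator and its adjoint (callables)
    denoise : any denoiser, e.g. an NSS filter (BM3D-like) or a CNN model;
              it can be swapped without re-training the rest of the pipeline.
    """
    x = At(y)                          # crude back-projection as initialization
    v = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iters):
        # Data-fidelity step: one gradient step on ||A x - y||^2 + rho ||x - (v - u)||^2
        grad = At(A(x) - y) + rho * (x - (v - u))
        x = x - step * grad
        # Prior step: the plug-and-play denoiser stands in for a proximal operator
        v = denoise(x + u)
        # Dual update keeps x and v consistent
        u = u + x - v
    return x

# Example usage with an identity forward model and a Gaussian-blur "denoiser"
if __name__ == "__main__":
    from scipy.ndimage import gaussian_filter
    y = np.random.rand(64, 64)
    x_hat = pnp_reconstruct(y, A=lambda x: x, At=lambda x: x,
                            denoise=lambda x: gaussian_filter(x, sigma=1.0))
```

Because the prior enters only through `denoise`, a different pretrained model can be plugged in directly, which is the adaptability the abstract refers to.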
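The hybrid lensless-imaging network described in the abstract interleaves unrolled model-based updates, which apply the known system physics, with learned layers that absorb model-mismatch error. Below is a minimal PyTorch sketch of that pattern; the number of unrolled stages, the small convolutional correction block, and the learned step sizes are assumptions made for illustration, not the architecture reported in the thesis.

```python
import torch
import torch.nn as nn

class CorrectionBlock(nn.Module):
    """Small residual CNN that learns to compensate model-mismatch error."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)          # residual correction of the current estimate


class UnrolledReconstructor(nn.Module):
    """Alternates physics-based gradient steps with learned correction layers."""
    def __init__(self, forward_op, adjoint_op, n_stages=5):
        super().__init__()
        self.A, self.At = forward_op, adjoint_op                 # known system physics
        self.step = nn.Parameter(torch.full((n_stages,), 0.1))   # learned step sizes
        self.correct = nn.ModuleList([CorrectionBlock() for _ in range(n_stages)])

    def forward(self, y):
        x = self.At(y)                                  # model-based initialization
        for k, corr in enumerate(self.correct):
            grad = self.At(self.A(x) - y)               # enforce the imaging model
            x = corr(x - self.step[k] * grad)           # learned model correction
        return x
```

Training end-to-end on measurement and ground-truth pairs lets the correction blocks absorb whatever the analytic forward model fails to capture, while the unrolled gradient steps keep the reconstruction anchored to the physics.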
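The capsule-based RedCap network replaces pooling with routing by agreement between capsule vectors. For context, the sketch below implements the standard dynamic routing procedure of Sabour et al. (2017) in NumPy; the windowed spatial routing used in the thesis presumably restricts this procedure to local spatial neighbourhoods, but that detail is not given in this record, so the code should be read as background rather than as the proposed algorithm.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Capsule nonlinearity: shrinks vector length into [0, 1), keeps direction."""
    norm2 = np.sum(v * v, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, n_iters=3):
    """Routing by agreement (Sabour et al., 2017).

    u_hat : predictions from lower capsules, shape (n_lower, n_upper, dim_upper)
    """
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                           # routing logits
    for _ in range(n_iters):
        e = np.exp(b - b.max(axis=1, keepdims=True))
        c = e / e.sum(axis=1, keepdims=True)                   # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                 # weighted sum per upper capsule
        v = squash(s)                                          # upper capsule outputs
        b = b + (u_hat * v[None, :, :]).sum(axis=-1)           # agreement update
    return v
```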
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Lam, EYM | - |
dc.contributor.advisor | So, HKH | - |
dc.contributor.author | Zeng, Tianjiao | - |
dc.contributor.author | 曾天娇 | - |
dc.date.accessioned | 2021-12-06T01:03:57Z | - |
dc.date.available | 2021-12-06T01:03:57Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Zeng, T. [曾天娇]. (2021). Computational image reconstruction via data-driven and model-based algorithms. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. | - |
dc.identifier.uri | http://hdl.handle.net/10722/308592 | - |
dc.language | eng | - |
dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | - |
dc.relation.ispartof | HKU Theses Online (HKUTO) | - |
dc.rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject.lcsh | Image processing - Digital techniques | - |
dc.subject.lcsh | Image reconstruction | - |
dc.title | Computational image reconstruction via data-driven and model-based algorithms | - |
dc.type | PG_Thesis | - |
dc.description.thesisname | Doctor of Philosophy | - |
dc.description.thesislevel | Doctoral | - |
dc.description.thesisdiscipline | Electrical and Electronic Engineering | - |
dc.description.nature | published_or_final_version | - |
dc.date.hkucongregation | 2021 | - |
dc.identifier.mmsid | 991044448912203414 | - |