Appears in Collections: postgraduate thesis: Probabilistic tensor subspace learning : foundations and innovations
Title | Probabilistic tensor subspace learning : foundations and innovations |
---|---|
Authors | Cheng, Lei [程磊] |
Advisors | Wu, YC |
Issue Date | 2017 |
Publisher | The University of Hong Kong (Pokfulam, Hong Kong) |
Citation | Cheng, L. [程磊]. (2017). Probabilistic tensor subspace learning : foundations and innovations. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. |
Abstract | This world is full of data, and these data often appear in high-dimensional structures, in which each object is described by multiple attributes. To make sense of multi-dimensional data, advanced computational tools are needed to uncover the hidden patterns underlying the data. This is where tensor subspace learning comes into play. Tensor subspace learning is an emerging topic that studies the extraction of low-dimensional yet fundamental information from multi-dimensional data. Owing to the advances in tensor subspace learning algorithms over the past decade, and their proven superior performance over the traditional matrix-based counterparts, tensor subspace learning finds many applications in diverse fields of engineering, including (but not limited to) wireless communications and image and video signal processing.
However, most current research overlooks an important problem: tensor rank determination. The tensor rank refers to the subspace dimension of the tensor. In some cases, it can be obtained from problem-specific domain knowledge, but most of the time it is unknown and must be estimated. As it has been shown that the tensor rank is non-deterministic polynomial-time hard (NP-hard) to acquire from tensor data, a dominant approach when using existing tensor subspace learning algorithms is to run multiple algorithms in parallel under different rank assumptions, and then choose the model with the smallest rank that fits the data well. Although this trial-and-error approach is widely accepted in the tensor research community, it inevitably leads to a heavy computational burden.
To tackle this problem, this thesis investigates tensor subspace learning from a probabilistic perspective. The new perspective enjoys the advantage that tensor rank determination can be fully integrated as part of the subspace learning problem, with Bayes' rule providing a natural recipe for automatic rank determination. Besides this, a number of other innovative features are incorporated into the proposed probabilistic tensor subspace learning algorithm, including the ability to handle complex-valued data, to mitigate outliers in measurements, and to deal with orthogonal constraints on the factor matrices for improved performance. Numerical studies in a variety of applications have demonstrated the effectiveness of the proposed algorithm in terms of accuracy and robustness.
While the probabilistic tensor subspace learning algorithm developed in the first part of this thesis is promising, it is designed for batch-mode operation, meaning that the whole data set must be gathered before the algorithm starts. This renders it ill-suited to the modern big-data era. To devise a scalable probabilistic tensor subspace learning algorithm, we first propose a general class of probabilistic models termed the multilayer partial conjugate exponential family (MPCEF) model. It is revealed that, under the MPCEF model, the updates of the inference algorithm are all in closed form with exponential-family parameterization. This not only greatly simplifies the algorithm derivations for any application falling into this general family of probabilistic models, but also paves the way to bridging probabilistic inference and stochastic optimization. By introducing ideas from stochastic optimization, a novel scalable tensor subspace learning algorithm is developed in the second part of the thesis. |
Degree | Doctor of Philosophy |
Subject | Computer algorithms; Machine learning |
Dept/Program | Electrical and Electronic Engineering |
Persistent Identifier | http://hdl.handle.net/10722/265341 |
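The abstract above notes that the prevailing practice is to run tensor subspace learning under several candidate ranks and keep the smallest rank that fits the data well. A minimal sketch of that trial-and-error procedure, using a plain CP (CANDECOMP/PARAFAC) decomposition fitted by alternating least squares — the function names, thresholds, and sizes here are illustrative, not taken from the thesis:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move the chosen axis to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product of two factor matrices.
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=500, seed=0):
    # Fit a rank-R CP model T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r] by
    # alternating least squares on a 3-way tensor.
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((d, rank)) for d in T.shape]
    for _ in range(iters):
        for n in range(3):
            others = [factors[m] for m in range(3) if m != n]
            KR = khatri_rao(others[0], others[1])
            gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
            factors[n] = unfold(T, n) @ KR @ np.linalg.pinv(gram)
    A, B, C = factors
    approx = np.einsum('ir,jr,kr->ijk', A, B, C)
    rel_err = np.linalg.norm(T - approx) / np.linalg.norm(T)
    return factors, rel_err

def smallest_adequate_rank(T, max_rank=4, tol=0.05):
    # Trial-and-error rank selection: refit from scratch for each candidate
    # rank and return the smallest one whose relative fit error is below tol.
    for r in range(1, max_rank + 1):
        _, err = cp_als(T, r)
        if err < tol:
            return r
    return max_rank
```

On a synthetic low-rank tensor, this scans ranks 1, 2, … and stops at the first adequate fit; the abstract's point is precisely that such a scan multiplies the computation by the number of candidate ranks, which the Bayesian formulation avoids.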
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Wu, YC | - |
dc.contributor.author | Cheng, Lei | - |
dc.contributor.author | 程磊 | - |
dc.date.accessioned | 2018-11-29T06:22:20Z | - |
dc.date.available | 2018-11-29T06:22:20Z | - |
dc.date.issued | 2017 | - |
dc.identifier.citation | Cheng, L. [程磊]. (2017). Probabilistic tensor subspace learning : foundations and innovations. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. | - |
dc.identifier.uri | http://hdl.handle.net/10722/265341 | - |
dc.description.abstract | This world is full of data, and these data often appear in high-dimensional structures, in which each object is described by multiple attributes. To make sense of multi-dimensional data, advanced computational tools are needed to uncover the hidden patterns underlying the data. This is where tensor subspace learning comes into play. Tensor subspace learning is an emerging topic that studies the extraction of low-dimensional yet fundamental information from multi-dimensional data. Owing to the advances in tensor subspace learning algorithms over the past decade, and their proven superior performance over the traditional matrix-based counterparts, tensor subspace learning finds many applications in diverse fields of engineering, including (but not limited to) wireless communications and image and video signal processing. However, most current research overlooks an important problem: tensor rank determination. The tensor rank refers to the subspace dimension of the tensor. In some cases, it can be obtained from problem-specific domain knowledge, but most of the time it is unknown and must be estimated. As it has been shown that the tensor rank is non-deterministic polynomial-time hard (NP-hard) to acquire from tensor data, a dominant approach when using existing tensor subspace learning algorithms is to run multiple algorithms in parallel under different rank assumptions, and then choose the model with the smallest rank that fits the data well. Although this trial-and-error approach is widely accepted in the tensor research community, it inevitably leads to a heavy computational burden. To tackle this problem, this thesis investigates tensor subspace learning from a probabilistic perspective. The new perspective enjoys the advantage that tensor rank determination can be fully integrated as part of the subspace learning problem, with Bayes' rule providing a natural recipe for automatic rank determination.
Besides this, a number of other innovative features are incorporated into the proposed probabilistic tensor subspace learning algorithm, including the ability to handle complex-valued data, to mitigate outliers in measurements, and to deal with orthogonal constraints on the factor matrices for improved performance. Numerical studies in a variety of applications have demonstrated the effectiveness of the proposed algorithm in terms of accuracy and robustness. While the probabilistic tensor subspace learning algorithm developed in the first part of this thesis is promising, it is designed for batch-mode operation, meaning that the whole data set must be gathered before the algorithm starts. This renders it ill-suited to the modern big-data era. To devise a scalable probabilistic tensor subspace learning algorithm, we first propose a general class of probabilistic models termed the multilayer partial conjugate exponential family (MPCEF) model. It is revealed that, under the MPCEF model, the updates of the inference algorithm are all in closed form with exponential-family parameterization. This not only greatly simplifies the algorithm derivations for any application falling into this general family of probabilistic models, but also paves the way to bridging probabilistic inference and stochastic optimization. By introducing ideas from stochastic optimization, a novel scalable tensor subspace learning algorithm is developed in the second part of the thesis. | - |
dc.language | eng | - |
dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | - |
dc.relation.ispartof | HKU Theses Online (HKUTO) | - |
dc.rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject.lcsh | Computer algorithms | - |
dc.subject.lcsh | Machine learning | - |
dc.title | Probabilistic tensor subspace learning : foundations and innovations | - |
dc.type | PG_Thesis | - |
dc.description.thesisname | Doctor of Philosophy | - |
dc.description.thesislevel | Doctoral | - |
dc.description.thesisdiscipline | Electrical and Electronic Engineering | - |
dc.description.nature | published_or_final_version | - |
dc.identifier.doi | 10.5353/th_991044014366703414 | - |
dc.date.hkucongregation | 2018 | - |
dc.identifier.mmsid | 991044014366703414 | - |
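The abstract in this record describes bridging closed-form exponential-family updates with stochastic optimization to obtain a scalable inference algorithm. A toy sketch of that general idea — a Robbins-Monro-style stochastic update of a natural-parameter estimate averaged over mini-batches, as in stochastic variational inference. The model (a Gaussian mean with known variance) and every name below are illustrative, not the thesis's MPCEF model:

```python
import numpy as np

# Toy stream: 10,000 observations from N(3, 1); the goal is a running
# estimate of the mean statistic, updated one mini-batch at a time.
rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=1.0, size=10_000)

def stochastic_update(data, batch_size=100, kappa=0.7, delay=5.0):
    # Each batch supplies a noisy estimate of the full-data sufficient
    # statistic; a decaying step size rho blends it into the running
    # natural-parameter estimate (here reduced to a single scalar).
    eta = 0.0
    for t, start in enumerate(range(0, len(data), batch_size)):
        batch = data[start:start + batch_size]
        rho = (t + delay) ** (-kappa)   # step size in (0, 1], decaying in t
        eta = (1.0 - rho) * eta + rho * batch.mean()
    return eta

estimate = stochastic_update(data)
```

Because each update touches only one mini-batch, the cost per step is independent of the total data size — the property the batch-mode algorithm described in the first part of the thesis lacks.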