Article: Scalable Random Feature Latent Variable Models

Title: Scalable Random Feature Latent Variable Models
Authors: Li, Ying; Lin, Zhidi; Liu, Yuhao; Zhang, Michael Minyi; Olmos, Pablo M.; Djuric, Petar M.
Keywords: Dirichlet process; Gaussian process; Latent variable models; random Fourier feature; variational inference
Issue Date: 1-Jan-2025
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025
Abstract

Random feature latent variable models (RFLVMs) are state-of-the-art tools for uncovering structure in high-dimensional, non-Gaussian data. However, their reliance on Monte Carlo sampling significantly limits scalability, posing challenges for large-scale applications. To overcome these limitations, we develop a scalable RFLVM framework based on variational Bayesian inference (VBI), a deterministic and optimization-based alternative to sampling methods. Applying VBI to RFLVMs is nontrivial due to two key challenges: (i) the lack of an explicit probability density function (PDF) for Dirichlet process (DP) mixing weights, and (ii) the inefficiency of existing VBI approaches when handling the high-dimensional variational parameters of RFLVMs. To address these issues, we adopt the stick-breaking construction for the DP, which provides an explicit and tractable PDF over mixing weights, and propose a novel inference algorithm, block coordinate descent variational inference (BCD-VI), which partitions variational parameters into blocks and applies tailored solvers to optimize them efficiently. The resulting scalable model, referred to as SRFLVM, supports various likelihoods; we demonstrate its effectiveness under Gaussian and logistic settings. Extensive experiments on diverse benchmark datasets show that SRFLVM achieves superior scalability, computational efficiency, and performance in latent representation learning and missing data imputation, consistently outperforming state-of-the-art latent variable models, including deep generative approaches.
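
For readers unfamiliar with the construction the abstract refers to: in the stick-breaking representation of the Dirichlet process, each mixing weight is the piece of a unit-length stick left over after earlier breaks, with Beta-distributed break proportions. This is the standard Sethuraman form; whether and where the paper truncates the series is not stated here.

\[
v_k \sim \operatorname{Beta}(1, \alpha), \qquad
\pi_k = v_k \prod_{j=1}^{k-1} (1 - v_j), \qquad k = 1, 2, \ldots
\]

Because each v_k has an explicit Beta density, any finite collection of weights has a closed-form joint density, which is what supplies the "explicit and tractable PDF over mixing weights" that the sampled-weight representation lacks.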
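
The "random Fourier feature" keyword refers to the Rahimi-Recht kernel approximation on which RFLVMs are built: a shift-invariant kernel is replaced by an inner product of randomized cosine features, so Gaussian-process computations avoid full cubic-cost kernel matrices. The following is a minimal NumPy sketch of that approximation for an RBF kernel; the function name, lengthscale parameterization, and feature count are illustrative choices, not details taken from the paper.

import numpy as np

def random_fourier_features(X, n_features=500, lengthscale=1.0, seed=0):
    """Approximate an RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2))
    so that k(x, x') ~= phi(x) . phi(x')  (Rahimi & Recht, 2007)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Frequencies drawn from the kernel's spectral density N(0, l^-2 I).
    W = rng.normal(0.0, 1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Sanity check: feature inner products approach the exact kernel matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))
Phi = random_fourier_features(X, n_features=5000)
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
print(np.max(np.abs(Phi @ Phi.T - K_exact)))  # small; shrinks as n_features grows

With M features, kernel evaluations become M-dimensional inner products, so Gram-matrix work scales as O(N M^2 + M^3) rather than O(N^3); the additional scalability claimed in the abstract then comes from replacing Monte Carlo sampling with variational inference on top of this representation.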


Persistent Identifier: http://hdl.handle.net/10722/362652
ISSN: 0162-8828
2023 Impact Factor: 20.8
2023 SCImago Journal Rankings: 6.158

 

dc.contributor.author: Li, Ying
dc.contributor.author: Lin, Zhidi
dc.contributor.author: Liu, Yuhao
dc.contributor.author: Zhang, Michael Minyi
dc.contributor.author: Olmos, Pablo M.
dc.contributor.author: Djuric, Petar M.
dc.date.accessioned: 2025-09-26T00:36:45Z
dc.date.available: 2025-09-26T00:36:45Z
dc.date.issued: 2025-01-01
dc.identifier.citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025
dc.identifier.issn: 0162-8828
dc.identifier.uri: http://hdl.handle.net/10722/362652
dc.description.abstract: Random feature latent variable models (RFLVMs) are state-of-the-art tools for uncovering structure in high-dimensional, non-Gaussian data. However, their reliance on Monte Carlo sampling significantly limits scalability, posing challenges for large-scale applications. To overcome these limitations, we develop a scalable RFLVM framework based on variational Bayesian inference (VBI), a deterministic and optimization-based alternative to sampling methods. Applying VBI to RFLVMs is nontrivial due to two key challenges: (i) the lack of an explicit probability density function (PDF) for Dirichlet process (DP) mixing weights, and (ii) the inefficiency of existing VBI approaches when handling the high-dimensional variational parameters of RFLVMs. To address these issues, we adopt the stick-breaking construction for the DP, which provides an explicit and tractable PDF over mixing weights, and propose a novel inference algorithm, block coordinate descent variational inference (BCD-VI), which partitions variational parameters into blocks and applies tailored solvers to optimize them efficiently. The resulting scalable model, referred to as SRFLVM, supports various likelihoods; we demonstrate its effectiveness under Gaussian and logistic settings. Extensive experiments on diverse benchmark datasets show that SRFLVM achieves superior scalability, computational efficiency, and performance in latent representation learning and missing data imputation, consistently outperforming state-of-the-art latent variable models, including deep generative approaches.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Dirichlet process
dc.subject: Gaussian process
dc.subject: Latent variable models
dc.subject: random Fourier feature
dc.subject: variational inference
dc.title: Scalable Random Feature Latent Variable Models
dc.type: Article
dc.identifier.doi: 10.1109/TPAMI.2025.3589728
dc.identifier.scopus: eid_2-s2.0-105011143824
dc.identifier.eissn: 1939-3539
dc.identifier.issnl: 0162-8828
