Article: CTRL: Closed-Loop Transcription to an LDR via Minimaxing Rate Reduction

Title: CTRL: Closed-Loop Transcription to an LDR via Minimaxing Rate Reduction
Authors: Dai, Xili; Tong, Shengbang; Li, Mingyang; Wu, Ziyang; Psenka, Michael; Chan, Kwan Ho Ryan; Zhai, Pengyuan; Yu, Yaodong; Yuan, Xiaojun; Shum, Heung Yeung; Ma, Yi
Keywords: closed-loop transcription; linear discriminative representation; minimax game; rate reduction
Issue Date: 2022
Citation: Entropy, 2022, v. 24, n. 4, article no. 456
Abstract: This work proposes a new computational framework for learning a structured generative model for real-world datasets. In particular, we propose to learn a Closed-loop Transcription between a multi-class, multi-dimensional data distribution and a Linear discriminative representation (CTRL) in a feature space that consists of multiple independent multi-dimensional linear subspaces. We argue that the optimal encoding and decoding mappings sought can be formulated as a two-player minimax game between the encoder and decoder over the learned representation. A natural utility function for this game is the so-called rate reduction, a simple information-theoretic measure for distances between mixtures of subspace-like Gaussians in the feature space. Our formulation draws inspiration from closed-loop error feedback in control systems and avoids the expensive evaluation and minimization of approximated distances between arbitrary distributions in either the data space or the feature space. To a large extent, this new formulation unifies the concepts and benefits of Auto-Encoding and GANs, and naturally extends them to the setting of learning a representation that is both discriminative and generative for multi-class, multi-dimensional real-world data. Our extensive experiments on many benchmark image datasets demonstrate the tremendous potential of this new closed-loop formulation: under fair comparison, the visual quality of the learned decoder and the classification performance of the encoder are competitive with, and arguably better than, existing methods based on GANs, VAEs, or a combination of both. Unlike existing generative models, the learned features of the multiple classes are structured rather than hidden: different classes are explicitly mapped onto corresponding independent principal subspaces in the feature space, and diverse visual attributes within each class are modeled by the independent principal components within each subspace.
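The rate-reduction utility mentioned in the abstract has a simple closed form in the MCR² line of work this paper builds on: the coding rate of all features minus the average coding rate of each class's features. The following NumPy sketch is illustrative only; the function names and the distortion parameter eps are assumptions, not the authors' implementation, and the actual CTRL objective plays this quantity inside a minimax game between encoder and decoder.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    # R(Z): bits needed to encode the d x n feature matrix Z up to distortion eps,
    # R(Z) = (1/2) * logdet(I + d/(n * eps^2) * Z Z^T).
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * (Z @ Z.T))
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    # Delta R = R(Z) - sum_j (n_j / n) * R(Z_j): rate of the whole feature set
    # minus the label-weighted average rate of each class's features.
    d, n = Z.shape
    Rc = 0.0
    for j in np.unique(labels):
        Zj = Z[:, labels == j]          # features of class j
        nj = Zj.shape[1]
        _, logdet = np.linalg.slogdet(
            np.eye(d) + (d / (nj * eps**2)) * (Zj @ Zj.T))
        Rc += (nj / n) * 0.5 * logdet
    return coding_rate(Z, eps) - Rc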
Persistent Identifierhttp://hdl.handle.net/10722/327782
ISI Accession Number ID

 

DC FieldValueLanguage
dc.contributor.authorDai, Xili-
dc.contributor.authorTong, Shengbang-
dc.contributor.authorLi, Mingyang-
dc.contributor.authorWu, Ziyang-
dc.contributor.authorPsenka, Michael-
dc.contributor.authorChan, Kwan Ho Ryan-
dc.contributor.authorZhai, Pengyuan-
dc.contributor.authorYu, Yaodong-
dc.contributor.authorYuan, Xiaojun-
dc.contributor.authorShum, Heung Yeung-
dc.contributor.authorMa, Yi-
dc.date.accessioned2023-05-08T02:26:46Z-
dc.date.available2023-05-08T02:26:46Z-
dc.date.issued2022-
dc.identifier.citationEntropy, 2022, v. 24, n. 4, article no. 456-
dc.identifier.urihttp://hdl.handle.net/10722/327782-
dc.description.abstractThis work proposes a new computational framework for learning a structured generative model for real-world datasets. In particular, we propose to learn a Closed-loop Transcriptionbetween a multi-class, multi-dimensional data distribution and a Linear discriminative representation (CTRL) in the feature space that consists of multiple independent multi-dimensional linear subspaces. In particular, we argue that the optimal encoding and decoding mappings sought can be formulated as a two-player minimax game between the encoder and decoderfor the learned representation. A natural utility function for this game is the so-called rate reduction, a simple information-theoretic measure for distances between mixtures of subspace-like Gaussians in the feature space. Our formulation draws inspiration from closed-loop error feedback from control systems and avoids expensive evaluating and minimizing of approximated distances between arbitrary distributions in either the data space or the feature space. To a large extent, this new formulation unifies the concepts and benefits of Auto-Encoding and GAN and naturally extends them to the settings of learning a both discriminative and generative representation for multi-class and multi-dimensional real-world data. Our extensive experiments on many benchmark imagery datasets demonstrate tremendous potential of this new closed-loop formulation: under fair comparison, visual quality of the learned decoder and classification performance of the encoder is competitive and arguably better than existing methods based on GAN, VAE, or a combination of both. Unlike existing generative models, the so-learned features of the multiple classes are structured instead of hidden: different classes are explicitly mapped onto corresponding independent principal subspaces in the feature space, and diverse visual attributes within each class are modeled by the independent principal components within each subspace.-
dc.languageeng-
dc.relation.ispartofEntropy-
dc.subjectclosed-loop transcription-
dc.subjectlinear discriminative representation-
dc.subjectminimax game-
dc.subjectrate reduction-
dc.titleCTRL: Closed-Loop Transcription to an LDR via Minimaxing Rate Reduction-
dc.typeArticle-
dc.description.naturelink_to_subscribed_fulltext-
dc.identifier.doi10.3390/e24040456-
dc.identifier.scopuseid_2-s2.0-85127901075-
dc.identifier.volume24-
dc.identifier.issue4-
dc.identifier.spagearticle no. 456-
dc.identifier.epagearticle no. 456-
dc.identifier.eissn1099-4300-
dc.identifier.isiWOS:000786991200001-

Export via OAI-PMH Interface in XML Formats


OR


Export to Other Non-XML Formats