Article: An Error Analysis of Generative Adversarial Networks for Learning Distributions

Title: An Error Analysis of Generative Adversarial Networks for Learning Distributions
Authors: Huang, Jian; Jiao, Yuling; Li, Zhen; Liu, Shiao; Wang, Yang; Yang, Yunfei
Keywords: convergence rate; deep neural networks; error decomposition; generative adversarial networks; risk bound
Issue Date: 2022
Citation: Journal of Machine Learning Research, 2022, v. 23
Abstract: This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples. Our main results establish the convergence rates of GANs under a collection of integral probability metrics defined through Hölder classes, including the Wasserstein distance as a special case. We also show that GANs are able to adaptively learn data distributions that have low-dimensional structures or Hölder densities, when the network architectures are chosen properly. In particular, for distributions concentrated around a low-dimensional set, we show that the learning rates of GANs do not depend on the high ambient dimension, but on the lower intrinsic dimension. Our analysis is based on a new oracle inequality decomposing the estimation error into the generator and discriminator approximation error and the statistical error, which may be of independent interest.
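A note on the terminology in the abstract: an integral probability metric (IPM) compares two distributions \mu and \nu through a class \mathcal{F} of test functions,

    d_{\mathcal{F}}(\mu, \nu) = \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{X \sim \mu}[f(X)] - \mathbb{E}_{Y \sim \nu}[f(Y)] \right|.

Taking \mathcal{F} to be the 1-Lipschitz functions (the Hölder class of smoothness 1) recovers the Wasserstein-1 distance via Kantorovich-Rubinstein duality, which is the special case named in the abstract. The oracle inequality the abstract describes then takes, schematically (a sketch of the decomposition described above, not the paper's exact statement),

    d_{\mathcal{F}}(\hat{\mu}_n, \mu) \lesssim \mathcal{E}_{\mathrm{gen}} + \mathcal{E}_{\mathrm{disc}} + \mathcal{E}_{\mathrm{stat}},

where \hat{\mu}_n is the GAN estimate from n samples, \mathcal{E}_{\mathrm{gen}} and \mathcal{E}_{\mathrm{disc}} are the approximation errors of the generator and discriminator network classes, and \mathcal{E}_{\mathrm{stat}} is the statistical error; the exact form is given in the paper.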
Persistent Identifier: http://hdl.handle.net/10722/363459
ISSN: 1532-4435
2023 Impact Factor: 4.3
2023 SCImago Journal Rankings: 2.796

 

dc.contributor.author: Huang, Jian
dc.contributor.author: Jiao, Yuling
dc.contributor.author: Li, Zhen
dc.contributor.author: Liu, Shiao
dc.contributor.author: Wang, Yang
dc.contributor.author: Yang, Yunfei
dc.date.accessioned: 2025-10-10T07:47:01Z
dc.date.available: 2025-10-10T07:47:01Z
dc.date.issued: 2022
dc.identifier.citation: Journal of Machine Learning Research, 2022, v. 23
dc.identifier.issn: 1532-4435
dc.identifier.uri: http://hdl.handle.net/10722/363459
dc.description.abstract: This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples. Our main results establish the convergence rates of GANs under a collection of integral probability metrics defined through Hölder classes, including the Wasserstein distance as a special case. We also show that GANs are able to adaptively learn data distributions that have low-dimensional structures or Hölder densities, when the network architectures are chosen properly. In particular, for distributions concentrated around a low-dimensional set, we show that the learning rates of GANs do not depend on the high ambient dimension, but on the lower intrinsic dimension. Our analysis is based on a new oracle inequality decomposing the estimation error into the generator and discriminator approximation error and the statistical error, which may be of independent interest.
dc.language: eng
dc.relation.ispartof: Journal of Machine Learning Research
dc.subject: convergence rate
dc.subject: deep neural networks
dc.subject: error decomposition
dc.subject: Generative adversarial networks
dc.subject: risk bound
dc.title: An Error Analysis of Generative Adversarial Networks for Learning Distributions
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85130354273
dc.identifier.volume: 23
dc.identifier.eissn: 1533-7928
