Article: An Error Analysis of Generative Adversarial Networks for Learning Distributions
| Title | An Error Analysis of Generative Adversarial Networks for Learning Distributions |
|---|---|
| Authors | Huang, Jian; Jiao, Yuling; Li, Zhen; Liu, Shiao; Wang, Yang; Yang, Yunfei |
| Keywords | convergence rate; deep neural networks; error decomposition; Generative adversarial networks; risk bound |
| Issue Date | 2022 |
| Citation | Journal of Machine Learning Research, 2022, v. 23 |
| Abstract | This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples. Our main results establish the convergence rates of GANs under a collection of integral probability metrics defined through Hölder classes, including the Wasserstein distance as a special case. We also show that GANs are able to adaptively learn data distributions with low-dimensional structures or with Hölder densities, when the network architectures are chosen properly. In particular, for distributions concentrated around a low-dimensional set, we show that the learning rates of GANs do not depend on the high ambient dimension, but on the lower intrinsic dimension. Our analysis is based on a new oracle inequality decomposing the estimation error into the generator and discriminator approximation errors and the statistical error, which may be of independent interest. (A notation sketch follows this table.) |
| Persistent Identifier | http://hdl.handle.net/10722/363459 |
| ISSN | 1532-4435 |
| Journal Metrics | 2023 Impact Factor: 4.3; 2023 SCImago Journal Rank: 2.796 |
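
For readers skimming the record, here is a minimal sketch of the two technical objects the abstract names: the integral probability metric (IPM) over a Hölder class, and the error decomposition behind the oracle inequality. The notation ($d_{\mathcal{F}}$, $\mathcal{H}^\beta$, the GAN estimate $\hat{\mu}_n$, and the three error terms) is standard in this literature but assumed here; this is an illustration, not the paper's exact statement.

```latex
% Sketch only: the notation (d_F, H^beta, \hat{\mu}_n, error terms) is
% assumed standard, not taken verbatim from the paper.

% Integral probability metric (IPM) indexed by a function class F:
\[
  d_{\mathcal{F}}(\mu,\nu)
    = \sup_{f \in \mathcal{F}}
      \Bigl( \mathbb{E}_{X\sim\mu}[f(X)] - \mathbb{E}_{Y\sim\nu}[f(Y)] \Bigr).
\]

% Taking F to be a Hölder class H^beta gives the family of metrics in the
% abstract; for beta = 1 (1-Lipschitz functions), Kantorovich-Rubinstein
% duality identifies the IPM with the Wasserstein-1 distance.

% Schematic form of the oracle-inequality decomposition: the error of the
% GAN estimate \hat{\mu}_n against the target \mu splits into generator
% approximation, discriminator approximation, and statistical error.
\[
  d_{\mathcal{H}^{\beta}}(\hat{\mu}_n,\mu)
    \lesssim \mathcal{E}_{\mathrm{generator}}
           + \mathcal{E}_{\mathrm{discriminator}}
           + \mathcal{E}_{\mathrm{statistical}}.
\]
```

Because each term can be controlled separately, the network architectures can be tuned to the structure of the data distribution, which is how the abstract's adaptivity claims arise.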
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Huang, Jian | - |
| dc.contributor.author | Jiao, Yuling | - |
| dc.contributor.author | Li, Zhen | - |
| dc.contributor.author | Liu, Shiao | - |
| dc.contributor.author | Wang, Yang | - |
| dc.contributor.author | Yang, Yunfei | - |
| dc.date.accessioned | 2025-10-10T07:47:01Z | - |
| dc.date.available | 2025-10-10T07:47:01Z | - |
| dc.date.issued | 2022 | - |
| dc.identifier.citation | Journal of Machine Learning Research, 2022, v. 23 | - |
| dc.identifier.issn | 1532-4435 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/363459 | - |
| dc.description.abstract | This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples. Our main results establish the convergence rates of GANs under a collection of integral probability metrics defined through Hölder classes, including the Wasserstein distance as a special case. We also show that GANs are able to adaptively learn data distributions with low-dimensional structures or with Hölder densities, when the network architectures are chosen properly. In particular, for distributions concentrated around a low-dimensional set, we show that the learning rates of GANs do not depend on the high ambient dimension, but on the lower intrinsic dimension. Our analysis is based on a new oracle inequality decomposing the estimation error into the generator and discriminator approximation errors and the statistical error, which may be of independent interest. | - |
| dc.language | eng | - |
| dc.relation.ispartof | Journal of Machine Learning Research | - |
| dc.subject | convergence rate | - |
| dc.subject | deep neural networks | - |
| dc.subject | error decomposition | - |
| dc.subject | Generative adversarial networks | - |
| dc.subject | risk bound | - |
| dc.title | An Error Analysis of Generative Adversarial Networks for Learning Distributions | - |
| dc.type | Article | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.scopus | eid_2-s2.0-85130354273 | - |
| dc.identifier.volume | 23 | - |
| dc.identifier.eissn | 1533-7928 | - |

