Links for fulltext
(May Require Subscription)
- Scopus: eid_2-s2.0-85071155279
- WOS: WOS:000509687902101
Conference Paper: Sampling from non-log-concave distributions via stochastic variance-reduced gradient Langevin dynamics
Title | Sampling from non-log-concave distributions via stochastic variance-reduced gradient Langevin dynamics |
---|---|
Authors | Zou, Difan; Xu, Pan; Gu, Quanquan |
Issue Date | 2020 |
Citation | AISTATS 2019 - 22nd International Conference on Artificial Intelligence and Statistics, 2020 |
Abstract | We study stochastic variance reduction-based Langevin dynamics algorithms, SVRG-LD and SAGA-LD (Dubey et al., 2016), for sampling from non-log-concave distributions. Under certain assumptions on the log density function, we establish the convergence guarantees of SVRG-LD and SAGA-LD in 2-Wasserstein distance. More specifically, we show that both SVRG-LD and SAGA-LD require Õ(n + n^(3/4)/ε^2 + n^(1/2)/ε^4) · exp(Õ(d + γ)) stochastic gradient evaluations to achieve ε-accuracy in 2-Wasserstein distance, which outperforms the Õ(n/ε^4) · exp(Õ(d + γ)) gradient complexity achieved by the Langevin Monte Carlo method (Raginsky et al., 2017). Experiments on synthetic data and real data back up our theory. |
Persistent Identifier | http://hdl.handle.net/10722/316526 |
ISI Accession Number ID | WOS:000509687902101 |
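The abstract above concerns SVRG-LD, a Langevin dynamics sampler that replaces the plain stochastic gradient with an SVRG-style variance-reduced estimator. A minimal sketch of that update rule is below, assuming a hypothetical one-dimensional double-well finite-sum potential F(x) = (1/n) Σᵢ fᵢ(x) (the step size, epoch lengths, and potential are illustrative choices, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite-sum potential: F(x) = (1/n) * sum_i f_i(x), with
# f_i(x) = (x^2 - a_i)^2 / 4.  The target density is proportional to
# exp(-F(x)); the double well makes it non-log-concave.
n = 100
a = rng.normal(loc=1.0, scale=0.1, size=n)  # per-component offsets (illustrative)

def grad_f(i, x):
    """Gradient of f_i(x) = (x^2 - a_i)^2 / 4."""
    return (x**2 - a[i]) * x

def svrg_ld(x0, eta=1e-3, epochs=20, inner=100, batch=10):
    """Sketch of SVRG-LD: Langevin dynamics with an SVRG gradient estimator."""
    x = x0
    samples = []
    for _ in range(epochs):
        w = x                                             # snapshot point
        mu = np.mean([grad_f(i, w) for i in range(n)])    # full gradient at snapshot
        for _ in range(inner):
            idx = rng.integers(0, n, size=batch)          # minibatch indices
            # Variance-reduced estimator: minibatch gradient plus a
            # control-variate correction anchored at the snapshot.
            g = mu + np.mean([grad_f(i, x) - grad_f(i, w) for i in idx])
            # Langevin step: gradient descent plus injected Gaussian noise.
            x = x - eta * g + np.sqrt(2 * eta) * rng.normal()
            samples.append(x)
    return np.array(samples)

samples = svrg_ld(x0=0.0)
```

SAGA-LD differs only in the gradient estimator (it stores and incrementally updates per-component gradients rather than recomputing a full gradient each epoch); the Langevin noise injection is identical.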
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zou, Difan | - |
dc.contributor.author | Xu, Pan | - |
dc.contributor.author | Gu, Quanquan | - |
dc.date.accessioned | 2022-09-14T11:40:40Z | - |
dc.date.available | 2022-09-14T11:40:40Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | AISTATS 2019 - 22nd International Conference on Artificial Intelligence and Statistics, 2020 | - |
dc.identifier.uri | http://hdl.handle.net/10722/316526 | - |
dc.description.abstract | We study stochastic variance reduction-based Langevin dynamics algorithms, SVRG-LD and SAGA-LD (Dubey et al., 2016), for sampling from non-log-concave distributions. Under certain assumptions on the log density function, we establish the convergence guarantees of SVRG-LD and SAGA-LD in 2-Wasserstein distance. More specifically, we show that both SVRG-LD and SAGA-LD require Õ(n + n^(3/4)/ε^2 + n^(1/2)/ε^4) · exp(Õ(d + γ)) stochastic gradient evaluations to achieve ε-accuracy in 2-Wasserstein distance, which outperforms the Õ(n/ε^4) · exp(Õ(d + γ)) gradient complexity achieved by the Langevin Monte Carlo method (Raginsky et al., 2017). Experiments on synthetic data and real data back up our theory. | - |
dc.language | eng | - |
dc.relation.ispartof | AISTATS 2019 - 22nd International Conference on Artificial Intelligence and Statistics | - |
dc.title | Sampling from non-log-concave distributions via stochastic variance-reduced gradient Langevin dynamics | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.scopus | eid_2-s2.0-85071155279 | - |
dc.identifier.isi | WOS:000509687902101 | - |