Article: An adaptive Hessian approximated stochastic gradient MCMC method

Title: An adaptive Hessian approximated stochastic gradient MCMC method
Authors: Wang, Yating; Deng, Wei; Lin, Guang
Keywords: Stochastic approximation; Limited memory BFGS; Deep learning; Highly correlated density; Hessian approximated stochastic gradient MCMC; Adaptive Bayesian method
Issue Date: 2021
Citation: Journal of Computational Physics, 2021, v. 432, article no. 110150
Abstract: Bayesian approaches have been successfully integrated into the training of deep neural networks. One popular family is stochastic gradient Markov chain Monte Carlo (SG-MCMC) methods, which have gained increasing interest for their ability to handle large datasets and their potential to avoid overfitting. Although standard SG-MCMC methods perform well on a variety of problems, they may be inefficient when the random variables in the target posterior density differ in scale or are highly correlated. In this work, we present an adaptive Hessian approximated stochastic gradient MCMC method that incorporates local geometric information while sampling from the posterior. The idea is to apply stochastic approximation (SA) to sequentially update a preconditioning matrix at each iteration. The preconditioner carries second-order information and can guide the random walk of the sampler efficiently. Instead of computing and storing the full Hessian of the log posterior, we use a limited memory of samples and their stochastic gradients to approximate the inverse Hessian-vector product in the updating formula. Moreover, by smoothly optimizing the preconditioning matrix via SA, the proposed algorithm converges asymptotically to the target distribution with a controllable bias under mild conditions. To reduce the training and testing computational burden, we adopt a magnitude-based weight pruning method to enforce sparsity in the network. Our method is user-friendly and achieves better learning results than standard SG-MCMC updating rules. The inverse-Hessian approximation alleviates the storage and computational cost for high-dimensional models. Numerical experiments are performed on several problems, including sampling from a 2D correlated distribution, synthetic regression problems, and learning the numerical solution of a heterogeneous elliptic PDE. The numerical results demonstrate substantial improvement in both convergence rate and accuracy.
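The abstract's two main computational ingredients can be sketched in a few lines. The sketch below is illustrative only, not the paper's code: the L-BFGS-style two-loop recursion approximates the inverse-Hessian-vector product from a limited memory of parameter and gradient differences, a simplified preconditioned SGLD step uses it as the drift (the paper additionally smooths the preconditioner via stochastic approximation and accounts for the noise scaling to control the bias), and a magnitude-based mask enforces sparsity. All function names are hypothetical.

```python
import numpy as np

def two_loop_hvp(grad, s_list, y_list):
    """Approximate H^{-1} @ grad with the L-BFGS two-loop recursion.

    s_list holds recent parameter differences and y_list the matching
    (stochastic) gradient differences, so the full Hessian is never
    formed or stored -- the limited-memory idea the abstract describes.
    """
    q = np.array(grad, dtype=float)
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:
        # Initial Hessian scaling from the newest curvature pair.
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    # Second loop: oldest pair to newest, consuming alphas in reverse.
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return q

def precond_sgld_step(theta, stoch_grad_log_post, s_list, y_list, lr, rng):
    """One preconditioned SGLD step: the drift is the approximate
    inverse Hessian applied to the stochastic gradient of the log
    posterior. Simplified sketch: the injected noise here is isotropic,
    whereas a bias-controlled scheme would adapt it to the preconditioner."""
    drift = two_loop_hvp(stoch_grad_log_post, s_list, y_list)
    noise = np.sqrt(2.0 * lr) * rng.standard_normal(theta.shape)
    return theta + lr * drift + noise

def magnitude_prune(theta, sparsity=0.9):
    """Magnitude-based weight pruning: zero out the fraction `sparsity`
    of the weights with the smallest absolute value."""
    thresh = np.quantile(np.abs(theta), sparsity)
    return np.where(np.abs(theta) >= thresh, theta, 0.0)
```

Two sanity properties make the recursion easy to check: with curvature pairs satisfying y = s (identity Hessian) it returns the gradient unchanged, and it satisfies the secant condition exactly for the newest pair.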
Persistent Identifier: http://hdl.handle.net/10722/303731
ISSN: 0021-9991
2023 Impact Factor: 3.8
2023 SCImago Journal Rankings: 1.679
ISI Accession Number ID: WOS:000636580800004

 

DC Field: Value
dc.contributor.author: Wang, Yating
dc.contributor.author: Deng, Wei
dc.contributor.author: Lin, Guang
dc.date.accessioned: 2021-09-15T08:25:54Z
dc.date.available: 2021-09-15T08:25:54Z
dc.date.issued: 2021
dc.identifier.citation: Journal of Computational Physics, 2021, v. 432, article no. 110150
dc.identifier.issn: 0021-9991
dc.identifier.uri: http://hdl.handle.net/10722/303731
dc.description.abstract: Bayesian approaches have been successfully integrated into the training of deep neural networks. One popular family is stochastic gradient Markov chain Monte Carlo (SG-MCMC) methods, which have gained increasing interest for their ability to handle large datasets and their potential to avoid overfitting. Although standard SG-MCMC methods perform well on a variety of problems, they may be inefficient when the random variables in the target posterior density differ in scale or are highly correlated. In this work, we present an adaptive Hessian approximated stochastic gradient MCMC method that incorporates local geometric information while sampling from the posterior. The idea is to apply stochastic approximation (SA) to sequentially update a preconditioning matrix at each iteration. The preconditioner carries second-order information and can guide the random walk of the sampler efficiently. Instead of computing and storing the full Hessian of the log posterior, we use a limited memory of samples and their stochastic gradients to approximate the inverse Hessian-vector product in the updating formula. Moreover, by smoothly optimizing the preconditioning matrix via SA, the proposed algorithm converges asymptotically to the target distribution with a controllable bias under mild conditions. To reduce the training and testing computational burden, we adopt a magnitude-based weight pruning method to enforce sparsity in the network. Our method is user-friendly and achieves better learning results than standard SG-MCMC updating rules. The inverse-Hessian approximation alleviates the storage and computational cost for high-dimensional models. Numerical experiments are performed on several problems, including sampling from a 2D correlated distribution, synthetic regression problems, and learning the numerical solution of a heterogeneous elliptic PDE. The numerical results demonstrate substantial improvement in both convergence rate and accuracy.
dc.language: eng
dc.relation.ispartof: Journal of Computational Physics
dc.subject: Stochastic approximation
dc.subject: Limited memory BFGS
dc.subject: Deep learning
dc.subject: Highly correlated density
dc.subject: Hessian approximated stochastic gradient MCMC
dc.subject: Adaptive Bayesian method
dc.title: An adaptive Hessian approximated stochastic gradient MCMC method
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1016/j.jcp.2021.110150
dc.identifier.scopus: eid_2-s2.0-85100415471
dc.identifier.volume: 432
dc.identifier.spage: article no. 110150
dc.identifier.epage: article no. 110150
dc.identifier.eissn: 1090-2716
dc.identifier.isi: WOS:000636580800004
