Links for fulltext (may require subscription):
- Publisher Website: https://doi.org/10.1109/TPAMI.2025.3572766
- Scopus: eid_2-s2.0-105005961483

Article: Feature Preserving Shrinkage on Bayesian Neural Networks via the R2D2 Prior
| Title | Feature Preserving Shrinkage on Bayesian Neural Networks via the R2D2 Prior |
|---|---|
| Authors | Chan, Tsai Hor; Zhang, Dora Yan; Yin, Guosheng; Yu, Lequan |
| Keywords | Bayesian Neural Network; Medical Image Analysis; Shrinkage Priors; Uncertainty Estimation; Variational Inference |
| Issue Date | 1-Jan-2025 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, v. 47, n. 9, p. 7987-8000 |
| Abstract | Bayesian neural networks (BNNs) treat network weights as random variables, aiming to provide posterior uncertainty estimates and to avoid overfitting by performing inference over the posterior weights. However, selecting appropriate prior distributions remains challenging, and BNNs may suffer from catastrophically inflated variance or poor predictive performance when the priors are poorly chosen. Existing BNN designs apply various priors to the weights, but these priors either fail to shrink noisy signals sufficiently or are prone to over-shrinking important signals in the weights. To alleviate this problem, we propose a novel R2D2-Net, which imposes the R²-induced Dirichlet Decomposition (R2D2) prior on the BNN weights. The R2D2-Net effectively shrinks irrelevant coefficients towards zero while preventing key features from over-shrinkage. To approximate the posterior distribution of the weights more accurately, we further propose a variational Gibbs inference algorithm that combines Gibbs updating with gradient-based optimization. This strategy enhances the stability and consistency of estimation when the variational objective involving the shrinkage parameters is non-convex. We also analyze the evidence lower bound (ELBO) and the posterior concentration rates from a theoretical perspective. Experiments on both natural and medical image classification and uncertainty estimation tasks demonstrate the satisfactory performance of our method. |
| Persistent Identifier | http://hdl.handle.net/10722/361917 |
| ISSN | 0162-8828 (2023 Impact Factor: 20.8; 2023 SCImago Journal Rank: 6.158) |
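The R2D2 prior named in the abstract above is, in its original regression formulation, a global-local scale mixture induced by placing a Beta prior on the coefficient of determination R². A minimal sketch of that hierarchy follows, using the standard parameterization from the shrinkage-prior literature; the exact form that R2D2-Net places on the BNN weights may differ in detail.

```latex
% Sketch of the R2D2 prior hierarchy (standard regression form);
% the parameterization R2D2-Net uses for BNN weights may differ.
\begin{align*}
  w_j \mid \psi_j, \phi_j, \omega, \sigma^2
    &\sim \mathcal{N}\!\bigl(0,\ \tfrac{1}{2}\,\psi_j\,\phi_j\,\omega\,\sigma^2\bigr),\\
  \psi_j &\sim \operatorname{Exp}(1/2),\\
  (\phi_1,\dots,\phi_p) &\sim \operatorname{Dirichlet}(a_\pi,\dots,a_\pi),\\
  \omega &\sim \operatorname{BetaPrime}(a,b),
    \quad \text{induced by } R^2 \sim \operatorname{Beta}(a,b),\
    \omega = \frac{R^2}{1-R^2}.
\end{align*}
```

Here the Dirichlet weights φ distribute the global variance budget ω across coordinates, which is what lets the prior shrink irrelevant weights aggressively while leaving large signals nearly untouched.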
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Chan, Tsai Hor | - |
| dc.contributor.author | Zhang, Dora Yan | - |
| dc.contributor.author | Yin, Guosheng | - |
| dc.contributor.author | Yu, Lequan | - |
| dc.date.accessioned | 2025-09-17T00:32:01Z | - |
| dc.date.available | 2025-09-17T00:32:01Z | - |
| dc.date.issued | 2025-01-01 | - |
| dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, v. 47, n. 9, p. 7987-8000 | - |
| dc.identifier.issn | 0162-8828 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/361917 | - |
| dc.description.abstract | Bayesian neural networks (BNNs) treat network weights as random variables, aiming to provide posterior uncertainty estimates and to avoid overfitting by performing inference over the posterior weights. However, selecting appropriate prior distributions remains challenging, and BNNs may suffer from catastrophically inflated variance or poor predictive performance when the priors are poorly chosen. Existing BNN designs apply various priors to the weights, but these priors either fail to shrink noisy signals sufficiently or are prone to over-shrinking important signals in the weights. To alleviate this problem, we propose a novel R2D2-Net, which imposes the R<sup>2</sup>-induced Dirichlet Decomposition (R2D2) prior on the BNN weights. The R2D2-Net effectively shrinks irrelevant coefficients towards zero while preventing key features from over-shrinkage. To approximate the posterior distribution of the weights more accurately, we further propose a variational Gibbs inference algorithm that combines Gibbs updating with gradient-based optimization. This strategy enhances the stability and consistency of estimation when the variational objective involving the shrinkage parameters is non-convex. We also analyze the evidence lower bound (ELBO) and the posterior concentration rates from a theoretical perspective. Experiments on both natural and medical image classification and uncertainty estimation tasks demonstrate the satisfactory performance of our method. | - |
| dc.language | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | - |
| dc.subject | Bayesian Neural Network | - |
| dc.subject | Medical Image Analysis | - |
| dc.subject | Shrinkage Priors | - |
| dc.subject | Uncertainty Estimation | - |
| dc.subject | Variational Inference | - |
| dc.title | Feature Preserving Shrinkage on Bayesian Neural Networks via the R2D2 Prior | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/TPAMI.2025.3572766 | - |
| dc.identifier.scopus | eid_2-s2.0-105005961483 | - |
| dc.identifier.volume | 47 | - |
| dc.identifier.issue | 9 | - |
| dc.identifier.spage | 7987 | - |
| dc.identifier.epage | 8000 | - |
| dc.identifier.eissn | 1939-3539 | - |
| dc.identifier.issnl | 0162-8828 | - |
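The "variational Gibbs inference" described in the abstract combines closed-form conditional (Gibbs-style) updates for shrinkage parameters with gradient-based maximization of the ELBO, $\mathcal{L} = \mathbb{E}_q[\log p(y, w)] - \mathbb{E}_q[\log q(w)]$. The sketch below is an illustration only: it applies that alternating scheme to a toy sparse linear model, with a simple normal–inverse-gamma scale mixture standing in for the full R2D2 hierarchy. The model, names, and hyperparameters are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse linear model (a stand-in for a BNN layer): y = X @ beta + noise.
n, p = 200, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]            # only three informative features
sigma2 = 0.25                                # noise variance, assumed known
y = X @ beta_true + np.sqrt(sigma2) * rng.normal(size=n)

# Mean-field variational posterior q(beta) = N(mu, diag(s^2)), s = exp(log_s).
mu = np.zeros(p)
log_s = np.full(p, -2.0)

# Simplified shrinkage prior: beta_j | tau_j ~ N(0, tau_j), tau_j ~ InvGamma(a0, b0).
a0, b0 = 1.0, 1.0
E_inv_tau = np.ones(p)                       # E_q[1 / tau_j]

lr = 1e-3
for _ in range(5000):
    # (1) Gradient step on the ELBO w.r.t. (mu, log_s), reparameterization trick.
    eps = rng.normal(size=p)
    s = np.exp(log_s)
    beta = mu + s * eps                      # one sample from q(beta)
    grad_beta = X.T @ (y - X @ beta) / sigma2 - E_inv_tau * beta
    mu += lr * grad_beta
    # d(entropy)/d(log_s) = 1 per coordinate for a Gaussian.
    log_s = np.clip(log_s + lr * (grad_beta * s * eps + 1.0), -6.0, 1.0)

    # (2) Gibbs-style closed-form update: conjugacy gives
    #     q(tau_j) = InvGamma(a0 + 1/2, b0 + E_q[beta_j^2] / 2).
    E_beta2 = mu**2 + np.exp(2.0 * log_s)
    E_inv_tau = (a0 + 0.5) / (b0 + 0.5 * E_beta2)

print("variational mean (first 5):", np.round(mu[:5], 2))  # ~ [3.0, -2.0, 1.5, 0, 0]
```

The alternation mirrors the idea in the abstract: parameters with conjugate conditionals receive exact coordinate updates, while the remaining variational parameters follow stochastic gradients, which tends to stabilize estimation when the joint objective is non-convex in the shrinkage parameters.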
