Welcome to the Mathematical Institute
- Intranet -


Event: Details

Pavel Izmailov (New York University): What Are Bayesian Neural Network Posteriors Really Like?

Location: MPI für Mathematik in den Naturwissenschaften Leipzig, video broadcast

Video broadcast: Math Machine Learning seminar MPI MIS + UCLA

The posterior over Bayesian neural network (BNN) parameters is extremely high-dimensional and non-convex. For computational reasons, researchers approximate this posterior using inexpensive mini-batch methods such as mean-field variational inference or stochastic-gradient Markov chain Monte Carlo (SGMCMC). To investigate foundational questions in Bayesian deep learning, we instead use full-batch Hamiltonian Monte Carlo (HMC) on modern architectures. We show that:

(1) BNNs can achieve significant performance gains over standard training and deep ensembles;
(2) a single long HMC chain can provide a representation of the posterior comparable to that of multiple shorter chains;
(3) in contrast to recent studies, posterior tempering is not needed for near-optimal performance, and there is little evidence for a "cold posterior" effect, which we show is largely an artifact of data augmentation;
(4) Bayesian model average (BMA) performance is robust to the choice of prior scale, and relatively similar for diagonal Gaussian, mixture-of-Gaussians, and logistic priors;
(5) Bayesian neural networks show surprisingly poor generalization under domain shift; we demonstrate, explain, and provide remedies for this effect;
(6) while cheaper alternatives such as deep ensembles and SGMCMC methods can provide good generalization, their predictive distributions are distinct from HMC's. Notably, deep ensemble predictive distributions are as close to HMC as those of standard stochastic-gradient Langevin dynamics (SGLD), and closer than those of standard variational inference.

References:

I will talk about two of our recent papers:

[1] What Are Bayesian Neural Network Posteriors Really Like? Pavel Izmailov, Sharad Vikram, Matthew D. Hoffman, Andrew Gordon Wilson
[2] Dangers of Bayesian Model Averaging under Covariate Shift. Pavel Izmailov, Patrick Nicholson, Sanae Lotfi, Andrew Gordon Wilson

Other useful references:

[3] Bayesian Deep Learning and a Probabilistic Perspective of Generalization. Andrew Gordon Wilson, Pavel Izmailov
[4] How Good is the Bayes Posterior in Deep Neural Networks Really? Florian Wenzel, Kevin Roth, Bastiaan S. Veeling, Jakub Świątkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin
[5] Bayesian Neural Network Priors Revisited. Vincent Fortuin, Adrià Garriga-Alonso, Florian Wenzel, Gunnar Rätsch, Richard Turner, Mark van der Wilk, Laurence Aitchison
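For orientation, the two quantities behind points (3) and (4) above, in standard notation: the Bayesian model average is the predictive distribution integrated over the parameter posterior, in practice approximated by a Monte Carlo average over posterior samples (e.g. drawn by HMC), and a tempered posterior raises the Bayes posterior to the power 1/T, with T < 1 giving a "cold" posterior:

    p(y \mid x, \mathcal{D}) = \int p(y \mid x, w)\, p(w \mid \mathcal{D})\, dw
                             \approx \frac{1}{M} \sum_{m=1}^{M} p(y \mid x, w_m),
                             \qquad w_m \sim p(w \mid \mathcal{D})

    p_T(w \mid \mathcal{D}) \propto \bigl( p(\mathcal{D} \mid w)\, p(w) \bigr)^{1/T},
                             \qquad T = 1 \text{ recovers the Bayes posterior}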
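Point (6) contrasts HMC with cheaper mini-batch samplers such as SGLD. As a rough, self-contained sketch of that baseline (not the authors' code; sgld_step, bma_predict, and grad_log_post are hypothetical names, with grad_log_post standing in for a mini-batch estimate of the gradient of the log posterior):

    import numpy as np

    def sgld_step(w, grad_log_post, step_size, rng):
        # SGLD update: a half-step of gradient ascent on the log posterior
        # plus Gaussian noise whose variance equals the step size.
        noise = rng.normal(0.0, np.sqrt(step_size), size=w.shape)
        return w + 0.5 * step_size * grad_log_post(w) + noise

    def bma_predict(predict_proba, samples, x):
        # Bayesian model average: mean of the predictive distributions
        # evaluated at posterior samples w_1, ..., w_M.
        return np.mean([predict_proba(w, x) for w in samples], axis=0)

Run for many iterations with a decaying step size, the SGLD iterates approximately sample from the posterior; passing a thinned subset of them to bma_predict gives the mini-batch counterpart of the HMC Bayesian model average discussed in the talk.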


Start: September 16, 2021, 17:00

End: September 16, 2021, 18:30