Posterior normalizing flows
Normalizing flows allow us to construct complex probability distributions by transforming a simple base distribution via a change of variables. If we model the change-of-variables transformation using an invertible neural network with an analytically tractable Jacobian determinant, we can train the network by maximum likelihood to perform density estimation.
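Concretely, writing the flow as $x = f(z)$ with base density $p_Z$ (generic notation introduced here for illustration, not taken from the original), the change of variables gives
$$\log p_X(x) = \log p_Z\!\left(f^{-1}(x)\right) + \log\left|\det \frac{\partial f^{-1}(x)}{\partial x}\right|,$$
and maximum likelihood training maximizes the sum of this quantity over the observed data with respect to the parameters of $f$.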
Such maximum likelihood density estimation is prone to overfitting, particularly when the number of observations is small. Traditional Bayesian approaches offer the prospect of capturing posterior uncertainty, but they come at high computational cost, and their priors, typically placed on network weights, do not offer an intuitive way of incorporating prior knowledge. A nonparametric learning approach instead allows us to combine the observed data with priors specified directly on the space of observations. We present a scalable approximate inference algorithm for posterior normalizing flows and show that the resulting distributions can yield improved generalization and uncertainty quantification.