Bayesian Inference in the Machine Learning Era
Bayesian statistics lies at the heart of inference for the vast majority of tasks in Astrophysics and Cosmology. Over the last 20 years, driven by advances in both computational resources and statistical tools, the field has seen an abundance of new methods that vary in speed, accuracy and computational expense. These can be broadly classified into more traditional methods and machine-learning-accelerated or -enabled ones: examples of the former include nested sampling, Hamiltonian Monte Carlo and approximate Bayesian computation; examples of the latter include emulator-accelerated inference and neural posterior estimation. Since each of these methods approaches the inference problem in a slightly different way, a systematic comparison of their performance across regimes (e.g. high-dimensional data, computationally expensive models, unknown likelihoods, sparse data, …) would be very valuable. I propose to run mock inference tasks with a range of inference methods to identify the advantages and disadvantages of each.
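As a rough illustration of what such a mock inference task could look like, the sketch below runs the same toy problem (inferring the mean of a Gaussian from noisy data) through two hand-written stand-ins for the method families mentioned above: a Metropolis-Hastings sampler as a likelihood-based method and rejection ABC as a likelihood-free one, then compares the recovered posteriors. The toy model, prior range, proposal step size and ABC tolerance are illustrative assumptions, not part of the proposal itself.

```python
# Minimal sketch of a mock inference comparison on a toy problem:
# the same data are analysed with a likelihood-based sampler (Metropolis-Hastings)
# and a likelihood-free method (rejection ABC), and the posteriors are compared.
import numpy as np

rng = np.random.default_rng(0)

# --- Mock data: n draws from N(mu_true, sigma), with sigma assumed known ---
mu_true, sigma, n = 1.5, 1.0, 50
data = rng.normal(mu_true, sigma, size=n)

def log_likelihood(mu):
    return -0.5 * np.sum((data - mu) ** 2) / sigma**2

def log_prior(mu):
    return 0.0 if -10.0 < mu < 10.0 else -np.inf  # flat prior on [-10, 10]

# --- Method 1: Metropolis-Hastings MCMC (stand-in for likelihood-based samplers) ---
def metropolis(n_steps=20000, step=0.3):
    chain = np.empty(n_steps)
    mu = 0.0
    log_post = log_prior(mu) + log_likelihood(mu)
    for i in range(n_steps):
        prop = mu + step * rng.normal()
        log_post_prop = log_prior(prop) + log_likelihood(prop)
        if np.log(rng.uniform()) < log_post_prop - log_post:
            mu, log_post = prop, log_post_prop
        chain[i] = mu
    return chain[n_steps // 2:]  # discard the first half as burn-in

# --- Method 2: rejection ABC (stand-in for likelihood-free methods) ---
def rejection_abc(n_draws=200_000, eps=0.05):
    mu_samples = rng.uniform(-10, 10, size=n_draws)        # draw from the prior
    sims = rng.normal(mu_samples[:, None], sigma, size=(n_draws, n))
    distance = np.abs(sims.mean(axis=1) - data.mean())     # summary statistic: sample mean
    return mu_samples[distance < eps]                      # keep draws that match the data

mcmc_post = metropolis()
abc_post = rejection_abc()
print(f"MCMC: mu = {mcmc_post.mean():.3f} +/- {mcmc_post.std():.3f}")
print(f"ABC : mu = {abc_post.mean():.3f} +/- {abc_post.std():.3f} "
      f"({abc_post.size} accepted samples)")
```

A full study along the lines of the proposal would replace these toy implementations with production tools (e.g. nested sampling, Hamiltonian Monte Carlo, or neural posterior estimation packages) and richer mock problems, and would record wall-clock time, number of likelihood or simulator calls, and posterior accuracy for each method.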