Radev, S., Mertens, U., Voss, A., & Köthe, U. (in press). Towards end-to-end likelihood-free inference with convolutional neural networks. British Journal of Mathematical and Statistical Psychology.
Abstract: Complex simulator-based models with non-standard sampling distributions require sophisticated design choices for reliable approximate parameter inference. We introduce a fast, end-to-end approach to Approximate Bayesian Computation (ABC) based on fully convolutional neural networks (CNNs). The method enables users of ABC to derive the mean and variance of multi-dimensional posterior distributions directly from raw simulated data. Once trained on simulated data, the CNN maps real data samples of variable size to the first two moments of the relevant parameters' posterior distributions. Thus, in contrast to other machine learning approaches to ABC, our approach yields reusable networks that can be applied by different researchers working with the same model. We verify the utility of our method on two common statistical models, namely a multivariate normal distribution and a multiple regression scenario, for which the posterior parameter distributions can be derived analytically. We then apply our method to recover the parameters of the Leaky Competing Accumulator (LCA) model and compare our results to the current state-of-the-art technique, probability density approximation (PDA). Results show that our method exhibits lower approximation error than other machine learning approaches to ABC, and that it outperforms PDA in recovering the parameters of the LCA model.
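The core idea of the abstract — a fully convolutional network that maps a raw simulated sample of any size to the first two posterior moments — can be sketched as follows. This is a hypothetical, NumPy-only forward pass for a single parameter, not the authors' implementation: all layer shapes, weight names, and the exp-transform for the variance are illustrative assumptions, and training (minimizing, e.g., squared error against the known simulating parameters) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    """Valid 1-D convolution over the sample axis, followed by ReLU.

    x: (n, c_in) sample of n observations with c_in channels
    w: (k, c_in, c_out) kernel of width k; b: (c_out,) bias
    """
    k, c_in, c_out = w.shape
    n_out = x.shape[0] - k + 1
    out = np.empty((n_out, c_out))
    for i in range(n_out):
        out[i] = np.tensordot(x[i:i + k], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def forward(sample, params):
    """Map raw draws of shape (n,) -> (posterior mean, posterior variance)."""
    h = conv1d_relu(sample[:, None], params["w1"], params["b1"])  # local features
    h = conv1d_relu(h, params["w2"], params["b2"])
    pooled = h.mean(axis=0)                 # global average pooling: size-invariant
    mean, log_var = pooled @ params["w3"] + params["b3"]  # linear output head
    return mean, np.exp(log_var)            # exp keeps the variance positive

# Illustrative random weights (in practice learned from simulated data).
params = {
    "w1": 0.1 * rng.standard_normal((3, 1, 8)), "b1": np.zeros(8),
    "w2": 0.1 * rng.standard_normal((3, 8, 8)), "b2": np.zeros(8),
    "w3": 0.1 * rng.standard_normal((8, 2)),    "b3": np.zeros(2),
}

# Because only convolutions and global pooling are used, the same network
# accepts data sets of variable size, as described in the abstract:
for n in (50, 200):
    mu_hat, var_hat = forward(rng.standard_normal(n), params)
    print(n, mu_hat, var_hat)
```

The global pooling step is what makes the architecture "fully convolutional" in the abstract's sense: no fixed-size dense layer touches the sample axis, so a network trained once can be reused on data sets of different lengths.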