yeah looks like nice externals, but yet again, all the inference is with MCMC :s
MCMC is the most general approach, but also the most inefficient, so it's got limited appeal for crunching live numbers coming out of a physical system. Some problems are intractable and it's the only way, but really you want to limit the application of MCMC as much as possible, so if you use Church you now have an interoperability problem with binding (LISP) to some ugly but efficient FORTRAN system or some such.
Exactly why I think this initiative will be highly welcome!
So you're probably aware of this, but I think it might be interesting to try to articulate why I fear that extending fully-Bayesian inference algorithms beyond MCMC could be a challenge, from experience with ML-focused Bayesian models and larger datasets:
Generally to make inference for these kinds of models fast, you have to make it approximate.
MCMC has the nice property that, while it's an approximate method, it becomes exact in the limit of infinite samples. Meaning that when applying it in a general setting, the compiler doesn't have to make hard and final decisions about where and how to introduce approximations -- it produces MCMC code and you decide how long you can be bothered to wait for it to converge and how accurate you want the answer to be, based on various diagnostics.
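To make the "exact in the limit" point concrete, here's a minimal sketch of a random-walk Metropolis sampler targeting a toy posterior (a standard normal, so the true mean is 0). Everything here — the target, the step size, the chain lengths — is an illustrative assumption, not anyone's production setup:

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=0.5, seed=0):
    """Minimal random-walk Metropolis sampler (illustrative sketch only)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, p(prop)/p(x)), computed in log space.
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: standard normal posterior (up to a constant), true mean = 0.
log_post = lambda x: -0.5 * x * x

short_chain = metropolis(log_post, x0=3.0, n_samples=200)
long_chain = metropolis(log_post, x0=3.0, n_samples=50_000)
mean_short = sum(short_chain) / len(short_chain)
mean_long = sum(long_chain) / len(long_chain)
# The longer chain's mean is typically far closer to the true value 0;
# the only knob the user turned was how long to wait.
```

The point being: nothing in the sampler had to commit to an approximation up front; accuracy is purely a function of how long you run it, which the user, not the compiler, decides.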
Most other non-MCMC-based approximate inference methods (MAP, EM, Variational Bayes, EP, various hybrids of these and MCMC...) don't converge to the exact answer, they converge to an approximate answer which remains inexact no matter how long you run the algorithm for.
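A tiny worked example of that irreducible error, using a Laplace approximation (fit a Gaussian at the posterior mode) to a skewed toy posterior. The Gamma posterior here is an assumption chosen purely so the exact answer is known in closed form:

```python
import math

# Hypothetical skewed posterior: Gamma(shape=3, rate=1), up to a constant.
a, b = 3.0, 1.0
log_post = lambda x: (a - 1) * math.log(x) - b * x

# Laplace approximation: a Gaussian centred at the mode, with variance set
# by the curvature of log_post there.
mode = (a - 1) / b                 # argmax of log_post = 2.0
curvature = (a - 1) / mode**2      # -(d^2/dx^2) log_post at the mode
laplace_mean = mode                # the Gaussian's mean IS the mode
laplace_var = 1.0 / curvature

true_mean = a / b                  # exact Gamma posterior mean = 3.0
error = true_mean - laplace_mean   # = 1.0 -- a fixed bias from the skew
```

No amount of extra compute shrinks that `error`: the approximating family simply can't represent the skew, which is exactly the difference from MCMC's behaviour above.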
Different approximations have different strengths and weaknesses, and the best choice may depend on the model, the data, what you actually want to do with the posterior (a mode, a mean, an expected loss, predictive means, extreme quantiles?), and what frequentist properties you want for the results of the inference, especially given that this is no longer pure Bayesian inference but some messy approximation to it. Often the only practical way to decide will be to try a bunch of different things and see which does best on the application (or best predicts held-out data), even armed with full human intuition. A compiler doesn't really have a chance here.
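As a toy illustration of "try a bunch of things and score them on held-out data", here's a conjugate Beta-Bernoulli setup (all numbers invented for the example) comparing a MAP plug-in predictive against the full posterior predictive by held-out log score:

```python
import math

# Toy data: Bernoulli observations with a flat Beta(1,1) prior on the rate.
train = [1, 1, 1]        # three successes observed
heldout = [1, 0]         # held-out data happens to include a failure
s, n = sum(train), len(train)

# Candidate 1: MAP plug-in predictive (equals the MLE under a flat prior).
theta_map = s / n                      # = 1.0

# Candidate 2: full posterior predictive, from the Beta(1+s, 1+n-s) posterior.
p_success = (1 + s) / (2 + n)          # = 0.8

def log_score(p, ys):
    """Held-out log predictive density; -inf if an outcome gets probability 0."""
    total = 0.0
    for y in ys:
        q = p if y == 1 else 1 - p
        total += math.log(q) if q > 0 else float("-inf")
    return total

score_map = log_score(theta_map, heldout)    # -inf: MAP rules out failures entirely
score_post = log_score(p_success, heldout)   # finite, so it wins on this data
```

Which candidate wins depends entirely on the data and the loss, which is the point: this kind of empirical bake-off is something a human runs per application, not something a compiler can settle from the model alone.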
In short, it's not going to be easy to automate fully, because it's not something you can decide on formal grounds alone. You have to decide how and where you're willing to approximate, and you have to understand the approximations used and check the resulting approximated posterior against data in order to know whether to trust the results and how to interpret them.