zenna / Omega.jl

Causal, Higher-Order, Probabilistic Programming

Question about scope of Omega.jl #75

Closed: outlace closed this issue 4 years ago

outlace commented 5 years ago

I'm trying to figure out how general Omega is and how it compares to something like Turing.jl or Pyro in Python.

Can I make an arbitrarily complex model using ordinary Julia functions? Can I implement my own distributions?

zenna commented 5 years ago

Yes and yes

There are a few things that distinguish Omega from all other PPLs. Largely they are all about increasing expressiveness: there are lots of things you can't say, or can't say easily, in universal PPLs.

Implementation-wise, Omega takes a different approach from Pyro and Turing: inference in Omega is based on manipulating the random number generator. So basically, if you can write down the generative model in normal Julia, you can use it as an Omega model.
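For instance, a small model is just ordinary Julia. A minimal sketch in the style of the README examples; the exact names and syntax may differ between versions:

```julia
using Omega

# Minimal sketch, assuming the `normal`/`rand` interface shown in the README;
# exact names may differ across Omega versions.
μ = normal(0.0, 1.0)   # prior on a mean
x = normal(μ, 1.0)     # observation model, composed with ordinary Julia code

rand(x)                # forward-sample from the generative model
```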

In the next few weeks I'll be putting out two papers and maybe a blog post. I haven't been making too much noise about Omega yet (despite the JuliaCon talk) because it has very much been research code, but now it is becoming more mature.

So in short, if you can express your problem in Turing then do so. If you can't, Omega might help.

outlace commented 5 years ago

Thanks for the reply!

In the first line you seem to suggest that Pyro/Turing/etc. are universal PPLs and hence that Omega is not, but later on you say Omega can do more than Turing. In what sense is Omega not a universal PPL (or am I misinterpreting)? Can it not handle models with dynamic control flow and recursion?

Also, the method of manipulating the RNG seems novel. Is there an existing body of literature on this method or is this technique described in your upcoming papers?

And would you be able to create probabilistic neural networks, say in Flux.jl (and hence benefit from GPU speedup), and use Omega for inference over the parameters of the network?

zenna commented 5 years ago

Sorry, I miscommunicated. Omega is a universal PPL in the same sense as Turing and Pyro. My point was that "universal" can be misleading, because there are still things that are very difficult (and in some cases actually impossible) to express, even in universal languages. Omega tries to make those things easier (or possible) to express.

Re the random number generator: no, not that I'm aware of. This is the solution I came up with to keep Omega flexible and performant without relying on lots of metaprogramming; it's the reason Omega doesn't need any macros. We will write a paper on that part, but not soon. I'll update the documentation soon to explain it.
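Roughly, the idea is that a random variable is a deterministic function of the primitive randomness ω, and inference works by manipulating ω rather than rewriting the program. A plain-Julia illustration of that idea (not Omega's actual implementation):

```julia
using Distributions

# ω is a fixed vector of primitive uniform draws ("the output of the RNG").
# A random variable is an ordinary Julia function of ω.
normalrv(ω, i, μ, σ) = quantile(Normal(μ, σ), ω[i])   # i-th primitive draw, reparameterised

μrv(ω) = normalrv(ω, 1, 0.0, 1.0)        # prior on the mean
xrv(ω) = normalrv(ω, 2, μrv(ω), 1.0)     # observation; reuses the same ω, so randomness is shared

# Forward sampling: draw a fresh ω and evaluate.
ω = rand(2)
μrv(ω), xrv(ω)

# Crude conditioning by rejection: keep only the ωs whose x lands near the observation.
posterior_μ = [μrv(ω) for ω in (rand(2) for _ in 1:100_000) if abs(xrv(ω) - 2.0) < 0.1]
```

Real inference would manipulate ω more cleverly (e.g. MCMC proposals over ω) rather than brute-force rejection, but the model itself stays plain Julia.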

Re neural networks: yes, but it would be a Bayesian neural network. I've done this and trained it using Hamiltonian Monte Carlo. If you just need a standard network, use Flux.
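Concretely, "Bayesian" here means putting priors on the weights and defining a log-posterior that an HMC sampler can explore. A hedged, plain-Julia sketch (not Omega's API; the network and parameter layout are made up for illustration):

```julia
using Distributions

# Tiny network with one hidden tanh unit; θ = (w1, b1, w2, b2) is a hypothetical layout.
predict(θ, x) = θ[3] * tanh(θ[1] * x + θ[2]) + θ[4]

# Log-posterior: standard-normal priors on the weights plus a Gaussian likelihood on the data.
function logposterior(θ, xs, ys; σ = 0.5)
    lp = sum(logpdf.(Normal(0, 1), θ))
    lp + sum(logpdf(Normal(predict(θ, x), σ), y) for (x, y) in zip(xs, ys))
end

# An HMC sampler (whether Omega's or a standalone package) explores θ using
# `logposterior` and its gradient, yielding a posterior over the network weights.
```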

outlace commented 5 years ago

Thanks for clarifying. Omega seems like a fantastic project. Looking forward to seeing your upcoming papers and blog post.

datnamer commented 5 years ago

This is very cool, @zenna . Are there plans for approximate inference such as ADVI and friends?

Also what are your thoughts on https://github.com/probcomp/Gen ?

zenna commented 5 years ago

I know Marco, who is developing Gen, and it seems cool. My understanding is that their emphasis is on (i) speed and (ii) programmable inference. I've focused on speed too, but I'm not going the static-compilation route, so ultimately they will win out there, though I'm not sure by how much. We're not focusing on programmable inference: you can easily add a new inference algorithm to Omega, but we don't really expect users to (if there ever are any users!). I am interested in radically different ways to do inference, though, and could talk at length about that.

We've talked about ADVI but haven't really got around to thinking about it deeply. If that's your interest, feel free to contribute!