pyro-ppl / pyro

Deep universal probabilistic programming with Python and PyTorch
http://pyro.ai
Apache License 2.0

Updating intro tutorials #1479

Closed · eb8680 closed this 5 years ago

eb8680 commented 6 years ago

The language intro tutorials are somewhat out of step with how Pyro tends to be used in practice, which causes some confusion for users. In particular, we should probably just remove all mentions in Part 2 of importance sampling, marginal distributions, and execution traces.

robsalomone commented 6 years ago

Hi @eb8680, I agree with this. It took me a while to get my head around the way the tutorial explains things (and I'm familiar with VI and IS). I'd recommend a later (advanced) tutorial on importance sampling using a guide. I'm happy to contribute a modified tutorial, plus one on importance sampling with more background, if you'd like.

eb8680 commented 6 years ago

@robsalomone A dedicated tutorial on importance sampling would be awesome, we'd definitely welcome that!

As context for this issue, Part 2 of the intro tutorial is meant to introduce the idea of inference in a universal probabilistic programming language as gently as possible, along the lines of the first three chapters of Probabilistic Models of Cognition. However, when using Pyro in practice we tend to use variational inference and rarely construct marginal distributions explicitly, as is done in WebPPL, so the use of Importance and EmpiricalMarginal in the tutorial is a little misleading.
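For concreteness, the pattern in question looks roughly like this. This is a minimal sketch based on the weighing example from the intro tutorial; the exact model and numbers are illustrative, though `Importance` and `EmpiricalMarginal` are the actual `pyro.infer` APIs:

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import Importance, EmpiricalMarginal

def scale(guess):
    # latent weight with a prior centered on our guess, plus a noisy measurement
    weight = pyro.sample("weight", dist.Normal(guess, 1.0))
    return pyro.sample("measurement", dist.Normal(weight, 0.75))

# The WebPPL-style workflow the tutorial currently teaches: condition on
# an observed measurement, importance-sample execution traces, then build
# an explicit marginal distribution over the latent site.
conditioned = pyro.condition(scale, data={"measurement": torch.tensor(9.5)})
posterior = Importance(conditioned, num_samples=100).run(torch.tensor(8.5))
marginal = EmpiricalMarginal(posterior, sites="weight")
print(marginal.sample())  # one draw from the approximate posterior over weight
```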

Accordingly, I'd planned to remove these references and refocus the tutorial on the idea of guide programs and of inference by optimization. In my experience, an introduction to Pyro along those lines has worked pretty well for people with less background in statistics or probabilistic machine learning, though it may be less appropriate for more knowledgeable users. What parts of the intro tutorials did you find unclear, and what sorts of changes would you like to see?
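For comparison, the guide-and-optimization style I have in mind would look something like this. This is a minimal sketch using the standard `SVI`/`Trace_ELBO` machinery; the particular model and parameter names are just for illustration:

```python
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

def model(measurement):
    weight = pyro.sample("weight", dist.Normal(8.5, 1.0))
    pyro.sample("measurement", dist.Normal(weight, 0.75), obs=measurement)

def guide(measurement):
    # variational parameters, fit by maximizing the ELBO
    loc = pyro.param("loc", torch.tensor(8.5))
    scale = pyro.param("scale", torch.tensor(1.0), constraint=constraints.positive)
    pyro.sample("weight", dist.Normal(loc, scale))

# inference by optimization: each step adjusts the guide's parameters
svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
for _ in range(1000):
    svi.step(torch.tensor(9.5))
```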

robsalomone commented 6 years ago

@eb8680 no worries, I'll write one up and submit it when I get the time (I was planning to do some more general tutorials on a blog).

Regarding the intro tutorial, the diversion into importance sampling was the main stumbling block for me. If the redone version defines key terms more clearly and opens with a "top-down" overview, I think it will come across much better. Most of it did make sense to me in the end, though. I'm happy to take a look at any new drafts and give feedback / ask stupid questions.

I also think a more general overview of the lower-level features (poutine) would be good, but I know you guys are also working on that.

jamestwebber commented 6 years ago

As someone who has been going through the tutorials while learning the framework, I would definitely appreciate this, and I'm willing to help out if needed (probably later this month).

One thing I brought up in a forum post is that the different tutorials use different idioms and code organization. I think a consistent style would help, since it would make it easier to compare two models and see how they differ.

Personally, I prefer the design of the VAE example, in which model and guide are methods of an object that subclasses nn.Module, because it yields a self-contained, reusable inference component. But if there's a more "pyro-ic" way to write models, I'd love to know about it.
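Something like the following, to be concrete. This is an illustrative sketch of the pattern rather than the actual VAE code; the class name and sites are made up:

```python
import torch
import torch.nn as nn
import pyro
import pyro.distributions as dist

class ScaleInference(nn.Module):
    # Bundles model and guide as methods, VAE-example style, so the
    # whole inference component is self-contained and reusable.
    def __init__(self):
        super().__init__()
        self.loc = nn.Parameter(torch.tensor(8.5))
        self.log_scale = nn.Parameter(torch.tensor(0.0))

    def model(self, measurement):
        weight = pyro.sample("weight", dist.Normal(8.5, 1.0))
        pyro.sample("measurement", dist.Normal(weight, 0.75), obs=measurement)

    def guide(self, measurement):
        # register this module's parameters with Pyro's param store
        pyro.module("scale_inference", self)
        pyro.sample("weight", dist.Normal(self.loc, self.log_scale.exp()))
```

You'd then pass `m.model` and `m.guide` to SVI as usual.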

This might mean introducing some boilerplate up front rather than starting out more ad hoc and refactoring the result, but I think it would make the tutorials read more consistently and help new users get into the right habits.