thu-ml / zhusuan

A probabilistic programming library for Bayesian deep learning and generative models, based on TensorFlow
http://zhusuan.readthedocs.io
MIT License

Example request #97

Open romanklis opened 6 years ago

romanklis commented 6 years ago

Dear Creators,

I truly think this is one of the most powerful frameworks for creating graphical models and learning parameters that I've seen so far. Outstanding job.

I would kindly like to request a few more simple, easy-to-follow BN-focused examples, such as the classical sprinkler example (http://web.eecs.utk.edu/~leparker/Courses/CS594-fall09/Lectures/12-Chapter14b-Oct22.pdf), potentially with a hidden-state twist, which would be great in practice.

Best regards

thjashin commented 6 years ago

Thanks for the kind words! Could you be more specific about the example you are requesting? From my understanding, it's a 4-node BN with all discrete variables, and exact inference is available after some tensor computation. What would you like the example to illustrate?
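For concreteness, that tensor computation can be sketched in a few lines of plain NumPy (this is not ZhuSuan API; the CPT values below are the standard textbook ones):

```python
import numpy as np

# CPTs for the classic sprinkler network; index 0 = False, 1 = True
p_c = np.array([0.5, 0.5])                       # P(Cloudy)
p_s_c = np.array([[0.5, 0.5], [0.9, 0.1]])       # P(Sprinkler | Cloudy), rows: Cloudy
p_r_c = np.array([[0.8, 0.2], [0.2, 0.8]])       # P(Rain | Cloudy)
p_w_sr = np.array([[[1.0, 0.0], [0.1, 0.9]],     # P(WetGrass | Sprinkler, Rain)
                   [[0.1, 0.9], [0.01, 0.99]]])  # axes: Sprinkler, Rain, WetGrass

# Full joint P(C, S, R, W) as a 2x2x2x2 tensor in one contraction
joint = np.einsum('c,cs,cr,srw->csrw', p_c, p_s_c, p_r_c, p_w_sr)

# Example query: P(Rain = true | WetGrass = true)
p_rw = joint.sum(axis=(0, 1))          # marginalize Cloudy and Sprinkler
print(p_rw[1, 1] / p_rw[:, 1].sum())   # ~0.708, the textbook answer
```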

romanklis commented 6 years ago

I thought about the following use cases, which would be extremely useful in practice:

(1) A simple 6-node BN with discrete variables, illustrating how to perform parameter learning in such a setup when we are given observations from all nodes except one; as far as I understand, that unobserved node is a latent variable.

Sample 1: [0, 1, 2, NaN, 0, 2]
Sample 2: [0, 0, 1, NaN, 0, 2]
Sample 3: [0, 0, 1, NaN, 0, 1]
and so on...

Once we have the network trained, i.e., we have the complete joint probability distribution, we would like to run a range of inference queries given, again, some incomplete observations.
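To make (1) concrete, here is a minimal sketch of the usual EM recipe on a reduced version of the network: a latent root with two observed children (plain NumPy, all names and numbers illustrative; the 6-node case adds more factors, but the E- and M-steps have the same shape):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth CPTs, used only to simulate data: latent binary Z with
# two observed binary children X1, X2 (X1 and X2 independent given Z).
true_pz = np.array([0.3, 0.7])
true_px1 = np.array([[0.9, 0.1], [0.2, 0.8]])  # P(X1 | Z), rows: Z
true_px2 = np.array([[0.7, 0.3], [0.1, 0.9]])  # P(X2 | Z)

n = 5000
z = rng.choice(2, size=n, p=true_pz)  # never observed: the all-NaN column
x1 = np.array([rng.choice(2, p=true_px1[zi]) for zi in z])
x2 = np.array([rng.choice(2, p=true_px2[zi]) for zi in z])

# Random initialization of the CPT estimates
pz = np.array([0.5, 0.5])
px1 = rng.dirichlet([1.0, 1.0], size=2)
px2 = rng.dirichlet([1.0, 1.0], size=2)

for _ in range(100):
    # E-step: per-sample posterior responsibilities P(Z | X1, X2)
    r = pz * px1[:, x1].T * px2[:, x2].T       # shape (n, 2)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: CPT updates from expected counts
    pz = r.mean(axis=0)
    for x, px in ((x1, px1), (x2, px2)):
        for v in (0, 1):
            px[:, v] = r[x == v].sum(axis=0)
        px /= px.sum(axis=1, keepdims=True)

print(pz)  # recovers true_pz up to label switching of Z
```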

(2) Again, a simple 6-node BN with discrete variables, illustrating how to perform parameter learning in such a setup when the observations we are given are potentially incomplete.

Sample 1: [NaN, 1, 2, 3, 0, 2]
Sample 2: [0, 0, 1, NaN, 0, NaN]
Sample 3: [0, 0, 1, NaN, NaN, 1]
and so on...

As in (1), once the network is trained, i.e., we have the complete joint probability distribution, we would like to run a range of inference queries given, again, some incomplete observations.
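For (2), the core computation is the likelihood of a partially observed sample, obtained by summing the joint table over every NaN slot; learning then maximizes the sum of the logs of these per-sample marginals (the EM sketch above generalizes accordingly). A minimal sketch, with illustrative names:

```python
import numpy as np

def sample_likelihood(joint, sample):
    """P(observed entries), marginalizing the NaN entries of `sample`."""
    idx = tuple(slice(None) if np.isnan(v) else int(v) for v in sample)
    return joint[idx].sum()

# e.g. a random joint table over six variables with 3 states each
rng = np.random.default_rng(0)
joint = rng.dirichlet(np.ones(3 ** 6)).reshape((3,) * 6)

print(sample_likelihood(joint, [0, 0, 1, np.nan, 0, np.nan]))
```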

(3) A simple 6-node BN with discrete variables, illustrating how to combine parameter learning with expert knowledge (a prior adjustment for some particular node) when we are given observations from all nodes except one; as far as I understand, that unobserved node is again a latent variable.
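For (3), one natural reading of "expert knowledge" is a Dirichlet prior on the CPT of the chosen node: in counting-based learning it simply adds pseudo-counts to the observed (or expected) counts. A minimal sketch with made-up numbers, assuming a binary node with one binary parent:

```python
import numpy as np

# Expert belief that P(X=1 | parent=0) is around 0.9, encoded as
# Dirichlet pseudo-counts; the parent=1 row is left uninformative.
expert_pseudo = np.array([[1.0, 9.0],
                          [1.0, 1.0]])

# (Expected) counts gathered from data, rows: parent value
observed_counts = np.array([[3.0, 12.0],
                            [8.0, 2.0]])

# Posterior-mean CPT estimate under the Dirichlet prior
cpt = observed_counts + expert_pseudo
cpt /= cpt.sum(axis=1, keepdims=True)
print(cpt)
```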

I believe this would be highly informative for a not purely technical/academic audience.

Best regards,
Roman

thjashin commented 6 years ago

Thanks for the details. I think an interactive tutorial in a Jupyter notebook would be suitable for this, and indeed we need one. We are currently refactoring the modeling primitives for the 0.4 version, so the plan for this tutorial will come after that.