rlouf / mcx

Express & compile probabilistic programs for performant inference on CPU & GPU. Powered by JAX.
https://rlouf.github.io/mcx
Apache License 2.0

2020/05/27 #30

Closed. rlouf closed this issue 3 years ago.

rlouf commented 4 years ago

With @ericmjl, @rlouf.

Current state

MCX manages its own intermediate representation: it reads the model's source code and translates it into a networkx graph. The graph is compiled into samplers and a logpdf, and it can be inspected and modified dynamically.
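To make this concrete, here is a minimal sketch of the kind of model definition the compiler consumes, assuming the `@mcx.model` decorator and the `<~` sampling operator from the README; the import paths, distribution arguments, and variable names below are illustrative, not a definitive API reference:

```python
import mcx
from mcx.distributions import Exponential, Normal

@mcx.model
def linear_regression(x):
    # Each `<~` statement becomes a node in the graph that MCX builds
    # from the function's source code.
    sigma <~ Exponential(0.3)
    coef <~ Normal(0, 1)
    y <~ Normal(x * coef, sigma)
    return y
```

The same graph is then compiled into samplers and a logpdf, which is what the inference algorithms consume.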

HMC is working; Stan warmup is implemented but is extremely slow.

v0.1

Get HMC + warmup working. Warmup is extremely slow due to JIT compilation and the way inference is currently implemented; reach out to the JAX team to understand how it could be improved.
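To illustrate where the slowness can come from (a generic JAX sketch, not the actual warmup code): driving a jitted step function from a Python loop pays dispatch overhead on every iteration, whereas pushing the whole loop into XLA with `lax.scan` traces and compiles it once.

```python
import jax
import jax.numpy as jnp

def warmup_step(carry, _):
    # Stand-in for one adaptation step (e.g. dual averaging of the step size);
    # the real adaptation logic would live here.
    position, step_size = carry
    return (position + 1.0, step_size * 1.01), step_size

init = (jnp.zeros(()), jnp.asarray(1e-3))

# Slow pattern: a Python loop around a jitted step pays dispatch overhead
# and host/device synchronization on every iteration.
step = jax.jit(lambda carry: warmup_step(carry, None)[0])
carry = init
for _ in range(1000):
    carry = step(carry)

# Faster pattern: the whole loop is traced and compiled once by XLA.
carry, step_sizes = jax.lax.scan(warmup_step, init, xs=None, length=1000)
```

Whether the `scan` pattern applies directly depends on how the adaptation state (step size, mass matrix) is structured in the warmup implementation.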

Empirical HMC should be quick to add and will act as a "turnkey" algorithm. NUTS/dynamic HMC will be included in the next release; the important thing right now is to deliver a robust, user-friendly library.

  1. User experience. This includes no-nonsense integration with ArviZ (help from folks familiar with traces and ArviZ would be appreciated; a rough sketch of such a hand-off follows this list), better feedback when sampling is finished, and more comprehensive and meaningful error messages during compilation. Users should be given clues about what to do when an error is raised, to avoid unnecessary back-and-forth with the docs.
  2. Robustness. This means more useful tests (without bloating the test suite) and more static checks at compilation (syntax + maybe shapes?). I should feel comfortable using it in my current job. Examples should be part of the test suite to prevent important regressions.
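As a rough sketch of the kind of ArviZ hand-off item 1 refers to (the trace layout and variable names are placeholders, not MCX's actual trace format), `arviz.from_dict` accepts chain-major arrays:

```python
import arviz as az
import numpy as np

# Hypothetical posterior samples with shape (n_chains, n_draws);
# the variable names are placeholders.
posterior = {
    "coef": np.random.randn(4, 1000),
    "sigma": np.abs(np.random.randn(4, 1000)),
}

idata = az.from_dict(posterior=posterior)
print(az.summary(idata))
az.plot_trace(idata)
```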

HMC + eHMC + warmup + user-friendliness + robustness is the library's MVP.

Beyond v0.1

Next steps:

rlouf commented 3 years ago

@ericmjl I don't know if you still have time to work on this, especially automating the doc builds. If not, no worries; I just need to know whether to include it in my own roadmap.

ericmjl commented 3 years ago

@rlouf I realized I'm quite a bit hamstrung here: there are a few moving parts in the readthedocs build that I tried to debug a while ago but couldn't figure out. I also realize I'm getting more comfortable with mkdocs than with Sphinx; are you willing to try out mkdocs instead? If not, please include the docs build on your own roadmap: with my wife's due date coming, I am winding down all around.

rlouf commented 3 years ago

I have to admit that using Markdown lowers the barrier to entry for people who want to contribute to the docs, which is a good thing. On the other hand, I'm not willing to compromise on NumPy-style docstrings.
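For reference, a NumPy-style docstring (a generic, hypothetical example, not taken from MCX) looks like this:

```python
def hmc(logpdf, num_samples, step_size):
    """Draw samples with Hamiltonian Monte Carlo.

    Parameters
    ----------
    logpdf : callable
        Log-probability density of the target distribution.
    num_samples : int
        Number of samples to draw.
    step_size : float
        Integrator step size.

    Returns
    -------
    numpy.ndarray
        The drawn samples, one row per sample.
    """
```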

So it boils down to which is more comprehensive and stable: the Markdown plugin for Sphinx, or NumPy-docstring support in mkdocs. Is PyMC3 going to use mkdocs in the end? Either way, I'd be interested to know what motivated the choice.

That being said I don't want this to get in the way of enjoying the next few months. There will still be many opportunities to contribute after that :)

ericmjl commented 3 years ago

> Is PyMC3 going to use mkdocs in the end? Either way, I'd be interested to know what motivated the choice.

PyMC3/4 still uses Sphinx-style docs. In my case, I've moved the docs on my projects over to mkdocs, with the exception of pyjanitor, which is still on Sphinx because of all the interconnectivity with other projects there. It is on my radar to move the pyjanitor docs to mkdocs, though, and to do a complete manual rewrite with a few of the devs during an afternoon hack in the winter.

On jax-unirep, which I built with my intern @ElArkk, we have API docs for a select subset of functions, and they're much easier to write because everything is in Markdown. (The combination of tools we used is mkdocs + mkapidocs.)

Overall, the mkdocs authors' refusal to add unnecessary features to the core is, I think, a net positive: there's a composable set of extensions that covers 90% of what we would want. Granted, Sphinx + RST is very, very powerful!

rlouf commented 3 years ago

I found a way to build docs on push to master and deploy to gh-pages. This will do the job for now. Closing this issue as there are no outstanding items. Thanks for your help!