apache / mxnet


Probability Distributions Support #12932

Open · thomelane opened this issue 5 years ago

thomelane commented 5 years ago

Probability Distributions Support

It would be great to have out-of-the-box support for probability distributions, similar to the functionality provided by TensorFlow Probability and PyTorch Distributions. My current use case is Reinforcement Learning algorithms that learn stochastic action policies (i.e. they learn the parameters of a distribution from which actions are sampled), and I update these parameters using the likelihood.

MXNet would ideally have methods on each type of distribution for calculating:

- log probability (and probability density/mass) of an arbitrary data point
- entropy of the distribution
- KL divergence between two distributions
- samples drawn from the distribution

And would support a variety of distributions including:

- Uniform, Normal (univariate and multivariate), Exponential
- Gamma, Poisson, Negative Binomial, Dirichlet, Multinomial

MXFusion is a related project but doesn't have the functionality mentioned above, and it would be ideal to have this as a submodule of the MXNet package itself.
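
For concreteness, this is roughly what the requested interface looks like in PyTorch Distributions today (a short illustration of the target API, not MXNet code):

```python
import torch
from torch.distributions import Normal

# A Gaussian policy whose mean is a learnable parameter.
policy = Normal(loc=torch.zeros(1, requires_grad=True), scale=torch.ones(1))

action = policy.sample()        # draw an action (non-differentiable step)
logp = policy.log_prob(action)  # differentiable log-likelihood of that action
ent = policy.entropy()          # closed-form entropy of the distribution
```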

thomelane commented 5 years ago

@mxnet-label-bot [Feature Request, Gluon, Operators]

anirudhacharya commented 5 years ago

Some of them are here - https://mxnet.incubator.apache.org/api/python/ndarray/random.html?highlight=mxnet.ndarray.random#random-distribution-generator

KL Loss - https://mxnet.incubator.apache.org/api/python/gluon/loss.html?highlight=kl#mxnet.gluon.loss.KLDivLoss

But yes, we do need a probability submodule within MXNet.

thomelane commented 5 years ago

Cheers @anirudhacharya, but the functionality I'm referring to is not covered by those references.

I'm talking about functionality beyond just sampling from distributions: calculation of the probability density, the log probability of a data sample under the distribution, the entropy of the distribution, etc. And as far as I'm aware, the KL loss in MXNet (KLDivLoss) only works with samples rather than computing the theoretical KL divergence between two distributions, so it is also insufficient for certain use cases.
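
For reference, that theoretical KL divergence has a closed form in simple cases; here is a minimal sketch for two univariate Gaussians built on existing NDArray ops (`kl_normal` is a hypothetical helper, not an MXNet API):

```python
import mxnet as mx

def kl_normal(mu_p, sig_p, mu_q, sig_q):
    """Closed-form KL(N(mu_p, sig_p^2) || N(mu_q, sig_q^2)), elementwise."""
    return (mx.nd.log(sig_q / sig_p)
            + (sig_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * sig_q ** 2)
            - 0.5)
```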

You can't backpropagate gradients through samples, which is why it's important to have such formulas (e.g. log probability) implemented: policy-gradient methods differentiate the log probability of the sampled action rather than the sample itself.

I can see a single case of a probability being returned by mxnet.ndarray.random.multinomial, but it is only computed for the sampled data point, not for an arbitrary data point, which is what's required.
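
To illustrate the workaround this currently forces, here is a minimal sketch with a hand-rolled Gaussian log-density (`gaussian_log_prob` is hypothetical, not an existing MXNet function): gradients with respect to the policy parameters flow through the log-probability, not through the non-differentiable sampling step.

```python
import math
import mxnet as mx
from mxnet import autograd

def gaussian_log_prob(x, mu, sigma):
    """log N(x; mu, sigma^2), differentiable w.r.t. mu and sigma."""
    return (-0.5 * math.log(2.0 * math.pi) - mx.nd.log(sigma)
            - 0.5 * ((x - mu) / sigma) ** 2)

# Learnable policy parameters.
mu, sigma = mx.nd.array([0.0]), mx.nd.array([1.0])
mu.attach_grad()
sigma.attach_grad()

# Sample an action outside the autograd scope (no gradient flows through it).
action = mx.nd.random.normal(loc=mu, scale=sigma)

with autograd.record():
    # Score-function (REINFORCE-style) surrogate loss.
    loss = -gaussian_log_prob(action, mu, sigma)
loss.backward()
print(mu.grad, sigma.grad)
```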

eric-haibin-lin commented 5 years ago

+1 on this feature

asmushetzel commented 5 years ago

We also have a need for this as part of our project. I have a local version for computing PDF and LOG_PDF, including the forward/backward pass (i.e. gradients for all parameters and samples), for the following distributions: uniform, normal, exponential, gamma, Poisson, negative binomial, and Dirichlet. All are coded as C++ operators and work on CPU and GPU. But it would take some more effort to make them clean enough to commit; I'll have to see when I find the time. Naturally we could extend this to support CDFs etc.

Regarding more complex distributions like multivariate Gaussians, all the necessary basic functionality already exists in MXNet's linalg namespace; I have plugged it together in Python in a couple of lines. On top of that I have built quite a bit more that allows constructing complex things (Gaussian mixtures etc.).

We are using all of this in a specific project and will likely not have the time in the near future to polish it to the point where we can contribute it back. But if other interested parties are willing to join the effort, we can collaborate.
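
To give a flavour of that "couple of lines" approach, here is a rough sketch of a multivariate Gaussian log-density built from the existing linalg operators (illustrative only, not asmushetzel's actual code; `mvn_log_pdf` is a hypothetical name):

```python
import math
import mxnet as mx

def mvn_log_pdf(x, mu, sigma):
    """log N(x; mu, sigma) for x, mu of shape (n, 1) and SPD sigma of shape (n, n)."""
    n = sigma.shape[-1]
    chol = mx.nd.linalg.potrf(sigma)               # lower Cholesky factor L, sigma = L L^T
    log_det = 2.0 * mx.nd.linalg.sumlogdiag(chol)  # log|sigma| = 2 * sum(log(diag(L)))
    z = mx.nd.linalg.trsm(chol, x - mu)            # solve L z = x - mu
    maha = (z * z).sum()                           # (x - mu)^T sigma^{-1} (x - mu)
    return -0.5 * (n * math.log(2.0 * math.pi) + log_det + maha)
```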

anirudhacharya commented 5 years ago

FYI - https://github.com/amzn/MXFusion

eric-haibin-lin commented 5 years ago

@asmushetzel this is awesome stuff. Really looking forward to seeing it contributed back in the future. Do you have a distribution like https://www.tensorflow.org/api_docs/python/tf/initializers/truncated_normal?

asmushetzel commented 5 years ago

We are talking with the MXFusion people. They don't have the PDFs mentioned here coded and are not planning to do so. Concerning truncated_normal above, I think that request is primarily about a sampler (though we should provide a PDF/PMF for every sampler we support in MXNet anyway). Building such a sampler should be a small amount of work when slotted into the framework of the already existing ones (normal, uniform, gamma, etc.); see the sketch below. I will see whether I can get some resources.
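
Until such an operator exists, one naive Python-level workaround is to rejection-sample from the existing normal sampler (a sketch under that assumption; `truncated_normal` below is a hypothetical helper, not the proposed C++ operator):

```python
import mxnet as mx

def truncated_normal(shape, loc=0.0, scale=1.0, bound=2.0, max_tries=100):
    """Resample out-of-range draws until all lie within `bound` standard deviations."""
    out = mx.nd.random.normal(loc=loc, scale=scale, shape=shape)
    for _ in range(max_tries):
        mask = mx.nd.abs(out - loc) > bound * scale  # 1.0 where out of range
        if mask.sum().asscalar() == 0:
            break
        fresh = mx.nd.random.normal(loc=loc, scale=scale, shape=shape)
        out = mx.nd.where(mask, fresh, out)          # keep in-range draws, retry the rest
    return out
```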

asmushetzel commented 5 years ago

PR #14579 will bring in log-pdf/pdf of almost all distributions mentioned above.

asmushetzel commented 5 years ago

For technical reasons, PR #14579 has been superseded by a new PR, #14617.

yulinliu101 commented 3 years ago

It seems that the solution from PR #14617 is not sufficient to implement fully functional Gaussian policies in continuous-action RL tasks.

However, MXNet 2.0 Alpha has probability-distribution support similar to the TensorFlow and PyTorch distribution modules. It would be great if we could merge these implementations from 2.0 back into 1.x, since there is still a large group of users on MXNet 1.x.
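
For readers on 2.0, a minimal usage sketch of that module (mxnet.gluon.probability, as of the 2.0 alpha; not available in 1.x, and the exact API may still change):

```python
import mxnet as mx
from mxnet.gluon.probability import Normal

# A Gaussian action policy: sample an action, then score it.
policy = Normal(loc=mx.np.array([0.0]), scale=mx.np.array([1.0]))
action = policy.sample()        # draw an action from the policy
logp = policy.log_prob(action)  # differentiable log-likelihood for policy gradients
```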