aesara-devs / aeppl

Tools for an Aesara-based PPL.
https://aeppl.readthedocs.io
MIT License

Add Frechet `RandomVariable` #212

Open bwengals opened 2 years ago

bwengals commented 2 years ago

Description of your problem or feature request

I'd like to request adding a Frechet random variable implementation, parameterized the same way as in Stan. It's another name for the inverse Weibull, which is available in SciPy as `scipy.stats.invweibull`. The use case is penalized complexity (PC) priors for lengthscale parameters in Gaussian process models.
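
To make the relationship concrete, here's a quick numerical check (illustrative only, not part of the proposal) showing that inverting Weibull draws reproduces SciPy's `invweibull`:

```python
# Sanity check: if X ~ Weibull(alpha), then 1/X follows the Frechet
# (inverse Weibull) distribution with the same shape parameter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 2.5
draws = 1.0 / rng.weibull(alpha, size=100_000)

# The empirical CDF of the inverted draws should match scipy.stats.invweibull.
for q in (0.5, 1.0, 2.0):
    print(q, (draws <= q).mean(), stats.invweibull(alpha).cdf(q))
```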

So I'm proposing this as an enhancement, and I'd be happy to make a PR for it. I'd like it to be supported in AePPL too.

brandonwillard commented 2 years ago

We've converged on the notion of only implementing the core functionality and the `numpy.random.*` implementations in this repository, so we might need to move this to AePPL, or actually start working on a separate, larger Aesara SciPy project.

bwengals commented 2 years ago

Thanks for answering.

@ricardoV94 you're saying AePPL will derive the logprob of the inverse Weibull from the logprob of the Weibull? Very cool!

@brandonwillard Either of those two choices sounds good to me. I just subclassed `ScipyRandomVariable`, so it shouldn't be too tricky to add; or I can file it away until there's an Aesara SciPy project.
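
A subclass along those lines might look roughly like the following (a hypothetical sketch: the import path, parameter choices, and the `rng_fn_scipy` hook usage are assumptions, not code from this thread):

```python
# Hypothetical sketch; import path and base-class hooks are assumed.
import scipy.stats as stats
from aesara.tensor.random.basic import ScipyRandomVariable


class FrechetRV(ScipyRandomVariable):
    name = "frechet"
    ndim_supp = 0          # scalar-valued draws
    ndims_params = [0, 0]  # shape (alpha) and scale (sigma), as in Stan
    dtype = "floatX"
    _print_name = ("Frechet", "\\operatorname{Frechet}")

    @classmethod
    def rng_fn_scipy(cls, rng, alpha, sigma, size):
        # SciPy's invweibull *is* the Frechet; `c` is the shape parameter.
        return stats.invweibull.rvs(
            c=alpha, scale=sigma, size=size, random_state=rng
        )


frechet = FrechetRV()
```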

brandonwillard commented 2 years ago

> But I also suggested there might be no good reason to implement a specific `Op` for the Frechet, unless the SciPy RNG is doing something fancy that is not achieved by just taking draws from a Weibull and inverting them.

Yes, that will always be the preferred route in cases like this, but, as you mention, we might not get complete parity with NumPy, at least in terms of performance and/or seeding and sample results. Such parity concerns are one of the reasons I don't want to go too far with our `scipy.stats` coverage: at some point along that path we're simply rolling our own implementations, and that requires considerably more testing.

rlouf commented 2 years ago

When it comes to probability distributions, my vote goes to AePPL.

Also, `at.inv(weibull)` is nice, but we should still have a way to give those a type so we can reason about them in AeMCMC.

rlouf commented 2 years ago

Would you say we should drop `LogNormal`, since it's just an exponentiated `Normal`? And `Binomial`? And `InvGamma`?

The choice of what we name and what we leave as a function of another RV is somewhat arbitrary, and I just wanted to point out that we don't have a criterion to decide yet.

We can either decide that whatever statisticians named, they named for a reason, and provide the largest set of named distributions possible; or we can decide to only keep a subset of fundamental distributions from which everything else can be derived. These two propositions are not incompatible, since we can always give a name to e.g. `at.inv(weibull)` while keeping the smallest possible set of logprob/forward-sampling implementations, as in the sketch below.
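
For example, "giving a name" could be as light as a helper that returns the derived graph (`frechet` here is an illustrative helper, not an existing API):

```python
import aesara.tensor as at


def frechet(alpha, size=None, rng=None):
    """Named Frechet constructor that just builds the derived graph."""
    # at.inv is the elementwise reciprocal, so this is 1 / Weibull(alpha).
    return at.inv(at.random.weibull(alpha, size=size, rng=rng))
```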

rlouf commented 2 years ago

> I think the criterion will ultimately be: does the specialized `Op` produce significantly faster/more reliable draws (or logp graphs) than the derived one?

I agree for the draws and the logprob part. Is there a way we could define an `Op` such that `at.inv(weibull)` is used when conditioning on this RV? (That's vague, but I hope you get the idea.) That's something that could be done in AePPL, where we should have a wrapper for the RVs implemented in Aesara (and add other RVs) anyway.

rlouf commented 2 years ago

Implement a Frechet `Op`, say in AePPL, so that the following graph:

```python
x_rv = aeppl.random.Frechet(1, 1, 1)
```

is equivalent to:

```python
x_rv = at.inv(at.random.weibull(1, 1))
```

as far as sampling and logprob are concerned. An `OpFromGraph` of some sort, I suppose.
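
Something along these lines, perhaps (a rough sketch of the idea only; whether `OpFromGraph` handles the implicit RNG shared variable cleanly is exactly the kind of issue raised below):

```python
import aesara.tensor as at
from aesara.compile.builders import OpFromGraph

# Build the derived graph once, with the shape parameter as an explicit input...
alpha = at.scalar("alpha")
inner = at.inv(at.random.weibull(alpha))

# ...then wrap it so a single "Frechet" node appears in user graphs.
Frechet = OpFromGraph([alpha], [inner])
x_rv = Frechet(1.0)
```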

brandonwillard commented 1 year ago

> Implement a Frechet `Op`, say in AePPL, so that the following graph:
>
> ```python
> x_rv = aeppl.random.Frechet(1, 1, 1)
> ```
>
> is equivalent to:
>
> ```python
> x_rv = at.inv(at.random.weibull(1, 1))
> ```
>
> as far as sampling and logprob are concerned. An `OpFromGraph` of some sort, I suppose.

This can just be a helper function now that log-densities can be derived from `at.inv`.
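
I.e., something like the following (a sketch, assuming AePPL's dict-based `joint_logprob` interface):

```python
import aesara.tensor as at
from aeppl import joint_logprob

# The "Frechet" is just the reciprocal of a Weibull in the graph...
x_rv = at.inv(at.random.weibull(2.0))
x_vv = at.scalar("x")

# ...and AePPL can derive its log-density through the reciprocal transform.
logp = joint_logprob({x_rv: x_vv})
```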

rlouf commented 1 year ago

So will we match `at.inv(weibull)` directly in our rewrites? And are we OK not having a `FrechetRV` object we can directly manipulate and represent in the graph? Then what about `LogNormalRV`, for instance?

Or should we have an `RVFromGraph` equivalent of `OpFromGraph` for these derived distributions?

brandonwillard commented 1 year ago

> So will we match `at.inv(weibull)` directly in our rewrites? And are we OK not having a `FrechetRV` object we can directly manipulate and represent in the graph? Then what about `LogNormalRV`, for instance?

We should be able to identify `at.inv(weibull)` sub-graphs quite easily, especially after they're rendered measurable via the rewrites. I would like to do the same for `LogNormalRV`, too, but, since that one has an explicit NumPy implementation, it's on the fence (e.g. because we want to reproduce the samples from NumPy exactly and consistently).

We can always canonicalize/normalize these kinds of non-"atomic" forms for cases like `LogNormalRV`, but it's a little harder to justify doing the same for other cases like this one.
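
For illustration, such a recognition rewrite might be shaped like this (a loose sketch: the module paths reflect newer Aesara layouts, and `frechet_rv` stands in for a hypothetical dedicated constructor):

```python
from aesara.graph.rewriting.basic import node_rewriter
from aesara.tensor.math import reciprocal
from aesara.tensor.random.basic import WeibullRV


# Stand-in for a dedicated Frechet constructor (e.g. the FrechetRV sketched
# earlier in the thread); purely hypothetical.
def frechet_rv(alpha, size, rng):
    raise NotImplementedError("hypothetical Frechet Op goes here")


@node_rewriter([reciprocal])
def local_inv_weibull_to_frechet(fgraph, node):
    """Recognize `at.inv(weibull(alpha))` sub-graphs and canonicalize them."""
    (arg,) = node.inputs
    if arg.owner and isinstance(arg.owner.op, WeibullRV):
        # RandomVariable nodes carry (rng, size, dtype, *dist_params).
        rng, size, dtype, alpha = arg.owner.inputs
        return [frechet_rv(alpha, size=size, rng=rng)]
    return None
```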

> Or should we have an `RVFromGraph` equivalent of `OpFromGraph` for these derived distributions?

We could always do that, but I'm inclined to reserve uses of `OpFromGraph` until we've ironed out some of the usage issues (e.g. shared-variable handling) and efficiency issues (e.g. merging, inner-graph cloning) surrounding it.