This would also be very useful for variational inference.
I think we need a "Taking VI seriously" kind of issue, to discuss all the requirements of VI from the rest of Turing. Supporting SGD, this issue, and the Bijectors issue are all things that can and should be resolved with enough hours behind them. @torfjelde when you feel that you have a sufficiently panoramic view of VI algorithms and how to support them fully in Turing, consider starting an issue discussing your thoughts.
Just giving my two cents here, but if VI should be included in Turing, which I think is really a great idea, then I think it should be based on f-divergences instead of only the ELBO (a small subset of the former). This way Turing would move ahead of much of the competition in the UPPL space.
I think it would be interesting to have a way to plug in whichever divergence you like, e.g. the KL or beta divergence, but I don't know how easy such a framework would be to realise.
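To make the idea concrete, here is a minimal sketch in plain Julia (not an existing Turing API; `estimate_fdiv`, `kl_gen`, and `hellinger_gen` are made-up names for illustration) of how a pluggable divergence could look: the objective is parameterised by the generator function f of an f-divergence, so the KL divergence is just one choice among many.

```julia
using Statistics: mean

# f-divergence: D_f(P || Q) = E_{x ~ q}[ f(p(x) / q(x)) ], for convex f with f(1) = 0.
kl_gen(t)        = t * log(t)       # recovers KL(P || Q)
hellinger_gen(t) = (sqrt(t) - 1)^2  # recovers the squared Hellinger distance (up to convention)

# Monte Carlo estimate of D_f(P || Q) from samples xs drawn from q,
# given log-density functions for p and q.
function estimate_fdiv(f, logp, logq, xs)
    mean(f(exp(logp(x) - logq(x))) for x in xs)
end

# Example usage with Distributions.jl:
# using Distributions
# p, q = Normal(1, 1), Normal(0, 1)
# estimate_fdiv(kl_gen, x -> logpdf(p, x), x -> logpdf(q, x), rand(q, 10_000))
```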
I think we could potentially introduce two types of `VarInfo` that perform "storing the log density for each `~` statement" and the current behaviour (aggregation) separately. For the first type, we could define `vi.logp` to sum all the individual log density values.
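A rough, hypothetical sketch of what that first flavour could look like (`PerStatementLogp` and the accessor names below are illustrative, not the actual `VarInfo` implementation):

```julia
# Hypothetical sketch: a VarInfo flavour that keeps one log-density entry
# per `~` statement instead of a single accumulated value.
struct PerStatementLogp
    logps::Dict{Symbol,Float64}   # key: name of the ~ statement / observation
end
PerStatementLogp() = PerStatementLogp(Dict{Symbol,Float64}())

# Each observe/assume stores its own contribution under its name ...
acclogp!(vi::PerStatementLogp, name::Symbol, lp::Real) = (vi.logps[name] = Float64(lp); vi)

# ... and the summed-up value recovers the current aggregated behaviour.
getlogp(vi::PerStatementLogp) = sum(values(vi.logps))
```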
Closed via #997.
We are currently not able to compute the log probability `p(x_i | theta)` for each observation in Turing. Instead, we always compute `sum_i log p(x_i, theta)`, which makes a lot of sense from an inference point of view. However, adding this functionality would allow:

I started discussing this with @mohamed82008 and we had a few ideas on how to approach this, all of which seem rather hacky in my opinion.
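For concreteness, here is a minimal illustration with plain Distributions.jl (no Turing internals; the model and values are made up) of the quantity in question: the pointwise terms versus the single sum that model evaluation currently accumulates.

```julia
using Distributions

theta = 0.5
x = [0.3, -1.2, 0.8]

# What we would like to be able to extract: log p(x_i | theta) per observation ...
pointwise = [logpdf(Normal(theta, 1.0), xi) for xi in x]

# ... versus what model evaluation currently gives us, one aggregated value.
total = sum(pointwise)
```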
Here is an alternative proposal:
One of the issues (from what I see) is that we currently do not have a `VarName` for observe statements. However, we could easily extend the compiler to generate those, allowing us to pass a `VarName` object to each observe statement.

Further, we would either:
1) need to change the way we manipulate the `vi.logp` field, as this could now be a `Float64` in the case of aggregation or a `Vector{Float64}` in the case of log probability values for each observation, or
2) store the logp values for each observation inside the `VarInfo`, i.e. similar to the parameter values, and treat `logp` the way we do it now.

The first option would require us to additionally write a tailored sampler that computes only the log pdf and not the log joint. This is easy but perhaps unnecessary overhead, and it would require re-evaluating the model at each iteration in the case of model selection.
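A rough, hypothetical sketch of option 1 (illustrative type and function names, not actual Turing code): the `logp` field changes type depending on whether we aggregate or keep one value per observation.

```julia
# Option 1 (sketch): logp is either an aggregated Float64 or a vector of
# per-observation log probabilities, depending on how the model is evaluated.
mutable struct VarInfoOption1
    logp::Union{Float64, Vector{Float64}}
end

# Aggregation mode accumulates into the scalar; per-observation mode pushes
# each contribution separately.
function acclogp!(vi::VarInfoOption1, lp::Float64)
    if vi.logp isa Float64
        vi.logp += lp
    else
        push!(vi.logp, lp)
    end
    return vi
end
```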
If we go for option 2 (which is similar to what a user can do in Stan), we would store the aggregated log joint in `logp` and, in addition, the log pdf values in the `VarInfo`. This additional storing of the logp values for each observation would be disabled by default and could be enabled by setting a kwarg. In contrast to option 1, this one would be memory intensive if we aim to compute the model evidence, which could be prevented by re-evaluating the model at each iteration (similar to option 1).
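And a corresponding hypothetical sketch of option 2 (again, illustrative names only; the `store_pointwise` kwarg is made up): the aggregated log joint is kept as it is now, and per-observation values are stored alongside it only when the feature is switched on.

```julia
# Option 2 (sketch): keep the aggregated logp as today and, optionally,
# also record each observation's log pdf keyed by a VarName-like name.
mutable struct VarInfoOption2
    logp::Float64
    pointwise::Union{Nothing, Dict{Symbol,Float64}}  # `nothing` unless enabled
end
VarInfoOption2(; store_pointwise = false) =
    VarInfoOption2(0.0, store_pointwise ? Dict{Symbol,Float64}() : nothing)

function observe!(vi::VarInfoOption2, name::Symbol, lp::Float64)
    vi.logp += lp                                            # current behaviour
    vi.pointwise === nothing || (vi.pointwise[name] = lp)    # extra, kwarg-gated storage
    return vi
end
```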
I think option 2 might be the more convenient of the two.
(cc'ing @yebai @xukai92 @willtebbutt @ablaom )