Open eb8680 opened 6 years ago
Great idea. (Related: make importance sampling parallel.)
This is possible only when the underlying computation is batchable, and it can sometimes be hard to tell whether that is the case. Is it possible to dynamically detect a batching failure and bail out to the serial method?
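A minimal sketch of that detect-and-bail-out idea, independent of Pyro's internals: try one vectorized call over the whole batch and, if it raises, fall back to running each element serially. The function name and the list-in/list-out convention here are hypothetical, purely for illustration.

```python
def run_batched_or_serial(fn, inputs):
    """Hypothetical helper: attempt one batched call, else go serial.

    `fn` maps a list of inputs to a list of outputs. Any exception from
    the batched call is treated as a batching failure.
    """
    try:
        # Fast path: a single vectorized call over the whole batch.
        return fn(inputs)
    except Exception:
        # Slow path: the computation was not batchable; run one at a time.
        return [fn([x])[0] for x in inputs]
```

In practice one would likely restrict the `except` clause to shape/broadcasting errors rather than catching everything, so real bugs still surface.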
`Search` was removed in #1044. As I understand, `Search` would be reimplemented as part of this issue.
@eb8680 Does this issue also cover parallelization / batch support for SVI? I'd like to implement batched inference to expose our EM solvers (BP + Newton).
> Does this issue also cover parallelization / batch support for SVI?

What did you have in mind?
@eb8680

> What did you have in mind?

I've written some details in #1213.
@eb8680 is this still on track for our 0.3 release with PyTorch 1.0?
> is this still on track for our 0.3 release with PyTorch 1.0?

We don't have an immediate use case, so probably not.
Along the lines of #791, we should allow global independence dimensions in `Marginal` and `TracePosterior`.
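One way to picture a global independence dimension: the posterior carries a batch axis along which everything is independent, so marginal computations reduce only over the support axis and vectorize over the batch axis. A minimal NumPy sketch of that idea (illustrative only, not Pyro's `Marginal` API; the weights and values are made up):

```python
import numpy as np

# Log-weights for 3 independent posteriors (batch dim 0)
# over a shared discrete support of 4 points (dim 1).
log_weights = np.log(np.array([
    [0.10, 0.20, 0.30, 0.40],
    [0.25, 0.25, 0.25, 0.25],
    [0.40, 0.30, 0.20, 0.10],
]))
values = np.array([0.0, 1.0, 2.0, 3.0])

# Normalize per batch element: each row is an independent posterior.
probs = np.exp(log_weights - log_weights.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Batched marginal expectation: one mean per independent batch element,
# computed in a single vectorized reduction over the support axis.
means = probs @ values
```

The point is that the reduction runs over axis 1 only, so adding more independent batch elements along axis 0 costs one larger vectorized operation rather than a Python loop.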