I think `pyro.contrib.oed` has deviated from best practice for doing Monte Carlo estimation in Pyro. Let's look at how I currently obtain multiple, independent samples from a model.

First:
```python
def lexpand(A, *dimensions):
    """Expand tensor, adding new dimensions on left."""
    return A.expand(tuple(dimensions) + A.shape)
```
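For concreteness, here is what `lexpand` does to the shape of a hypothetical 3×2 `design` tensor (the values are not copied, since `torch.Tensor.expand` returns a broadcasted view):

```python
import torch

def lexpand(A, *dimensions):
    """Expand tensor, adding new dimensions on left."""
    return A.expand(tuple(dimensions) + A.shape)

design = torch.zeros(3, 2)           # hypothetical 3x2 design
print(lexpand(design, 5).shape)      # torch.Size([5, 3, 2])
print(lexpand(design, 5, 4).shape)   # torch.Size([5, 4, 3, 2])
```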
Then, in `eig.py`:
```python
# Take N samples of the model
expanded_design = lexpand(design, N)  # N copies of the model
trace = poutine.trace(model).get_trace(expanded_design)
```
What's the point of this versus something like `EmpiricalMarginal`? This approach exploits tensorization and lets us run the simulations in parallel: in practice it is much faster than running N simulations of the model in series (e.g. by creating N separate traces). Another appealing feature is control over the shape of the output tensor: if I want N×M samples in a grid (e.g. to sum over one dimension and do something else on the other) I just call `lexpand(design, N, M)`.
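A minimal sketch of the N×M grid idea, using a toy normal likelihood in plain torch rather than the actual OED model (the names `N`, `M`, and the reduction over the M dimension are illustrative assumptions):

```python
import torch

def lexpand(A, *dimensions):
    """Expand tensor, adding new dimensions on left."""
    return A.expand(tuple(dimensions) + A.shape)

N, M = 10, 7
design = torch.ones(3, 2)
expanded = lexpand(design, N, M)     # shape (N, M, 3, 2)

# One batched simulation stands in for N*M independent runs:
samples = torch.distributions.Normal(expanded, 1.0).sample()

# Reduce over the M dimension, keep the N dimension:
inner = samples.mean(dim=1)          # shape (N, 3, 2)
print(inner.shape)
```

Because the grid dimensions are separate tensor axes, each can be reduced independently, which is exactly what nested Monte Carlo estimators need.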
The problem: this is not idiomatic Pyro. I need some code inside my models that expands everything to match the dimensions of the `design` input. Is there a tensorized way to take independent samples of a model?
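For context, the kind of in-model boilerplate I mean looks roughly like this (a hypothetical sketch in plain torch, not the actual model code):

```python
import torch

def model(design):
    # Boilerplate: every latent must be expanded by hand to match
    # whatever batch dims lexpand prepended to `design`.
    batch_shape = design.shape[:-2]  # the dims added on the left
    loc = torch.zeros(design.shape[-1]).expand(batch_shape + design.shape[-1:])
    theta = torch.distributions.Normal(loc, 1.0).sample()
    return theta

print(model(torch.ones(5, 4, 3, 2)).shape)  # torch.Size([5, 4, 2])
```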