I wrote an Edward model for trying out variational inference. At some point I noticed that I wasn't getting good convergence, so I tried just sampling a histogram like this:
with tf.Session() as sess:
    plt.figure()
    samps = model_omega(b, h, L).distribution.sample(100000).eval()
    plt.hist(samps, normed=True, bins=75)
And then I noticed that I get different results every time I sample a histogram for the same input distributions.
I compared it with letting the samples from the input distributions run through my omega_fn():
with tf.Session() as sess:
    plt.figure()
    for i in range(10):
        samps = model_omega(b, h, L).distribution.sample(100000).eval()
        plt.hist(samps, normed=True, bins=75)
    plt.figure()
    for i in range(10):
        samps = omega_fn(b.distribution.sample(1000),
                         h.distribution.sample(1000),
                         L.distribution.sample(1000)).eval()
        plt.hist(samps, normed=True, bins=75)
And sure enough, this produces a consistent result. Letting the scale parameter in model_omega go to zero also doesn't change anything in the first case. I attached the two plots from the code above.
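For what it's worth, this kind of behaviour is reproducible outside of Edward if the first sampler conditions all of its draws on a single draw of the parents (b, h, L), while the second marginalises over independent parent draws. A minimal NumPy sketch of that difference; the omega_fn formula and the normal parent distributions here are made up for illustration, since the actual definitions aren't shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def omega_fn(b, h, L):
    # stand-in for the post's omega_fn(); the actual formula is not shown
    return b * h / L

def sample_shared_parents(n):
    # all n samples condition on a SINGLE draw of the parents (b, h, L),
    # which is what happens if the parent tensors are evaluated only once
    b, h, L = rng.normal(1.0, 0.1, size=3)
    return omega_fn(b, h, L) + rng.normal(0.0, 0.01, size=n)

def sample_marginal(n):
    # every sample gets its own independent parent draw: the marginal of omega
    b = rng.normal(1.0, 0.1, size=n)
    h = rng.normal(1.0, 0.1, size=n)
    L = rng.normal(1.0, 0.1, size=n)
    return omega_fn(b, h, L)

# The histogram location wanders from run to run in the first case
# but stays put in the second:
means_shared = [sample_shared_parents(100000).mean() for _ in range(10)]
means_marginal = [sample_marginal(100000).mean() for _ in range(10)]
print(np.std(means_shared), np.std(means_marginal))
```

In the shared-parents case the per-run means scatter on the order of the parents' spread, while the marginal means are stable, which matches the inconsistent first plot versus the consistent second one.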