blei-lab / edward

A probabilistic programming language in TensorFlow. Deep generative models, variational inference.
http://edwardlib.org

You must feed a value for placeholder tensor when debug = True #636

Closed. synikitin closed this issue 7 years ago.

synikitin commented 7 years ago

Hello,

I have been trying to experiment with amortized inference, but binding a placeholder to an observed variable keeps crashing. I am using TensorFlow 1.1.0 and Edward 1.3.1. Here is an example:

import tensorflow as tf
import edward as ed
import numpy as np

## data
n = 3
y = np.linspace(-1, 1, n)
y = y.reshape(n, 1)

## model
# generative
coef_gen = tf.Variable(tf.random_normal([1, 1]))
x_latent = ed.models.Normal(tf.zeros([n, 1]), tf.ones([n, 1]), name="x_latent")
y_obs = ed.models.Normal(tf.multiply(coef_gen, x_latent), tf.ones(1), name="y_obs")

# variational (amortized: the approximation is parameterized by the data y_ph)
coef_inf = tf.Variable(tf.random_normal([1, 1]))
y_ph = tf.placeholder(tf.float32, [n, 1])
x_inf = ed.models.Normal(tf.multiply(coef_inf, y_ph), tf.ones([n, 1]))

## inference
inference = ed.KLqp({x_latent: x_inf}, data={y_obs: y_ph})
optimizer = tf.train.RMSPropOptimizer(0.01, epsilon=1.0)
inference.initialize(optimizer=optimizer, n_iter=500, n_samples=5, debug=True)
sess = ed.get_session()  # ensure a default session exists for init.run()
init = tf.global_variables_initializer()
init.run()
inference.update({y_ph: y})  # crashes: "You must feed a value for placeholder tensor"

The crash can be avoided with y_ph = tf.placeholder_with_default(tf.cast(y, tf.float32), [n, 1]), but that clearly defeats the purpose of feeding.
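For completeness, a runnable version of that workaround (same model as above; the point is that nothing is fed at update time):

import numpy as np
import tensorflow as tf
import edward as ed

## data
n = 3
y = np.linspace(-1, 1, n).reshape(n, 1)

## model (same as above)
coef_gen = tf.Variable(tf.random_normal([1, 1]))
x_latent = ed.models.Normal(tf.zeros([n, 1]), tf.ones([n, 1]), name="x_latent")
y_obs = ed.models.Normal(tf.multiply(coef_gen, x_latent), tf.ones(1), name="y_obs")

# variational: the placeholder now carries a default value, so the debug
# check can run without an explicit feed; the data is baked into the graph
coef_inf = tf.Variable(tf.random_normal([1, 1]))
y_ph = tf.placeholder_with_default(tf.cast(y, tf.float32), [n, 1])
x_inf = ed.models.Normal(tf.multiply(coef_inf, y_ph), tf.ones([n, 1]))

## inference
inference = ed.KLqp({x_latent: x_inf}, data={y_obs: y_ph})
optimizer = tf.train.RMSPropOptimizer(0.01, epsilon=1.0)
inference.initialize(optimizer=optimizer, n_iter=500, n_samples=5, debug=True)
sess = ed.get_session()
tf.global_variables_initializer().run()
inference.update()  # no feed_dict, which defeats the point of amortization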

Thanks for your help!

meta-inf commented 7 years ago

Actually, any code with inference.run(debug=True) crashes. (At least it did with the SGHMC Bayesian LR example and something of my own.)

meta-inf commented 7 years ago

The problem is that we did not feed the data dict when running op_check. A fix is referenced above.
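The underlying TensorFlow behavior can be seen in isolation (a minimal sketch in plain TF 1.x, no Edward): the op built by tf.add_check_numerics_ops() depends on every float tensor in the graph, so running it without feeding the placeholders raises exactly the error in the title.

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [3, 1])
z = x * 2.0
op_check = tf.add_check_numerics_ops()  # checks every float tensor, incl. x

with tf.Session() as sess:
    # sess.run(op_check)  # InvalidArgumentError: must feed a value for x
    sess.run(op_check, feed_dict={x: np.zeros((3, 1), np.float32)})  # fine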

However, I also modified feed_dict in Inference.update, since it did not correctly handle cases like data={rv: python_value}; I don't know whether that breaks other things. @dustinvtran, could you take a look?

(I'm a bit lost in the codebase. For example, I initially modified the code to transform the cases above into data={rv: tf.constant(python_value)}, but then found that binding to a constant tensor is broken in, e.g., MonteCarlo...)
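Roughly, the case I mean is the following (an illustrative sketch with invented names, not the actual Inference.update code): a value bound to a random variable that is a plain NumPy/Python value has to be routed into feed_dict, while tensors already in the graph need no feeding.

import numpy as np

def build_feed_dict(data):
    feed_dict = {}
    for key, value in data.items():
        if isinstance(value, (np.ndarray, int, float, list)):
            # plain value: feed it to the placeholder that `key` is bound to
            feed_dict[key] = value
        # else: value is a tf.Tensor/Variable already in the graph
    return feed_dict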

dustinvtran commented 7 years ago

Ah, got it. That makes sense re: op_check with the feed dict. It does not break other things. A PR is welcome.

then found that binding to a constant tensor is broken in, e.g., MonteCarlo

Could you elaborate on this?

meta-inf commented 7 years ago

For example, if you change data={x: x_data} to data={x: tf.constant(x_data)} here, it fails. This is because we should add (key, value) to feed_dict when value is essentially a constant tensor here, but I don't know how to test whether a tensor is essentially constant...
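One possible heuristic for that last question (an untested assumption on my part, not something from the Edward codebase): in TF 1.x, a tensor produced directly by a Const op can be detected from its op type, though this misses tensors that are merely computable from constants.

import tensorflow as tf

def is_const_tensor(t):
    # True only if t is the direct output of a Const op (e.g. tf.constant).
    # A tensor merely computable from constants, like tf.constant(1.) * 2.,
    # would require walking t.op.inputs recursively instead.
    return isinstance(t, tf.Tensor) and t.op.type == "Const"

c = tf.constant([[1.0], [2.0]])
p = tf.placeholder(tf.float32, [2, 1])
print(is_const_tensor(c))  # True
print(is_const_tensor(p))  # False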

dustinvtran commented 7 years ago

Fixed in #642. The other bug with Gibbs (https://github.com/blei-lab/edward/issues/636#issuecomment-301521641) is fixed in #657.