dumkar closed this issue 7 years ago.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs, but feel free to re-open it if needed.
I am experiencing this bug as of July 11, 2017. It first occurred when I was attempting to train a model, but it now also arises when the workaround code above is executed:
keras.backend.get_session().run(tf.global_variables_initializer())
Error message below. What else do you need?
Thanks.
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value dense_1/Variable
[[Node: dense_1/Variable/read = Identity[T=DT_FLOAT, _class=["loc:@dense_1/Variable"], _device="/job:localhost/replica:0/task:0/cpu:0"](dense_1/Variable)]]
Caused by op u'dense_1/Variable/read', defined at:
File "/Applications/WingIDE.app/Contents/Resources/bin/wingdb.py", line 978, in
I have the same problem. I have a custom layer which works fine in some models, but fails with this message (similar to above) in other models. Totally annoying.
Got the same error while trying to use tensorflow-gpu as the backend in Keras, though it worked fine on the CPU before. How could this make a difference?
Facing the same issue while training a Keras model with custom kernel initializers. It also happens if I add BatchNormalization to the model. I already tried tf.global_variables_initializer() before fit, but that did not help. Any suggestions or workarounds?
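For context on why running the initializer "before fit" can still fail: in TF1, tf.global_variables_initializer() creates an op that only covers variables that already exist at that moment; variables created later (for example by an optimizer or a layer built during fit) stay uninitialized. The following is a toy pure-Python model of that behavior, an illustrative sketch only, not TensorFlow itself:

```python
# Toy model of TF1 variable initialization (NOT TensorFlow itself):
# a variable must be explicitly initialized before it can be read, and
# an "initializer" only covers the variables that exist when it is built.

class FailedPreconditionError(Exception):
    pass

class Variable:
    _registry = []  # stands in for the graph's global-variables collection

    def __init__(self, value):
        self._value = value
        self._initialized = False
        Variable._registry.append(self)

    def read(self):
        if not self._initialized:
            raise FailedPreconditionError("Attempting to use uninitialized value")
        return self._value

def global_variables_initializer():
    snapshot = list(Variable._registry)  # only variables that exist right now
    def run():
        for v in snapshot:
            v._initialized = True
    return run

w = Variable(1.0)
init = global_variables_initializer()
init()
print(w.read())    # 1.0 -- w existed when the initializer was built

b = Variable(2.0)  # created later, e.g. by fit() or an optimizer
init()             # re-running the old initializer does not cover b
try:
    b.read()
except FailedPreconditionError as e:
    print("FailedPreconditionError:", e)
```

The fix implied by the toy is to build (or re-build) the initializer after all variables exist, which is what the workarounds further down in this thread effectively do.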
Also having the same problem - it happens in the BatchNorm layer. Took one version of the code, ran it on GPU:0, no problem. Copied the code, ran it on GPU:1, changed a few of the hyperparameters (learning rate, number of epochs), and got a FailedPreconditionError. Very inconsistent, but once it happens in one of my Jupyter Notebooks, it seems reproducible there. Using Keras 2.1.3 and TF 1.8.
I have the same issue, any suggestions?
I initialize the variables with the following code, and it works for me:
K.set_session(tf.Session(graph=model.output.graph))
init = K.tf.global_variables_initializer()
K.get_session().run(init)
where K is from 'from keras import backend as K', tf is from 'import tensorflow as tf', and 'model' is my Keras model. I add this code after compiling the model.
The only solution that worked for me when using a notebook is:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    hist = model.fit_generator(
        train_datagen, steps_per_epoch=STEPS, epochs=EPOCHS, verbose=1,
        validation_data=(x_valid, y_valid),
        callbacks=callbacks_list)
For me, I had to use local_variables_initializer() -- global_variables_initializer() wouldn't work.
sess = tf.Session()
sess.run(tf.local_variables_initializer())
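For context on why the local initializer can matter: TF1 keeps model weights in a GLOBAL_VARIABLES collection and things like streaming-metric counters in a separate LOCAL_VARIABLES collection, and each initializer touches only its own collection, so sometimes both must be run. Here is a toy pure-Python sketch of the two-collection idea (an illustration, not TensorFlow itself):

```python
# Toy sketch (NOT TensorFlow itself): two separate variable collections,
# each with its own initializer that ignores the other collection.

GLOBAL, LOCAL = "global", "local"
collections = {GLOBAL: [], LOCAL: []}

class Var:
    def __init__(self, name, collection=GLOBAL):
        self.name = name
        self.initialized = False
        collections[collection].append(self)

def variables_initializer(collection):
    def run():
        for v in collections[collection]:
            v.initialized = True
    return run

weight = Var("dense_1/kernel")             # model weights live in GLOBAL
metric_count = Var("metric/count", LOCAL)  # e.g. streaming metrics live in LOCAL

variables_initializer(GLOBAL)()
print(weight.initialized)        # True
print(metric_count.initialized)  # False -- needs the LOCAL initializer

variables_initializer(LOCAL)()
print(metric_count.initialized)  # True
```

In real TF1 code the equivalent would be running both tf.global_variables_initializer() and tf.local_variables_initializer() in the session.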
same error in latest keras.
I fixed it by setting a different graph in the session. If there are multiple models in the same project, use the TensorFlow default graph to init a new session, and a completely new graph for the TensorFlow model.
@novioleo could you please share the code snippet that fixed this? thanks
I'm not exactly sure, but here is a reference:
# a model from Keras should always use the default graph.
# a model from TensorFlow needs a totally new graph.
default_graph = tf.get_default_graph()
with default_graph.as_default():
    self.sess_1 = tf.Session(config=self.config)
    K.set_session(self.sess_1)
    with self.sess_1.as_default():
        self.model = modellib.MaskRCNN(mode="inference", model_dir=self.log_dir, config=InferenceConfig())
        self.model.load_weights(self.model_file, by_name=True)

graph = tf.Graph()  # a completely new graph for the TensorFlow model
with graph.as_default():
    self.x_ = tf.placeholder(tf.float32, [None, self.img_size])
    self.x_image = tf.reshape(self.x_, [-1, self.img_height, self.img_width, 3])
    self.enhanced = resnet(self.x_image)
    self.sess_2 = tf.Session(config=self.config)
    with self.sess_2.as_default():
        saver = tf.train.Saver()
        saver.restore(self.sess_2, "./path/to/your/model")
I extracted this from my code, so there could be some errors; please fix them yourself. When you need to use one of the models to predict, just use with self.which_session_you_want_to_use:
I suggest wrapping each model in a class for better management.
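The "one class per model" suggestion above can be sketched as follows. FakeGraph and FakeSession are stand-ins for tf.Graph and tf.Session (assumptions for illustration, not real TensorFlow); the point is that each wrapper owns its own graph and session, so two models never clash over a shared default graph:

```python
# Toy sketch of the per-model-class pattern. FakeGraph/FakeSession are
# hypothetical stand-ins for tf.Graph/tf.Session.

class FakeGraph:
    def __init__(self, name):
        self.name = name
        self.variables = {}

class FakeSession:
    def __init__(self, graph):
        self.graph = graph  # a session is bound to exactly one graph
    def run(self, fn):
        return fn(self.graph)

class ModelWrapper:
    """Each model owns a dedicated graph and the session bound to it."""
    def __init__(self, name, weights):
        self.graph = FakeGraph(name)
        self.sess = FakeSession(self.graph)
        self.graph.variables.update(weights)
    def predict(self, key):
        # every call goes through this model's own session
        return self.sess.run(lambda g: g.variables[key])

mask_rcnn = ModelWrapper("mask_rcnn", {"w": 1.0})
enhancer = ModelWrapper("resnet", {"w": 2.0})
print(mask_rcnn.predict("w"), enhancer.predict("w"))  # 1.0 2.0
```

Because each wrapper looks up variables only through its own session/graph pair, neither model can trip over variables that were created (and initialized) in the other model's graph.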
This solved my issue:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # do other tasks
> I fixed it by setting a different graph in the session. If there are multiple models in the same project, use the TensorFlow default graph to init a new session, and a completely new graph for the TensorFlow model.
My issue is also caused by using multiple models in one project (a TensorFlow model + a Keras model). Thanks to @novioleo for the answer. My issue was solved by initializing the Keras model using a new session defined in the default graph:
default_graph = tf.get_default_graph()
with default_graph.as_default():
    self.sess_keras = tf.Session()
    global model
    model = Model()  # keras model
and use this new session during prediction with the keras model:
with self.sess_keras.as_default():
    test_logits = model.predict()
I am new to Keras and just installed it (with pip3) to use with TensorFlow (1.0.0). I am trying to follow the Keras+TensorFlow tutorial.
When running the code, it stops at
train_step.run(feed_dict={img: batch[0], labels: batch[1]})
and throws the error below. I figured out that it happens because the variables are not initialized, and fixed it by inserting (see #4623):
keras.backend.get_session().run(tf.global_variables_initializer())
I decided to post it here since I was wondering whether this is a general issue with Keras (as this is a rather simple example) related to the update to TensorFlow 1.0.0, or something specific to my setup.
The error: