Closed — lefnire closed this issue 7 years ago
I had a similar problem the other day. Try googling it; for me, this fix worked: https://github.com/2014mchidamb/AdversarialChess/issues/4
Or set up an old version of TensorFlow 1.x with Miniconda and try that: https://gist.github.com/johndpope/187b0dd996d16152ace2f842d43e3990
Tried TensorFlow 1.0.0 (error & pip-freeze); will try per your AdversarialChess comments tomorrow. Thanks!
@johndpope I'm having trouble conceptually connecting the fix from your prior issue (AdversarialChess) to this situation; I don't know what I'd change here. Maybe a Keras/TF version change could be a quick fix. Which versions are you using (per pip freeze)? Also a question for @jaungiers.
Sorry, TensorFlow changed its syntax at some point and I thought this was connected with that. There are a bunch of other TensorFlow LSTM examples that I've cloned; you may be able to make progress by referencing their code.
Looks like an issue with threading; maybe args aren't being passed properly, or there's a race condition or something. When I remove the threading lines and call fit_model_threaded()
directly, all's well! (Python 3.5, TF 1.2, Keras 2.0.6)
# t = threading.Thread(target=fit_model_threaded, args=[model, data_gen_train, steps_per_epoch, configs])
# t.start()
fit_model_threaded(model, data_gen_train, steps_per_epoch, configs)
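For what it's worth, passing args as a list to threading.Thread works fine on its own; a quick pure-Python check (no TensorFlow needed, and the argument values below are made up for illustration) shows the args arrive in the worker intact, so the arg-passing hypothesis can probably be ruled out:

```python
import threading

received = {}

def fit_model_threaded(model, data_gen_train, steps_per_epoch, configs):
    # Record what the worker thread actually received.
    received["args"] = (model, data_gen_train, steps_per_epoch, configs)

t = threading.Thread(target=fit_model_threaded,
                     args=["model", "data_gen", 100, {"epochs": 2}])
t.start()
t.join()

print(received["args"])  # ('model', 'data_gen', 100, {'epochs': 2})
```

So the arguments themselves cross the thread boundary fine; the trouble is more likely in thread-local TensorFlow state.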
If I get threading back in business I'll submit a PR
@lefnire were you ever able to get the threading issue resolved? I am running into a similar issue where I get the feed_dict
error when I run the training on a separate thread.
alas, no. I haven't messed with this repo in a while, sorry!
@lefnire I am running all my model training on one thread but can't seem to use that model to predict on my main IO-loop thread. Let me know if you find a solution!
Not sure this helps, but I found some threading code for another Python project.
It seems like there's a central queue to orchestrate things.
I happened to find a solution to that, from avital's answer in https://github.com/fchollet/keras/issues/2397.
Right after loading or constructing your model, save the TensorFlow graph:
graph = tf.get_default_graph()
In the other thread (or perhaps in an asynchronous event handler), do:
global graph
with graph.as_default():
    (... do inference here ...)
Root cause:
The default graph is a property of the current thread. If you create a new thread and wish to use the default graph in that thread, you must explicitly add a with g.as_default(): in that thread's function. (See https://www.tensorflow.org/api_docs/python/tf/get_default_graph.)
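This per-thread behavior is easy to demonstrate without TensorFlow at all: TF 1.x keeps its default-graph stack in thread-local storage, and Python's threading.local shows the same effect. A minimal sketch (pure Python, an analogy rather than real TF code):

```python
import threading

# Analogy only: TensorFlow 1.x stores the default-graph stack in
# thread-local storage, so each thread sees its own "default graph".
state = threading.local()
state.graph = "graph-from-main-thread"

seen = {}

def worker():
    # A new thread starts with empty thread-local storage, so the
    # attribute set on the main thread is not visible here.
    seen["has_graph"] = hasattr(state, "graph")

t = threading.Thread(target=worker)
t.start()
t.join()

print(seen["has_graph"])  # False
```

That's exactly why the worker thread needs an explicit with graph.as_default(): to re-establish the graph the main thread was using.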
The same happened to me. Restarting the kernel (or clearing the history/cache) worked.
As far as I know there are still a few bugs in Keras, mainly in the load_model() function. Today I was able to solve 5-10 problems just by restarting; maybe you should try that.
I was facing the same problem with Flask and TensorFlow, but I was able to solve it.
Just install Cython,
either
conda install cython
or
pip install cython
and also install
conda install botocore
Maybe these bugs arise on AWS production servers.
Finally, completely solved it. This worked for me:
from keras import backend as K
After predicting my data I insert this, and then I load the model again:
K.clear_session()
The error message TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("...", dtype=dtype) is not an element of this graph can also arise in case you run a session outside of the scope of its with statement. Consider:
from keras import backend as K
K.clear_session()
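The same failure mode exists in plain Python whenever a resource is used after the with block that owns it has exited. An analogous sketch with a temporary file handle (no TensorFlow required):

```python
import tempfile

# Plain-Python analogy: a resource opened by a `with` block is torn
# down when the block exits, just as a tf.Session is closed at the
# end of its `with` statement.
saved = None
with tempfile.TemporaryFile(mode="w+") as handle:
    handle.write("hello\n")
    handle.seek(0)
    saved = handle
    inside = saved.readline()  # fine: still inside the with block

try:
    saved.readline()  # the handle was closed when the block exited
    closed = False
except ValueError:
    closed = True

print(closed)  # True
```

Running a session (or reading the handle) after its with block exits fails in the same way, so keep all sess.run(...) calls inside the block.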
thanks, this solved the issue.
finally solved 👍
from tensorflow.keras import backend as K
instead of from keras import backend as K, then
K.clear_session()  # before predicting
Fresh clone, data/bitcoin.csv unzipped; Keras (2.0.6), TensorFlow (1.2.1), Python (3.6.2) (full pip freeze).
[Edit] Also tried on Python 2.7, same error. (full pip freeze)
Full error:
I realize you're likely not keen on supporting a blog post's code demo, but just in case someone has a fix top-of-the-dome.