In Django, if I set the loaded model in cache:

model = predict.load_model('path\model.h5')
cache.set('model', model)
When I try to retrieve it:
cache.get('model')
I get this error:
Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ram://cf101767-7409-47ba-8f98-0921cc47a20a/variables/variables
You may be trying to load on a different device from the computational device. Consider setting the experimental_io_device option in tf.saved_model.LoadOptions to the io_device such as '/job:localhost'.
Is it possible to load the model once and keep using it in the Django webapp?
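One likely cause, as a sketch: Django's cache backends pickle stored values, and a compiled Keras model generally does not survive a pickle round-trip, which can surface as this TensorSliceReader error on retrieval. An alternative is to keep the model in process memory rather than in the cache, loading it once per worker. The sketch below uses `functools.lru_cache` as a lazy singleton; `_load_model` is a hypothetical stand-in for the real loader call:

```python
from functools import lru_cache

def _load_model():
    # Stand-in for the real loader; in the app this would be something like
    # predict.load_model('path\model.h5'). Kept abstract here so the pattern
    # is clear without TensorFlow installed.
    return object()

@lru_cache(maxsize=1)
def get_model():
    # First call loads the model; every later call in this process
    # returns the same in-memory object, with no pickling involved.
    return _load_model()
```

Views would call `get_model()` wherever they need to predict; the load cost is paid once per worker process, and the object is never serialized.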