Closed iliaschalkidis closed 5 years ago
I too have the same issue
Found a solution: use from keras_contrib.utils import save_load_utils and save your model with save_load_utils.save_all_weights(model, filename).
You can easily load it back by specifying the architecture exactly as the model was originally built and then restoring all the weights with
save_load_utils.load_all_weights(model, filename)
I did this and it worked fine for me.
Thanks a lot @ParthShah412, it works fine for me too! A bit tricky and sloppy, though... I need to add an if statement in my loader() to distinguish loading 'CRF'-extended models from the other ones :D
I use save_load_utils.load_all_weights(model, filename) to load the model and it works. However, model.predict() is quite slow: it takes around 400 ms to predict a single input. How can I speed up prediction?
Thanks guys. The saving function was okay, but once I try to load the model using load_all_weights(model, filename), the error below is thrown:
in load_all_weights
topology.load_weights_from_hdf5_group(f['model_weights'], model.layers)
AttributeError: 'NoneType' object has no attribute 'layers'
Any idea how to fix this?
@yzho0907 judging by your error message, it seems that you are setting model = None and then invoking load_all_weights(...); you first need to reconstruct your model instead.
Guys, the only way I've found to make this solution work is the following snippet, where build_bidirectional_model builds the model from examples/conll2000_chunking_crf.py. Do you have any ideas on solutions or possible causes for that?
loaded_model = build_bidirectional_model(vocab_size, EMBEDDING_DIM,
                                         LSTM_OUTPUT_SIZE, amount_classes,
                                         compile=True)
save_load_utils.load_all_weights(loaded_model, STORED_MODEL_FILENAME,
                                 include_optimizer=False)
loaded_model.evaluate(...)
If I set include_optimizer=True, I receive the following error message:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-80-b0de98ab256e> in <module>()
2 100, amount_classes, True)
3
----> 4 saver.load_all_weights(loaded_model, TEST_FILE, True)
5
6 # model2.compile('nadam', loss=crf.loss_function, metrics=[crf.accuracy])
~/miniconda3/envs/bicrf/lib/python3.6/site-packages/keras_contrib/utils/save_load_utils.py in load_all_weights(model, filepath, include_optimizer)
106 optimizer_weight_values = [optimizer_weights_group[n] for n in
107 optimizer_weight_names]
--> 108 model.optimizer.set_weights(optimizer_weight_values)
~/miniconda3/envs/bicrf/lib/python3.6/site-packages/keras/optimizers.py in set_weights(self, weights)
111 str(len(weights)) +
112 ') does not match the number of weights ' +
--> 113 'of the optimizer (' + str(len(params)) + ')')
114 weight_value_tuples = []
115 param_values = K.batch_get_value(params)
ValueError: Length of the specified weight list (37) does not match the number of weights of the optimizer (0)
How can this be used with checkpointing in Keras?
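One option (a sketch, not confirmed in this thread): since full-model serialization is what breaks with the CRF layer, checkpoint only the weights with save_weights_only=True and rebuild the architecture before restoring them.

```python
from keras.callbacks import ModelCheckpoint

# Checkpoint weights only; full-model saving is what trips over
# the CRF layer's custom loss/metric during deserialization.
checkpoint = ModelCheckpoint('weights.{epoch:02d}.h5',
                             save_weights_only=True)

# model.fit(X_train, y_train, epochs=10, callbacks=[checkpoint])
# To restore: rebuild the same architecture, then
# model.load_weights('weights.03.h5')
```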
Same error here @lzfelix
I can't even use save_load_utils; I'm getting an import error:
from keras.engine import saving
ImportError: cannot import name 'saving'
For reference, the solution posted on #129 seems to work, although #272 is addressing this problem.
As an update, I managed to fix the saving import error by just updating keras (and modifying something in models.py). Now save_load_utils seems to work properly (.evaluate() gives me the same score on the same test data before and after loading...)
@mary-octavia did you mean updating Keras to the latest version? If not, which version? Thanks.
@mary-octavia same problem here with Keras 2.2.2 and TensorFlow 1.10.1.
The following worked for me.
Save your model using model.save(filename) or model.save_weights(filename).
You can then load it back with model.load_weights(filename)
after specifying the model architecture.
@csJd really? Did that work for you using crf from keras_contrib.layers?
Yes. keras.models.load_model(filename) did not work, but model.load_weights(filename) worked for me.
If you just care about using them for predictions (production) and not retraining them, the default dump() and load() functionality works just fine for models including CRF layers, based on the latest updates in the code using Keras 2.2.0, with just a naive "hack".
Example:
def fake_loss(y_true, y_pred):
    return 0

model.save(filename)
model = load_model(filename, custom_objects={'CRF': CRF, 'loss': fake_loss})
PR #318 should have fixed this problem. Please refer to the CRF new docs and example in the test folder.
Closing this issue as it seems resolved. Thanks @lzfelix .
Can you please share how you solved the issue? I am also facing the same one.
I solved the unknown CRF layer error using:
from keras_contrib.layers.crf import CRF, crf_loss, crf_viterbi_accuracy
newmodel = load_model(model_name, custom_objects={'CRF': CRF, 'crf_loss': crf_loss, 'crf_viterbi_accuracy': crf_viterbi_accuracy})
It works well for me!
I set up a BILSTM-CRF model for sequence labelling, very similar to this example (https://github.com/farizrahman4u/keras-contrib/blob/master/examples/conll2000_chunking_crf.py).
The model trained successfully, and I could also call the predict_classes() function right after training, while I still had the object in memory...
Then I tried to load it back, as mentioned in the wiki ("A Common 'Gotcha'"), importing the additional layer before calling the load_model() function:
The error message is:
So, I changed to:
The new error says that the loss is missing also:
I tried extending the custom_objects dict with 'loss': CRF.CRF.loss_function, but I still get the same error...
Any idea about that?