Lotte1990 opened this issue 4 years ago (status: Open)
Hi @Lotte1990, sorry for the late response. I want to check whether this still affects you before taking a look into it.
@teijeong @Xhark Yes, I can confirm this is still an issue using tf-nightly (2.6.0.dev20210418) and tensorflow-model-optimization 0.5.0. Please look into this issue.
@teijeong @Xhark Any updates on this?
I am still having this issue. Is there any progress on it? Has anyone found a workaround in the meantime?
This issue is still bothering me. Please look into this.
@Lotte1990 Same issue for me. Did you find any workaround yet?
@mrj-taffy Unfortunately not. Let's hope it will be fixed soon. Perhaps @Xhark could give an update on the situation...
quant_model.load_weights(model_path)
@WillLiGitHub What do you mean? Could you explain a bit more?
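Presumably the suggestion is to sidestep `tf.keras.models.load_model` entirely: rebuild the quantize-aware model from the original architecture in code and restore only the weights. A minimal sketch of that workaround, assuming the weights were saved earlier with `save_weights` and that `build_base_model` (a hypothetical helper) recreates the original architecture:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

def build_base_model():
    # Hypothetical helper: must recreate the exact architecture that was trained.
    return tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(8, 3, activation='relu'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),
    ])

# Illustrative path; assumes the weights were saved via quant_model.save_weights(model_path).
model_path = 'quant_model_weights.h5'

# Rebuild the quantize-aware model in code, then restore only the weights
# instead of calling tf.keras.models.load_model on a saved quantized model.
quant_model = tfmot.quantization.keras.quantize_model(build_base_model())
quant_model.load_weights(model_path)
```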
Describe the bug
Saving and subsequently loading a quantized model results in the following error:
The error can be reproduced using the code below (test.py). Please note that there is no error when setting `quantize_model = False`.
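The original test.py is not included above; a minimal sketch of this kind of repro, assuming a small Keras Sequential model and the public `tfmot.quantization.keras.quantize_model` API (layer sizes and the save path are illustrative only), might look like:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

quantize_model = True  # no error occurs when this is set to False

# Illustrative toy architecture; the real test.py may differ.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

if quantize_model:
    # Wrap the model with quantization-aware training layers.
    model = tfmot.quantization.keras.quantize_model(model)

model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

model.save('saved_model')  # saving succeeds
# Reloading the quantized model is where the reported error occurs.
loaded = tf.keras.models.load_model('saved_model')
```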
System information
TensorFlow version (installed from source or binary): 2.3.0 (Docker image)
TensorFlow Model Optimization version (installed from source or binary): 0.4.1 (Docker image)
Python version: 3.6.9 (Docker image)
Describe the expected behavior
Code should not crash
Describe the current behavior
Code crashes