Closed · kechan closed this 3 years ago
Additional note: the model "works" in the sense that if I provide an input:

```python
x = np.ones((10, 1), dtype='float32')
yhat = model(x)
```

TF 2.0 eagerly outputs the `yhat` tensor without error. The problem seems to be with serializing a model that has trainable variables declared inside a Lambda layer.
Besides this issue, if someone spots any API or coding mistake, please let me know. I may be able to avoid this 'track_variable' business entirely. However, I do think the missing attribute is a bug, although I don't know what the attribute is for.
I tested with save_weights(...) and load_weights(...) and did not encounter this error, so I am fine as far as my immediate work is concerned.
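To illustrate that workaround, here is a minimal sketch using a tiny stand-in model (not the original Lambda-based one): persisting only the weights sidesteps full-model serialization, which is where the error appeared.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; the original issue used a Lambda layer instead.
model = tf.keras.Sequential([tf.keras.Input(shape=(1,)), tf.keras.layers.Dense(4)])

x = np.ones((10, 1), dtype='float32')
yhat_before = model(x)

# Workaround: save/restore weights only, instead of model.save()/load_model().
model.save_weights('tmp.weights.h5')

model2 = tf.keras.Sequential([tf.keras.Input(shape=(1,)), tf.keras.layers.Dense(4)])
model2.load_weights('tmp.weights.h5')

yhat_after = model2(x)  # identical outputs after the round trip
```

Note this only restores values into an architecture rebuilt in code; it does not serialize the model structure itself.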
I am using tf 2.0.0
Describe the current behavior
I used a Lambda layer in my model and declared 2 trainable variables inside the custom function. I saved the model and tried to load_model it back, which seemed to cause:
Describe the expected behavior
No such error. The backend should have the attribute 'track_variable', since code in core.py refers to it. Even if my sample code has a mistake, the code base should at least be self-consistent.
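For reference, the usual way to avoid variable-tracking problems with Lambda layers is a custom Layer subclass whose weights are created with add_weight, so Keras tracks and serializes them natively. A sketch (hypothetical `ScaleShift` layer, not the original model):

```python
import numpy as np
import tensorflow as tf

class ScaleShift(tf.keras.layers.Layer):
    """Hypothetical replacement for a Lambda holding two trainable vars."""

    def build(self, input_shape):
        # add_weight registers the variables with the layer, so they are
        # tracked and saved without any manual backend.track_variable call.
        self.scale = self.add_weight(name='scale', shape=(1,),
                                     initializer='ones', trainable=True)
        self.shift = self.add_weight(name='shift', shape=(1,),
                                     initializer='zeros', trainable=True)

    def call(self, inputs):
        return inputs * self.scale + self.shift

model = tf.keras.Sequential([tf.keras.Input(shape=(1,)), ScaleShift()])
yhat = model(np.ones((10, 1), dtype='float32'))
```

Because the two variables belong to the layer, they show up in `model.trainable_weights` and survive both weight-only and full-model saving.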
Code to reproduce the issue