tensorflow / neural-structured-learning

Training neural models with structured signals.
https://www.tensorflow.org/neural_structured_learning
Apache License 2.0
980 stars 189 forks

Fix the error when saving the adv model and using the ModelCheckpoint callback #114

Closed · wangbingnan136 closed this 2 years ago

review-notebook-app[bot] commented 2 years ago

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.



google-cla[bot] commented 2 years ago

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

For more information, open the CLA check for this pull request.

csferng commented 2 years ago

@wangbingnan136 , could you take a look and sign the Contributor License Agreement in the above comment?

wangbingnan136 commented 2 years ago

> @wangbingnan136, could you take a look and sign the Contributor License Agreement in the above comment?

[image: screenshot of the CLA status] It looks like I have already signed.

wangbingnan136 commented 2 years ago

I think the checks are done. Sorry, this is my first time making a contribution to TensorFlow~

wangbingnan136 commented 2 years ago

OK, I got it!

wangbingnan136 commented 2 years ago

It is done

csferng commented 2 years ago

Thanks for the fix! Let me polish the example notebook a bit before merging the PR.

wangbingnan136 commented 2 years ago

Okk

wangbingnan136 commented 2 years ago

I found that when we load the model, the "fit" and "predict" methods work as expected, but "evaluate" has some bugs. I will fix this problem when I am not busy.

csferng commented 2 years ago

Sorry for the delay. I made some updates on the notebook and will proceed to merge this PR.

Regarding the bug in evaluate, what I encountered is that the accuracy number was off for the loaded model (0.09 vs. 0.98). The root cause turns out to be that metrics=['acc'] is interpreted differently. (Keras has an internal heuristic to decide what "accuracy" means for a model.) What we expect is tf.keras.metrics.SparseCategoricalAccuracy, which is what the adv_model gets. But the loaded model somehow got tf.keras.metrics.CategoricalAccuracy, so the evaluation result differed. To work around this issue, I changed metrics=['acc'] to metrics=[tf.keras.metrics.SparseCategoricalAccuracy()], which then produced consistent results before and after saving/loading.

wangbingnan136 commented 2 years ago


I see, great job!