sayakpaul closed this issue 3 years ago.
Thanks for your question. Because the input examples are converted to dictionaries in `preprocess_image()`, it works best to give the Keras input tensor the matching name as well:

```python
inputs = Input(shape=(224, 224, 3), name="image")
```
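A minimal sketch of why the name matters (the layer sizes and class count here are illustrative, not the notebook's): with a named input, the Keras model can be called directly on a feature dictionary.

```python
import tensorflow as tf

# Name the input tensor "image" to match the key used in the
# feature dictionaries produced by the preprocessing pipeline.
inputs = tf.keras.Input(shape=(224, 224, 3), name="image")
x = tf.keras.layers.GlobalAveragePooling2D()(inputs)
outputs = tf.keras.layers.Dense(5, activation="softmax")(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# Dictionary inputs are routed to the input tensor with the matching name.
preds = model({"image": tf.zeros((2, 224, 224, 3))})
```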
@csferng thanks for the help. It did the trick.
Now, when I try to test the robustness for comparison purposes, the `perturb_on_batch()` function is not able to interpret the `label` key from the feature dictionary. Is there a way to bypass it? Here's the updated Colab Notebook.
@sayakpaul, `perturb_on_batch()` expects the same input format as `call()`, namely a dictionary containing both features and labels. The output of `perturb_on_batch()` also contains the same label features.
I didn't see any error around `perturb_on_batch()` in your Colab notebook. Could you explain more about what went wrong or what you'd like to achieve?
@csferng it came out as a warning, actually. When I ran it a second time it went away. If you look closely, though, there is no visible perturbation, and the accuracy of both models on the perturbed batch is zero. I wanted to know why, and how I could counter it. My belief is that there must be something wrong in my code.
@csferng any updates?
I got reasonable results by running your Colab:

```
base model accuracy: 0.250000
adv-regularized model accuracy: 0.515625
```

Maybe the zero-accuracy issue was caused by some cached values in the runtime. Could you restart the runtime and see if the issue persists?
Regarding the warnings, ones like `Cannot perturb features ['label']` are normal during `perturb_on_batch()`. The `label` feature is of integer type, so it cannot be perturbed. But since we don't actually want to perturb the `label` feature, this behavior is okay. Commit 91a4e3b suppresses this kind of warning and will be included in the next release.
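The reason is just the feature's dtype: adversarial perturbations are continuous gradient steps, so only floating-point features qualify. A trivial illustration (array names and shapes are made up for the example):

```python
import numpy as np

images = np.zeros((4, 224, 224, 3), dtype=np.float32)  # float: perturbable
labels = np.array([0, 1, 2, 3])                        # integer: skipped

# NSL can only add a gradient-based perturbation to float features,
# so integer features like the label are left as-is (with a warning).
perturbable = {name: bool(np.issubdtype(arr.dtype, np.floating))
               for name, arr in {"image": images, "label": labels}.items()}
```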
Thanks, @csferng. The issue is solved now.
Hi.
I am currently on TensorFlow 2.3 and I am using the latest version of `nsl`. I am trying to train an adversarially robust flower classifier on the flowers dataset. I am preparing the data in the following way:

My base model is constructed like so:

The adversarial-regularized model is prepared in the following way:

When I start training I run into:

Here's my Colab Notebook. Am I missing something?
Cc: @csferng