bersbersbers closed this issue 3 years ago
Note that contrary to #43, I did not manually change any dtype here, and changing the dtype of the input data did not help either.
Add `penultimate_output = tf.cast(penultimate_output, tf.float32)` before
https://github.com/keisen/tf-keras-vis/blob/54c4def11b6cd83153fc6734fd5ae1cb2dde7802/tf_keras_vis/scorecam.py#L82-L86
```
diff -w ./scorecam.py /home/bers/tf-keras-vis/tf_keras_vis/scorecam.py
82d81
< penultimate_output = tf.cast(penultimate_output, tf.float32)
```
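The effect of that one-line cast can be sketched in plain NumPy: under a `mixed_float16` policy the penultimate activations come out as `float16`, and upcasting them makes them safe for downstream float32-only processing. The array below is a made-up stand-in for real activations, not tf-keras-vis internals:

```python
import numpy as np

# Stand-in for the penultimate layer's activations under mixed_float16:
# mixed precision keeps intermediate layer outputs in half precision.
penultimate_output = np.ones((1, 7, 7, 256), dtype=np.float16)

# The suggested patch: upcast before any float32-only processing
# (e.g. SciPy's interpolation, which rejects float16 arrays).
penultimate_output = penultimate_output.astype(np.float32)
print(penultimate_output.dtype)  # float32
```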
And here are a few potential changes for activation maximization:
```
diff -w ./__init__.py /home/bers/tf-keras-vis/tf_keras_vis/activation_maximization/__init__.py
87d86
< seed_inputs[j] = tf.cast(seed_inputs[j], tf.dtypes.float32)
90d88
< seed_inputs[j] = tf.cast(seed_inputs[j], self.model.layers[-2].compute_dtype)
108d105
< score_values = tf.cast(score_values, self.model.layers[-2].compute_dtype)
```
I'm sure there are smarter ways to infer the proper dtype, and maybe better places to cast (given the overhead of casting seed_inputs back and forth in every iteration), but this is what's working for me.
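The back-and-forth casting in those diffs follows a common mixed-precision pattern: keep the values being optimized in `float32` and only downcast for the model's forward pass. A minimal sketch without TensorFlow — `compute_dtype` and `model_forward` here are illustrative stand-ins, not tf-keras-vis internals:

```python
import numpy as np

compute_dtype = np.float16  # stands in for model.layers[-2].compute_dtype

def model_forward(x):
    # Hypothetical half-precision model step: doubles its input.
    return (x * np.float16(2.0)).astype(compute_dtype)

seed = np.ones((4,), dtype=np.float32)      # optimize in full precision
for _ in range(3):
    y = model_forward(seed.astype(compute_dtype))  # downcast for the model
    grad = y.astype(np.float32)                    # upcast results back
    seed += 0.1 * grad                             # update stays in float32
print(seed.dtype)  # float32
```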
@bersbersbers, thank you so much for your detailed report, and sorry for the late reply. I'm going to fix these by the end of this month.
Thanks!
See additional test cases (still failing in 637f3dd) in https://github.com/keisen/tf-keras-vis/pull/39#issuecomment-782003025 and https://github.com/keisen/tf-keras-vis/pull/39#issuecomment-782006899
Hi, @bersbersbers. I've pushed a patch to PR #39 to fix this issue. But, as I wrote in https://github.com/keisen/tf-keras-vis/issues/43#issuecomment-831045136, it only supports mixed precision on TensorFlow 2.4+. I would be happy if this change is helpful for you.
I'm so sorry for the late reply. Thanks!
This is not fully working in 66132db.
```python
# pip install tensorflow==2.4.1 git+https://github.com/keisen/tf-keras-vis@66132db3
import tensorflow as tf
from tf_keras_vis import scorecam

tf.keras.mixed_precision.set_global_policy("mixed_float16")

base_model = tf.keras.applications.MobileNet(
    include_top=False,
    input_shape=(32, 32, 3),
    weights=None,
)
layer = base_model.output
layer = tf.keras.layers.Flatten(name="flatten")(layer)
layer = tf.keras.layers.Dense(2, dtype=tf.float32)(layer)
model = tf.keras.models.Model(inputs=base_model.input, outputs=layer)

data = tf.zeros(model.input.shape[1:])
loss = lambda output: sum(output)
scorecam.ScoreCAM(model)(loss, data)
print("Done.")
```
```
array type dtype('float16') not supported
```
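That error message originates in SciPy's ndimage interpolation, which does not accept half-precision input. It can be reproduced in isolation; this standalone snippet is my illustration, not part of the original report:

```python
import numpy as np
from scipy.ndimage import zoom

half = np.zeros((4, 4), dtype=np.float16)
try:
    zoom(half, 2)       # ndimage rejects float16 input
except Exception as err:
    print(err)          # array type dtype('float16') not supported

full = zoom(half.astype(np.float32), 2)  # upcasting first works
print(full.shape)  # (8, 8)
```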
Hi @bersbersbers . Thank you so much for pointing it out. I've fixed it. Thanks!
This example fails when interpolating data deep down in scipy: