Closed ifahim closed 2 years ago
Please try passing dtype=tf.float32 to hub.KerasLayer(), i.e.:
import tensorflow as tf
import tensorflow_hub as hub

IMAGE_SIZE = (224, 224)
class_names = ['cat', 'dog']

# Mixed precision: float16 compute, float32 variables.
tf.keras.mixed_precision.set_global_policy('mixed_float16')

model_handle = "https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5"
do_fine_tuning = False

print("Building model with", model_handle)
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)),
    # dtype=tf.float32 keeps the hub layer in float32 despite the global policy.
    hub.KerasLayer(model_handle, trainable=do_fine_tuning, dtype=tf.float32),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(len(class_names),
                          kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
model.build((None,) + IMAGE_SIZE + (3,))
model.summary()
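To see why this override works, here is a minimal sketch (using plain tf.keras layers rather than a hub model, so no download is needed): under the mixed_float16 policy a layer computes in float16 by default, while an explicit dtype=tf.float32 keeps that one layer entirely in float32, which is the same override suggested for hub.KerasLayer above.

```python
import tensorflow as tf

# Enable mixed precision globally: float16 compute, float32 variables.
tf.keras.mixed_precision.set_global_policy('mixed_float16')

# A layer created under the policy computes in float16...
default_layer = tf.keras.layers.Dense(4)

# ...while an explicit dtype=tf.float32 opts this layer out of the policy.
fp32_layer = tf.keras.layers.Dense(4, dtype=tf.float32)

print(default_layer.compute_dtype)  # float16
print(fp32_layer.compute_dtype)     # float32
```

The per-layer dtype argument overrides the global policy only for that layer, so the rest of the model still benefits from mixed precision.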
This should fix the issue.
@ifahim,
Can you kindly take a look at the comment above and confirm whether it resolves your question? Thank you!
@ifahim
I tried to reproduce the same error, but I am unable to after passing dtype=tf.float32 to hub.KerasLayer(); for your reference, I've added a Gist file here.
Could you please confirm whether this issue is resolved for you? Please feel free to close the issue if it is.
Thank you!
Closing due to inactivity.
What happened?
I am trying to load a model from TensorFlow Hub using the example code. It works perfectly with FP32. As soon as I add tf.keras.mixed_precision.set_global_policy('mixed_float16') to enable mixed precision, it raises an error. It looks like a dimension issue, but it works perfectly with FP32.
Relevant code
Relevant log output
tensorflow_hub Version
0.12.0 (latest stable release)
TensorFlow Version
other (please specify)
Other libraries
tensorflow-gpu==2.9.1
Python Version
3.x
OS
Linux
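For context, the failure mode can be approximated without downloading the hub model: a SavedModel from TF Hub typically exposes a concrete float32 signature, and under mixed_float16 the Keras pipeline produces float16 tensors, so the call fails with a dtype mismatch. A minimal analogue (the fixed_signature function is a stand-in, not part of the hub API):

```python
import tensorflow as tf

# A function with a fixed float32 signature, mimicking a hub SavedModel.
@tf.function(input_signature=[tf.TensorSpec([None, 3], tf.float32)])
def fixed_signature(x):
    return x * 2.0

raised = False
try:
    # Feeding a float16 tensor, as a mixed_float16 pipeline would, fails.
    fixed_signature(tf.ones([1, 3], tf.float16))
except Exception as e:  # exact exception type varies across TF versions
    raised = True
    print("dtype mismatch:", type(e).__name__)
```

This is why forcing the hub layer back to float32 with dtype=tf.float32, as suggested above, avoids the error.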