Any insight?
@pindinagesh, do you have any insight into this?
Hi @ifahim
Sorry for the delayed response; I am still trying to find some pointers on this issue and will update as soon as I have them.
`hub.KerasLayer` returns a single layer that cannot be split into sub-layers. Instead, I'd recommend using the accompanying feature vector model (https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5), which does not contain the top classification layer. You can then add a new layer on top of the feature vector and make only that new layer trainable:
```python
import tensorflow as tf
import tensorflow_hub as hub

num_classes = 10  # Set to the number of classes in your task.

m = tf.keras.Sequential([
    # Frozen ResNet-50 v1 feature extractor from TF Hub.
    hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5",
                   trainable=False),
    # New classification head; the only trainable layer in the model.
    tf.keras.layers.Dense(num_classes, activation='softmax')
])
m.build([None, 224, 224, 3])  # Batch input shape.
```
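From there, only the new Dense head is updated during training. For example (`train_ds` is a placeholder for your dataset of image/label batches):

```python
m.compile(optimizer='adam',
          loss='sparse_categorical_crossentropy',  # assumes integer labels
          metrics=['accuracy'])
# train_ds: batches of 224x224 RGB images scaled to [0, 1].
m.fit(train_ds, epochs=5)
```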
Please see https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5 for the full documentation and https://www.tensorflow.org/hub/tutorials/tf2_image_retraining for an example.
What happened?
I am fine-tuning using TF2 Keras models, where I train only the top 10 layers of the base model. I can easily choose which layers to train and which to freeze using the following code:
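A minimal sketch of that pattern, assuming `base_model` is a `tf.keras.applications` model such as ResNet50:

```python
import tensorflow as tf

base_model = tf.keras.applications.ResNet50(include_top=False,
                                            input_shape=(224, 224, 3))

# Unfreeze the whole model, then freeze everything except the top 10 layers.
base_model.trainable = True
for layer in base_model.layers[:-10]:
    layer.trainable = False
```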
I want to do a similar thing with the TF Hub model, but I don't know how to make only those top layers trainable, since the base layer in a TF Hub model has no `layers` attribute.
How can I achieve a similar effect using TF Hub?
Relevant code
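A minimal sketch of the situation (assuming the same resnet_v1_50 handle discussed above); the hub layer can only be frozen or unfrozen as a whole:

```python
import tensorflow_hub as hub

base = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5")
# hub.KerasLayer is one opaque Keras layer: there is no .layers list to
# index, so trainability cannot be toggled per internal sub-layer.
base.trainable = True   # unfreezes *all* internal variables
base.trainable = False  # freezes them all again
```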
tensorflow_hub Version
0.12.0 (latest stable release)
TensorFlow Version
2.8 (latest stable release)
Other libraries
No response
Python Version
3.x
OS
Linux