Closed · Jogima-cyber closed this issue 3 years ago
If you load the model with:
model = tf.keras.models.load_model('../models/efficientnetv2-s-21k.h5')
the input shape is fixed at (224, 224, 3). If you need a new resolution, you can set the input shape as dynamic:
# Define the model and load the weights
import numpy as np
import efficientnet_v2

model = efficientnet_v2.EfficientNetV2(
    model_type="b0", input_shape=(None, None, 3), survivals=None,
    dropout=0.2, classes=1000, classifier_activation=None)
model.load_weights('../models/efficientnetv2-b0-imagenet.h5')
model(np.ones([1, 224, 224, 3])).shape
# TensorShape([1, 1000])
model(np.ones([1, 384, 384, 3])).shape
# TensorShape([1, 1000])
Just updated this in the README.
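For anyone wondering why a dynamic input shape works at all: the network is fully convolutional up to a global average pooling layer, which collapses the spatial dimensions regardless of resolution. A minimal sketch with a tiny stand-in model (not the actual EfficientNetV2 code) that shows the same behavior:

```python
import numpy as np
import tensorflow as tf

# Toy fully convolutional model: conv layers handle any H and W,
# and GlobalAveragePooling2D collapses them to a fixed-size vector.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, None, 3)),
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),  # (batch, H, W, 8) -> (batch, 8)
    tf.keras.layers.Dense(10),
])

# Different resolutions, same output shape.
print(model(np.ones([1, 224, 224, 3])).shape)  # (1, 10)
print(model(np.ones([1, 384, 384, 3])).shape)  # (1, 10)
```

A model saved with a concrete `input_shape` bakes that resolution into the graph, which is why `load_model` on the .h5 file stays fixed at 224.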
It's working, thank you very much! Just one little thing: when loading the weights in this configuration, TF expects the model to have 21843 classes, but I found a workaround: define the model with 21843 classes, load the weights, then put a new head onto it with the right number of classes.
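The head-swap workaround above can be sketched roughly as follows, using a tiny functional stand-in in place of the real EfficientNetV2 backbone (the layer names and the stand-in architecture are illustrative assumptions, not the library's actual layers):

```python
import numpy as np
import tensorflow as tf

# Stand-in for the pretrained model built with classes=21843.
inputs = tf.keras.layers.Input(shape=(None, None, 3))
x = tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
old_head = tf.keras.layers.Dense(21843, name="predictions")(x)
base = tf.keras.Model(inputs, old_head)
# For the real model: base.load_weights(...) with the 21k checkpoint here.

# Drop the 21843-way head and attach a new one with the right class count.
features = base.layers[-2].output  # output just before the final Dense
outputs = tf.keras.layers.Dense(1000, name="new_predictions")(features)
model = tf.keras.Model(base.inputs, outputs)

print(model(np.ones([1, 224, 224, 3])).shape)  # (1, 1000)
```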
That can be done more easily with by_name=True, skip_mismatch=True, which will report the mismatched layers:
import numpy as np
import efficientnet_v2

model = efficientnet_v2.EfficientNetV2L(input_shape=(None, None, 3), survivals=None, dropout=1e-6, classes=1000)
model.load_weights('../models/efficientnetv2-l-21k.h5', by_name=True, skip_mismatch=True)
# WARNING:tensorflow:Skipping loading of weights for layer predictions due to mismatch in shape ((1280, 1000) vs (1280, 21843)).
# WARNING:tensorflow:Skipping loading of weights for layer predictions due to mismatch in shape ((1000,) vs (21843,)).
model(np.ones([1, 772, 772, 3])).shape
# TensorShape([1, 1000])
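One caveat worth keeping in mind: after skip_mismatch, the skipped predictions layer keeps its random initialization, so the model needs fine-tuning before its outputs are meaningful. A minimal sketch with a toy stand-in model and hypothetical random data (not the actual EfficientNetV2L training setup):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for the loaded model: pretrained body + fresh 1000-way head.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, None, 3)),
    tf.keras.layers.Conv2D(4, 3, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1000, name="predictions"),
])

# Freeze everything except the randomly initialized head, then train it.
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
x = np.random.rand(8, 64, 64, 3).astype("float32")  # hypothetical data
y = np.random.randint(0, 1000, size=(8,))
model.fit(x, y, epochs=1, verbose=0)
```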
I get this error when I try to feed an image with resolution 384 into the m or l models: Input 0 is incompatible with layer model_8: expected shape=(None, 224, 224, 3), found shape=(1, 384, 384, 3)