Open lamnt-nd994 opened 1 month ago
Those shared weights are for the basic model only. Try loading them into the basic_model directly, without the header and without passing it to tt:
```python
import models

mm = models.buildin_models(
    "r18",  # or "r34" / "r100"
    dropout=0,
    emb_shape=512,
    output_layer='E',
    bn_momentum=0.9,
    bn_epsilon=1e-5,
    use_bias=True,
    scale=True,
    activation='PReLU',
)
mm.load_weights('glint360k_cosface_r18_fp16_0.1.h5')
```
...
Thank you for your response; I have successfully run it. Could you please explain why the accuracy is very low when training on https://github.com/X-zhangyang/Asian-Face-Image-Dataset-AFD-dataset and testing on LFW, CFP_FP, and AgeDB_30, with lr=0.1?
Try freezing the backbone and training the header only first:
```python
tt.train(
    [
        {"loss": losses.ArcfaceLoss(scale=64), "epoch": 2, "bottleneckOnly": True},
        {"loss": losses.ArcfaceLoss(scale=64), "epoch": 17},
    ]
)
```
Also use a smaller learning rate, like lr_base=0.025.
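The `bottleneckOnly` stage roughly corresponds to freezing the backbone and optimizing only the classification header. A minimal, generic Keras sketch of that idea (toy shapes and layer names of my own, not the repo's actual `tt.train` internals):

```python
import tensorflow as tf

# Toy stand-in for the pretrained basic model (embedding extractor).
backbone = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(512),  # embedding layer
], name="basic_model")
backbone.trainable = False  # freeze every backbone layer

# Classification header trained on top of the frozen embeddings.
header = tf.keras.layers.Dense(10, use_bias=False, name="header")
model = tf.keras.Sequential([backbone, header])
_ = model(tf.zeros((1, 8)))  # build the stacked model

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.025),
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
)
# Only the header's kernel remains trainable:
print(len(model.trainable_weights))
```

Once the header has converged for a couple of epochs, unfreezing the backbone and continuing with the full loss (as in the two-stage schedule above) is less likely to destroy the pretrained features.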
I tried freezing the backbone and training the header only first, but accuracy on "cfp_fp" was only 0.93, while the ported r18 model reports 0.977143.
Maybe it's just a difference in image encoding quality when saving those bin files; refer to Reproduce the results #110. While the backbone is frozen, its accuracy shouldn't change, so you may want to test the basic_model accuracy first.
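For context on what "testing the basic_model accuracy" measures: verification benchmarks like cfp_fp score the best-threshold accuracy of cosine similarity between L2-normalized embedding pairs. A self-contained sketch on synthetic embeddings (this is illustrative, not the repo's evaluation code):

```python
import numpy as np

def verification_accuracy(emb1, emb2, issame, steps=200):
    """Best-threshold pair accuracy from L2-normalized embeddings."""
    emb1 = emb1 / np.linalg.norm(emb1, axis=1, keepdims=True)
    emb2 = emb2 / np.linalg.norm(emb2, axis=1, keepdims=True)
    cos = np.sum(emb1 * emb2, axis=1)  # cosine similarity per pair
    best = 0.0
    for thr in np.linspace(-1.0, 1.0, steps):
        best = max(best, float(np.mean((cos > thr) == issame)))
    return best

# Synthetic data: positive pairs are small perturbations, negatives unrelated.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 512))
emb1 = np.vstack([base, base])
emb2 = np.vstack([base + 0.1 * rng.normal(size=(100, 512)),
                  rng.normal(size=(100, 512))])
issame = np.array([True] * 100 + [False] * 100)
acc = verification_accuracy(emb1, emb2, issame)
print(round(acc, 3))
```

Running the same computation on embeddings the frozen backbone produces for the benchmark bin files isolates the backbone's quality from any header or training issue.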
I got it. Thanks a lot.
Hello, I'm a beginner. I'm encountering an error when loading the pretrained glint360k_r18 weights
![Screenshot 2024-05-29 002602](https://github.com/leondgarse/Keras_insightface/assets/72916030/0a66a454-6041-403b-966e-d6b8e7add455)
to continue training with my new dataset. The error message is: `ValueError: Layer count mismatch when loading weights from file. Model expected 63 layers, found 62 saved layers.` Thanks
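Given the note above that the shared weights are basic-model-only, a likely cause of this mismatch is loading weights saved from a model with a different layer list, e.g. basic-model weights into a model that still has a header attached, or vice versa. A Keras `.h5` weights file records a `layer_names` attribute at its root, so you can count the saved layers directly. A small sketch using a fabricated file (the layer names here are illustrative, not the real glint360k_r18 contents):

```python
import h5py
import numpy as np

# Fabricate a tiny weights-style file containing two saved layers:
with h5py.File("demo_weights.h5", "w") as f:
    f.attrs["layer_names"] = np.array([b"conv1", b"bn1"])
    g = f.create_group("conv1")
    g.create_dataset("conv1/kernel:0",
                     data=np.zeros((3, 3, 3, 8), "float32"))

# Inspect how many layers the file actually contains:
with h5py.File("demo_weights.h5", "r") as f:
    saved_layers = [n.decode() for n in f.attrs["layer_names"]]
print(len(saved_layers), saved_layers)
```

Comparing `len(saved_layers)` against the weighted layers of the model you built usually reveals which extra or missing layer (e.g. an attached header) causes the count mismatch; loading the weights into the matching basic_model, as in the first answer above, and rebuilding the header on top is then the straightforward fix.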