OpenAI-chn opened 1 year ago
Please modify the path of the corresponding pre-training weights in the config.
Sorry, I have re-updated the pre-training weights. Please download and try again. If you have more questions, please contact me.
Hello, thank you very much for your help. The pre-training weight file this time did not throw any errors and effectively improved the performance on my own dataset B. Additionally, I'd like to ask you a question: After training Afformer-base with my self-constructed dataset A and using it as pre-training weights, why is there no significant improvement when fine-tuning on the small-scale dataset B?
My dataset A has around 20,000 images, while dataset B has a few hundred images. Is it because my dataset A is too small in scale? Were your pre-training weight files trained on ImageNet?
How can this problem be solved?
```
Traceback (most recent call last):
  File "tools/train.py", line 250, in
mmseg - WARNING - The model and loaded state dict do not match exactly
size mismatch for stem.0.conv.weight: copying a param with shape torch.Size([32, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3]).
size mismatch for stem.0.bn.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stem.0.bn.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stem.0.bn.running_mean: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
```

It seems the pre-trained weight file does not match the model. How can I fix this?
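The usual fix is to use the config/weight pair released together, since the warnings show the checkpoint's stem has 32 channels while the current model expects 16. If you deliberately need to load a partially matching checkpoint, one common workaround is to filter out mismatched entries before `load_state_dict`. A minimal PyTorch sketch (the helper name `load_matching_weights` is hypothetical, not part of this repo):

```python
import torch


def load_matching_weights(model, checkpoint_path):
    """Load only the checkpoint params whose names and shapes
    match the current model; skip everything else."""
    ckpt = torch.load(checkpoint_path, map_location="cpu")
    # mmseg-style checkpoints nest weights under "state_dict"
    state = ckpt.get("state_dict", ckpt)
    model_state = model.state_dict()
    filtered = {
        k: v for k, v in state.items()
        if k in model_state and v.shape == model_state[k].shape
    }
    skipped = sorted(set(state) - set(filtered))
    print(f"loaded {len(filtered)} params, skipped {len(skipped)}: {skipped}")
    # strict=False tolerates the keys we dropped
    model.load_state_dict(filtered, strict=False)
    return model
```

Note this only silences the mismatch for layers you are willing to re-initialize (here, the stem); if most of the network disagrees in width, you are effectively training from scratch and should instead pick the model variant that matches the checkpoint.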