Open fyang064 opened 1 year ago

Hello,

I'm wondering how to fix the `load_state_dict` issue when using a pre-trained model. Once the `num_classes` of the custom dataset changes from 91, there is a size mismatch for `transformer.decoder.class_embed`. Is there a better way to implement transfer learning?

Thanks, Felix

Reply:

If your dataset has categories similar to COCO's, you can load the corresponding class embeddings. Overall, the class embedding is easy to learn, so you can also just randomly pick some class embeddings from the 91 classes (e.g., the first n of the 91), or simply not load this parameter.
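The two workarounds suggested above (skip the size-mismatched parameter, or reuse the first n rows of the pretrained 91-class head) can be sketched in PyTorch as below. Note this is a minimal illustration: `nn.Linear` layers stand in for the real model and its `transformer.decoder.class_embed` head, and `load_pretrained_partial` is a hypothetical helper, not part of any repo's API.

```python
import torch
import torch.nn as nn

def load_pretrained_partial(model, checkpoint_state_dict):
    """Copy pretrained weights into `model`, skipping any tensor whose
    shape does not match (e.g. class_embed after changing num_classes).
    Returns the list of skipped parameter names."""
    model_state = model.state_dict()
    filtered = {
        k: v for k, v in checkpoint_state_dict.items()
        if k in model_state and v.shape == model_state[k].shape
    }
    skipped = [k for k in checkpoint_state_dict if k not in filtered]
    # strict=False tolerates the keys we intentionally left out
    model.load_state_dict(filtered, strict=False)
    return skipped

# Toy stand-ins: the pretrained head has 91 classes, the custom one 20.
pretrained = nn.Linear(256, 91)
custom = nn.Linear(256, 20)

# Option 1: load everything that matches, skip the mismatched head.
skipped = load_pretrained_partial(custom, pretrained.state_dict())

# Option 2: instead of skipping, initialize the custom head from the
# first 20 rows of the pretrained 91-class embedding.
with torch.no_grad():
    custom.weight.copy_(pretrained.weight[:20])
    custom.bias.copy_(pretrained.bias[:20])
```

Either way, the remaining (matching) parameters of the backbone and transformer load normally, and only the classification head is re-learned or partially reused.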