Closed tempbrucefu closed 1 week ago
For AffectNet8_Maxvit_VA, the result is close to the reported one: I got a CCC of 0.75 after using the default NUM_EPOCHS = 20.
Hi @tempbrucefu, thanks for sharing your information. Your methodology of not using a fine-tuning approach and your insights are very interesting. Without a dedicated pre-trained model, we also noticed quite unstable gradients when training on VA only. As known from the literature, keep in mind that such training can require far more than 25 epochs to achieve similar results, and different architectures in particular behave differently.
Best regards
Not using the pretrained model from AffectNet8_Swin_Combined, the code for AffectNet8_Swin_VA is as follows:
import torch
from torchvision import models

MODEL = models.swin_v2_t(weights="DEFAULT")  # ImageNet-pretrained Swin V2 Tiny
MODEL.head = torch.nn.Linear(in_features=768, out_features=10, bias=True)  # replace the classification head
With NUM_EPOCHS = 25, the performance only reaches a CCC of 0.51.
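For reference, the CCC values above can be computed with the standard concordance correlation coefficient formula. A minimal sketch (the function name `ccc` and the tensor-based implementation are my own, not taken from this repository):

```python
import torch

def ccc(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Concordance Correlation Coefficient (Lin, 1989), commonly used
    # to evaluate valence/arousal regression on AffectNet.
    pred_mean, target_mean = pred.mean(), target.mean()
    covar = ((pred - pred_mean) * (target - target_mean)).mean()
    pred_var = pred.var(unbiased=False)
    target_var = target.var(unbiased=False)
    return 2 * covar / (pred_var + target_var + (pred_mean - target_mean) ** 2)
```

A perfect prediction yields a CCC of 1.0, so values such as 0.75 vs. 0.51 directly reflect how closely the predicted valence/arousal tracks the annotations.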