Closed: INF800 closed this issue 2 years ago
Hi. You can find links to the wandb runs on the model cards. Here is the one with the best performance: modelcard : logs
Here is a small report highlighting the impact of augmentations on the loss.
As you can see, we also have eval_loss > train_loss, which is unsurprising given the relatively small dataset. Also notice that in many cases the early-to-mid training checkpoints showed the best performance; the later ones suffered more from overfitting.
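The checkpoint-selection point above can be sketched in a few lines: given per-epoch eval losses, pick the checkpoint at the minimum rather than the final one. The loss values below are invented for illustration, not taken from our runs.

```python
# Hypothetical per-epoch eval losses showing the pattern described above:
# loss bottoms out mid-training, then rises again as the model overfits.
eval_losses = [0.92, 0.71, 0.63, 0.60, 0.66, 0.74, 0.85]

# Select the checkpoint (epoch index) with the lowest eval loss;
# min() returns the earliest index on ties.
best_epoch = min(range(len(eval_losses)), key=eval_losses.__getitem__)
print(best_epoch)  # → 3, not the last epoch
```

With a `Trainer`-style setup the same idea is usually expressed via `load_best_model_at_end` plus an eval-loss metric, but the manual version makes the selection rule explicit.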
On the other hand, it can also be concluded from our experiments that this kind of fine-tuning / "domain adaptation" is highly beneficial for zero-shot classification performance.
It might also be worth a shot to freeze some of the layers during fine-tuning, but we haven't done those experiments in this project.
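For anyone who wants to try the layer-freezing idea, a minimal PyTorch sketch looks like the following. The toy two-layer model is a stand-in only; the actual architecture and layer names from our runs are not part of this thread.

```python
import torch
from torch import nn

# Hypothetical stand-in for the fine-tuned network (not our actual model).
model = nn.Sequential(
    nn.Linear(16, 32),  # early layer: will be frozen
    nn.ReLU(),
    nn.Linear(32, 8),   # head: stays trainable
)

# Freeze everything first, then re-enable gradients on the head only.
for param in model.parameters():
    param.requires_grad = False
for param in model[2].parameters():
    param.requires_grad = True

# Only pass trainable parameters to the optimizer.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

trainable = sorted(n for n, p in model.named_parameters() if p.requires_grad)
print(trainable)  # only the head's weight and bias remain trainable
```

The same pattern applies to transformer backbones: iterate over the embedding and lower encoder layers, set `requires_grad = False`, and leave the upper layers and classification head trainable.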
Hi, I noticed that the evaluation loss was too high compared to the training loss in the training notebook. Could you please share the training logs for your best model(s)?