Closed ma-kjh closed 7 months ago
Hi @ma-kjh, for ImageNet we did indeed end up training on the whole dataset and reporting the best test accuracy, which is always the last checkpoint. For ImageNet we never observed an overfitting phenomenon where test accuracy was higher at some intermediate stage.
To reproduce the numbers, you can use the scripts in the README.
Thank you for your great work.
I have a question about splitting the ImageNet-1k dataset into train and validation sets.
In the paper, you mention an 80:20 train:val split, but the code does not split ImageNet.
How can I reproduce the FLYP ImageNet accuracy of 82.6% (OpenAI CLIP ViT-B/16)?
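For context, here is a minimal sketch of the kind of 80:20 split the paper describes. This is an assumption about how one might do it, not code from the FLYP repo; the function name and seed are hypothetical.

```python
import random

def split_train_val(indices, val_fraction=0.2, seed=0):
    """Deterministically split a list of sample indices into
    train/val index lists (hypothetical helper, not from FLYP)."""
    idx = list(indices)
    random.Random(seed).shuffle(idx)  # fixed seed for reproducibility
    n_val = int(len(idx) * val_fraction)
    return idx[n_val:], idx[:n_val]   # (train_indices, val_indices)

# e.g. for a dataset of 100 samples:
train_idx, val_idx = split_train_val(range(100))
```

The resulting index lists could then be wrapped with `torch.utils.data.Subset` to build the two loaders.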