Open HashmatShadab opened 2 months ago
In Tables 2-4, the large-size strong-attack fine-tuning happens only on ImageNet-1K, while the pre-training dataset varies from ImageNet-1K to DataComp-1B.
So in the above experiments, pre-training is done on LAION for ViT-L and on DataComp for ViT-H, and after that fine-tuning is done on ImageNet-1K?
If it's the case, will the pretraining weights be made available?
Yes, your understanding is correct.
For the pre-trained weights, we have not really looked into that. Are you trying to fine-tune them on your own downstream tasks? I'll take a look once I have time.
I would be interested in exploring the pre-training robustness, not specifically fine-tuning. Whenever you get the time, please let me know when the pre-training weights will be released.
Hi @zw615 , can you please provide any update on this?
Hi, can you please provide an update on this?
In Tables 3 & 4, is the same dataset used during pre-training and fine-tuning? Or does the fine-tuning only happen on the ImageNet-1K dataset?