UCSC-VLAA / AdvXL

[CVPR 2024] This repository includes the official implementation of our paper "Revisiting Adversarial Training at Scale"

Regarding Pretraining and Fine-tuning? #2

Open HashmatShadab opened 2 months ago

HashmatShadab commented 2 months ago

In Tables 3 & 4, is the same dataset used during pre-training and fine-tuning? Or does the fine-tuning only happen on the ImageNet-1k dataset?

zw615 commented 2 months ago

In Tables 2-4, the large-size strong-attack fine-tuning happens only on ImageNet-1K, while the pre-training dataset varies from ImageNet-1K to DataComp-1B.
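Conceptually, the fine-tuning stage is adversarial training on ImageNet-1K with a stronger attack. A minimal sketch of a single step, assuming an L-inf PGD attack (this is illustrative, not the exact AdvXL training code; the epsilon, step size, and step count are placeholders):

```python
# Illustrative sketch of one "strong-attack" adversarial fine-tuning step.
# Hyper-parameters below are assumptions, not the paper's exact recipe.
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=4/255, alpha=1/255, steps=3):
    """Craft L-inf PGD adversarial examples around the clean images."""
    adv = images.clone().detach()
    adv = (adv + torch.empty_like(adv).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)  # project back into the eps-ball
        adv = adv.clamp(0, 1)
    return adv.detach()

def adversarial_finetune_step(model, optimizer, images, labels):
    """One fine-tuning update computed on adversarial examples only."""
    model.eval()                       # freeze stats while crafting the attack
    adv = pgd_attack(model, images, labels)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```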

HashmatShadab commented 2 months ago

[screenshot of the paper's results table]

So in the above experiments, pre-training is done on LAION for ViT-L and DataComp for ViT-H? After that, fine-tuning is done on ImageNet-1k?

If it's the case, will the pretraining weights be made available?

zw615 commented 2 months ago

Yes, your understanding is correct.

For the pre-trained weights, we have not really looked into that yet. Are you trying to fine-tune them on your own downstream tasks? I'll take a look once I have the time.

HashmatShadab commented 2 months ago

I would be interested in exploring the robustness of the pre-trained models themselves, not specifically fine-tuning. Whenever you get the time, please let me know when the pre-training weights will be released.
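For context, what I have in mind is roughly the sketch below: load the released backbone and measure its accuracy under an L-inf attack on a few ImageNet-val batches. The model name, checkpoint filename, and attack settings here are placeholders on my side, not actual release artifacts.

```python
# Rough robustness probe, assuming the weights load into a timm ViT.
# 'advxl_vit_l_laion_pretrained.pth' and the model name are hypothetical.
import timm
import torch
import torchattacks
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

model = timm.create_model("vit_large_patch16_224", num_classes=1000)
ckpt = torch.load("advxl_vit_l_laion_pretrained.pth", map_location="cpu")  # hypothetical path
model.load_state_dict(ckpt, strict=False)
model = model.to(device).eval()

# L-inf PGD as a quick first check; AutoAttack would be the more rigorous option.
attack = torchattacks.PGD(model, eps=4/255, alpha=1/255, steps=10, random_start=True)

def robust_accuracy(loader: DataLoader, max_batches: int = 10) -> float:
    """Accuracy on PGD adversarial examples over a few validation batches."""
    correct, total = 0, 0
    for i, (images, labels) in enumerate(loader):
        if i >= max_batches:
            break
        images, labels = images.to(device), labels.to(device)
        adv = attack(images, labels)
        with torch.no_grad():
            correct += (model(adv).argmax(1) == labels).sum().item()
        total += labels.numel()
    return correct / total
```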

HashmatShadab commented 1 month ago

Hi @zw615 , can you please provide any update on this?

HashmatShadab commented 2 weeks ago

Hi, can you please provide an update on this?