Closed HashmatShadab closed 2 months ago
Hi, could you please clarify what you mean by "results for ZS adversarial robustness"? I am not sure what distinction you are drawing between "ZS adversarial performance" and "results for ZS adversarial robustness."
What I mean by zero-shot adversarial robustness is the setting described in the paper "Understanding Zero-Shot Adversarial Robustness for Large-Scale Models": CLIP is adversarially fine-tuned on ImageNet, and the robust CLIP is then evaluated on adversarial examples crafted on downstream datasets on which it has not been adversarially trained.
If I understand correctly, you are referring to performing adversarial training (AT) on CLIP using ImageNet and then testing robustness on adversarial versions of the downstream datasets' test sets. This scenario is outside the scope of our paper because that setup is more stringent: it requires the model not to use the training sets of the downstream datasets for fine-tuning. In fact, you can observe that the results of TeCoA are much lower compared to AdvPT.
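For concreteness, the evaluation protocol discussed above (a model adversarially fine-tuned on one dataset is attacked on a different downstream test set it never saw during fine-tuning) can be sketched roughly as below. This is an illustrative toy, not code from either paper: a frozen linear classifier stands in for CLIP's zero-shot head, and `pgd_attack` is a hypothetical name for a standard L-infinity PGD attack.

```python
# Toy sketch of zero-shot adversarial robustness evaluation:
# attack a *frozen* classifier on a downstream test set it was
# never fine-tuned on. A linear map stands in for CLIP.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pgd_attack(W, x, y, eps=0.1, alpha=0.02, steps=10):
    """L_inf PGD: maximize cross-entropy of the frozen classifier W."""
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(x_adv @ W)                     # (N, C) class probabilities
        onehot = np.eye(W.shape[1])[y]
        grad = (p - onehot) @ W.T                  # d(cross-entropy)/d(x_adv)
        x_adv = x_adv + alpha * np.sign(grad)      # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back to eps-ball
    return x_adv

# "Downstream" test set: the frozen classifier was never trained on it.
W = rng.standard_normal((8, 3))                    # frozen zero-shot head
x = rng.standard_normal((64, 8))                   # test features
y = (x @ W).argmax(axis=1)                         # labels the clean model predicts

x_adv = pgd_attack(W, x, y)
clean_acc = ((x @ W).argmax(1) == y).mean()
robust_acc = ((x_adv @ W).argmax(1) == y).mean()   # typically <= clean_acc
print(clean_acc, robust_acc)
```

The key point of the setup is that the attack is crafted against the model on data it never used for (adversarial) fine-tuning; methods that tune on the downstream training set face an easier task.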
Thank you for clarifying.
Regarding the comparison with vanilla CLIP in Table 1: is vanilla CLIP also fine-tuned on the downstream datasets, or is it just the pretrained CLIP model?
It is the pretrained CLIP model. If you are looking for comparisons with other methods that also utilize the downstream datasets, you may refer to Figure 3 (CoOp) and the linear-probe results (in the camera-ready version).
Hi, thanks for sharing your work. I just need some clarity on the experiments behind the results reported in Table 1.
So, if I understand correctly, the results for ZS adversarial robustness (Flowers, Pets, ...) are not evaluated?