Closed: cuge1995 closed this issue 2 years ago
Hi, the loss of these attacks is cross-entropy loss.
But COCO2017 is not a classification dataset, so how is the cross-entropy loss computed?
The motivation of our paper is cross-task transferability. The surrogate models we used are all classification models.
Thanks for the quick reply, but for a COCO image with multiple objects, what is the ground-truth label when calculating the cross-entropy loss?
The label is the classification result of the image, i.e., the surrogate classifier's own prediction on it.
Thank you for the question. Please note that the baseline attacks are applied to the surrogates, which are pretrained classifiers and are independent of both the object-detection victim models and the COCO dataset. We assume that the well-trained backbones of detectors and classifiers share similar feature representations. Thus, at inference time, we first feed a benign image to the surrogate classifier, take its predicted class as the ground truth, and apply the adversarial attacks based on CE loss. Then, we evaluate the generated adversarial examples on the victim models, which are object detectors.
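The procedure above (predict on the benign image, treat the prediction as the label, then run a CE-loss attack such as PGD) can be sketched roughly as follows. This is a minimal NumPy illustration with a hypothetical toy linear classifier standing in for the pretrained surrogate; the function names and hyperparameters are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pgd_attack(x, W, b, eps=0.1, alpha=0.02, steps=10):
    """PGD on a toy linear classifier W @ x + b.

    The CE loss is maximized w.r.t. the classifier's OWN prediction
    on the benign input -- no dataset labels are needed, mirroring
    the label-free setup described above.
    """
    logits = W @ x + b
    y = int(np.argmax(logits))          # predicted class used as pseudo ground truth
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv + b)
        onehot = np.zeros_like(p)
        onehot[y] = 1.0
        # gradient of CE(softmax(W x + b), y) w.r.t. the input: W^T (p - onehot)
        grad = W.T @ (p - onehot)
        x_adv = x_adv + alpha * np.sign(grad)      # ascend the CE loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # project back into the eps-ball
    return x_adv, y
```

The resulting `x_adv` would then be passed to the victim detector for evaluation; MI-FGSM follows the same loop with an added momentum accumulator on the gradient.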
Thanks
Thanks for the great work. However, I can't find the details of how the PGD and MI-FGSM experiments in Fig. 3 are performed. What loss do those attacks use when attacking?