-
ICLR17: https://arxiv.org/abs/1605.07725
Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learnin…
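For context, a minimal sketch of the adversarial-training loss the abstract describes (in PyTorch; the paper perturbs word embeddings, and all names here are illustrative, not from the authors' code):

```python
import torch
import torch.nn.functional as F

def adversarial_loss(model, embeddings, labels, epsilon=1.0):
    # Gradient of the loss w.r.t. the (word) embeddings.
    embeddings = embeddings.detach().requires_grad_(True)
    loss = F.cross_entropy(model(embeddings), labels)
    grad, = torch.autograd.grad(loss, embeddings)
    # L2-normalized perturbation in the loss-increasing direction:
    # r_adv = epsilon * g / ||g||_2 (per example/token).
    r_adv = epsilon * grad / (grad.norm(p=2, dim=-1, keepdim=True) + 1e-12)
    # Train on the perturbed embeddings; no gradient flows through r_adv.
    return F.cross_entropy(model(embeddings + r_adv.detach()), labels)
```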
-
In the original paper, line 9 of Algorithm 1 iterates over the images in C_i to optimize G. However, line 60 of main.py (function train_gnet) seems to iterate over all images in the dataset.
Is there somethin…
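To make the question concrete, here is how I read the two versions (a sketch with placeholder names; `class_subsets`, `dataset`, and `update_G` are not from the repo):

```python
# Algorithm 1, line 9, as written in the paper: optimize G only over the
# images in the current subset C_i.
for image in class_subsets[i]:   # images in C_i
    update_G(image)

# main.py, line 60 (train_gnet), as I read the code: the loop seems to
# run over every image in the dataset instead.
for image in dataset:            # all images
    update_G(image)
```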
-
Config: olid.gin
Input: sentence level feature importance for id 1766
Parsed: filter id 1766 and nlpattribute sentence [e]
Traceback:
[2023-06-18 10:43:13,641] INFO in flask_app: Traceback getti…
-
**Describe the bug**
For all functions in `adversarial-robustness-toolbox/blob/main/art/attacks/poisoning/perturbations/image_perturbations.py`, the height and width dimensions are swapped. For [Nump…
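A minimal illustration of the bug class (a hypothetical helper, not the toolbox's code): NumPy images are indexed rows-first, so axis 0 is height and axis 1 is width.

```python
import numpy as np

def insert_patch(img, patch, x, y):
    """Hypothetical example: x is horizontal (width), y is vertical (height).
    Correct indexing is img[y, x] order; writing img[x, y] swaps height and
    width, and the mistake goes unnoticed on square images."""
    h, w = patch.shape
    img[y:y + h, x:x + w] = patch   # rows (y/height) first, then columns (x/width)
    return img

img = np.zeros((480, 640))          # shape is (height, width) = (480, 640)
# Would be misplaced (or raise on out-of-range rows) if the axes were swapped:
img = insert_patch(img, np.ones((8, 8)), x=600, y=100)
```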
-
Zero-shot:
1. [Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/pdf/2112.07899.pdf) (Ni et al., EMNLP 2022, GTR)
2. GPL: Generative pseudo labeling for unsuperv…
-
Thanks for your very interesting paper. It is good work. However, I found that you use the eigenvalues to calculate the effective rank, instead of the singular values as in the original paper [1]. I am not sure whet…
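For reference, the definition in [1] is based on the singular value distribution; a minimal sketch of that computation (not your implementation):

```python
import numpy as np

def effective_rank(A, eps=1e-12):
    """Effective rank per Roy & Vetterli [1]: the exponential of the Shannon
    entropy of the normalized *singular value* distribution."""
    s = np.linalg.svd(A, compute_uv=False)        # singular values, not eigenvalues
    p = s / (s.sum() + eps)
    return np.exp(-(p * np.log(p + eps)).sum())

# For a symmetric PSD matrix the eigenvalues equal the singular values, so
# the two versions agree; for a general matrix the eigenvalues can be
# negative or complex and the results differ.
```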
-
Hello,
I was trying to use NSL to implement adversarial training on my custom model, so I followed the default steps in the tutorial video, which worked like a charm. While studying the code, I notic…
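For anyone following along, this is roughly the setup from the tutorial, condensed (the model architecture and the feature name 'feature' are placeholders standing in for my custom model):

```python
import neural_structured_learning as nsl
import tensorflow as tf

# Placeholder base model; NSL expects training data as dicts mapping
# feature/label names to tensors.
base_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28), name='feature'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
adv_model = nsl.keras.AdversarialRegularization(base_model, adv_config=adv_config)
adv_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
# adv_model.fit({'feature': train_x, 'label': train_y}, batch_size=32, epochs=5)
```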
-
https://github.com/JunfengGo/AEVA-Blackbox-Backdoor-Detection-main/blob/9e2fc44573f665097bd555abbc3ffb1ef06fee0c/outlier.py#L18-L38
Hi,
Thanks for sharing this wonderful work!
I am trying to …
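While waiting for the authors, here is the generic MAD-based anomaly index that outlier-detection code of this shape usually implements (a sketch for discussion only; I have not verified that it matches outlier.py):

```python
import numpy as np

def anomaly_index(scores):
    """MAD-based anomaly index commonly used in backdoor detection: a class
    whose score deviates from the median by more than ~2 MADs (with the
    1.4826 consistency constant) is typically flagged as an outlier."""
    med = np.median(scores)
    mad = 1.4826 * np.median(np.abs(scores - med))
    return np.abs(scores - med) / (mad + 1e-12)
```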
-
Hello,
I have followed the adversarial training tutorial, but now I face an issue when testing the robustness of the models. In the "Robustness under Adversarial perturbations" section, both t…
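For concreteness, the evaluation flow from that section as I understand it (a sketch; `adv_model` is the `nsl.keras.AdversarialRegularization` wrapper, `eval_model` a plain Keras model, and 'feature'/'label' are placeholder key names):

```python
import tensorflow as tf

def adversarial_accuracy(adv_model, eval_model, dataset):
    """Accuracy of eval_model on adversarially perturbed batches, where each
    batch is a dict like {'feature': x, 'label': y}."""
    metric = tf.keras.metrics.SparseCategoricalAccuracy()
    for batch in dataset:
        perturbed = adv_model.perturb_on_batch(batch)  # adversarial neighbors
        # Clip back to the valid input range after perturbation.
        perturbed['feature'] = tf.clip_by_value(perturbed['feature'], 0.0, 1.0)
        metric.update_state(perturbed['label'], eval_model(perturbed['feature']))
    return metric.result().numpy()
```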