-
Since the current documentation lacks some important details and might make using foolbox harder than it has to be, we should improve it.
Let us collect suggestions for what to improve by…
-
**Is your feature request related to a problem? Please describe.**
What Doesn’t Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors
https://arxiv.org/pdf/2102.13624.pdf…
-
Hi, I need to generate my own samples. Can you tell me how to do that?
-
## Paper link
https://arxiv.org/pdf/1511.04508.pdf
## Summary
A paper that uses knowledge distillation as a defense against adversarial examples.
## Key idea
The student model uses the same architecture as the teacher model; both when training the teacher and during knowledge distillation, the outputs are passed through a softmax with temperature T before the cross-en…
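The temperature-T softmax mentioned above is the core mechanism of defensive distillation: dividing the logits by a large T before the softmax produces softened target distributions for the student. A minimal sketch (the function name and toy logits are illustrative, not from the paper's code):

```python
import torch
import torch.nn.functional as F

def softened_targets(logits: torch.Tensor, T: float) -> torch.Tensor:
    """Softmax with temperature T: larger T gives a flatter distribution."""
    return F.softmax(logits / T, dim=-1)

# Toy logits for a 3-class problem.
logits = torch.tensor([[2.0, 1.0, 0.1]])
hard = softened_targets(logits, T=1.0)   # ordinary softmax
soft = softened_targets(logits, T=20.0)  # much flatter distribution
```

At T=1 this reduces to the ordinary softmax; the paper trains both teacher and student at a high T, which smooths the loss surface the attacker exploits.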
-
# General Comments
At the moment, your article opens with a short section giving a flavor of the content of the article:
![image](https://user-images.githubusercontent.com/61658/31201969-834b044…
colah updated
6 years ago
-
It seems that `gt = img_obj['TrueLabel'] - 1`.
But why does "attack.py" use **gt** to generate adversarial examples, while "verify.py" uses **gt + 1** to compute accuracy?
## **attack.py**
loss = F.cross_…
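A plausible reading of the `-1`/`+1` pair above is an indexing mismatch: the dataset's `TrueLabel` is 1-indexed while the model's class indices are 0-indexed, so the attack subtracts one for the loss and the verifier adds one back before comparing. A hypothetical sketch of that round-trip (the variable names here are illustrative):

```python
import torch

# Hypothetical example: annotations store TrueLabel in 1..1000,
# while PyTorch class indices run 0..999.
true_label_1indexed = 208          # label as stored in the annotation file
gt = true_label_1indexed - 1       # 0-indexed target for the loss in attack.py

logits = torch.zeros(1, 1000)
logits[0, gt] = 10.0               # pretend the model predicts class `gt`

pred_0indexed = logits.argmax(dim=1).item()
pred_1indexed = pred_0indexed + 1  # convert back before comparing to TrueLabel
```

Under this assumption the two scripts are consistent; using `gt` in one and `gt + 1` in the other is just converting between the two indexing conventions.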
-
If I just want to train the SCPN model, I only need to preprocess the para-nmt dataset. But what if I want to use SCPN to generate syntactically adversarial examples for a downstream task? Should I prep…
-
### Bug description
I expected `manual_backward` and `.backward` to perform backward propagation in the same way, but when I use `self.manual_backward` it results in a number of unused parameters. If I u…
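One common source of "unused parameters" reports is independent of Lightning's `manual_backward`: any parameter that does not participate in the loss receives no gradient from `backward()` at all, which distributed wrappers then flag. A plain-PyTorch sketch of that effect (the `TwoHead` module is a made-up illustration, not from the bug report):

```python
import torch
import torch.nn as nn

class TwoHead(nn.Module):
    """Two heads, but only one contributes to the loss below."""
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(4, 2)
        self.unused = nn.Linear(4, 2)

    def forward(self, x):
        return self.used(x)  # self.unused never participates

model = TwoHead()
loss = model(torch.randn(3, 4)).sum()
loss.backward()

used_has_grad = model.used.weight.grad is not None
unused_has_grad = model.unused.weight.grad is not None  # stays None
```

If the same model produces the warning only under `manual_backward`, comparing which parameters end up with `grad is None` after each call is a quick way to narrow down the difference.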
-
This seems like a very important finding mentioned in your [blog](https://huggingface.co/blog/leaderboard-decodingtrust) and something deserving of further exposition.
Submitting your paper to Gemi…
-
Hi~, do you use the 500k-TI set as in-distribution data when using 500K as OOD?