-
Hi, thanks for your benchmark work and the open source code. I have a question about the AT part of the code.
In your paper, the section on adversarial training references the paper 'Extending adv…
-
A key work in this area is https://github.com/YingzhenLi/Dropout_BBalpha
One difficulty with implementing this method here is that it requires model gradients. Either we could build a task that supports m…
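To make the gradient requirement concrete, here is a minimal, self-contained sketch (the tiny logistic model and all names are hypothetical stand-ins, not the repository's API): an FGSM-style step against an MC-dropout model needs the gradient of the *averaged* prediction with respect to the input, which is exactly what a black-box task interface does not expose.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny model: logistic regression with MC dropout on the input.
# Stand-in for a Dropout_BBalpha-style Bayesian model; only illustrates why
# an attack needs d(mean prediction)/d(input).
w = rng.normal(size=8)
b = 0.1
keep = 0.8  # dropout keep probability

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mc_forward_and_grad(x, T=200):
    """Average the prediction over T dropout masks; also return the exact
    gradient of that average with respect to the input x."""
    probs, grads = [], []
    for _ in range(T):
        m = (rng.random(x.shape) < keep) / keep   # inverted-dropout mask
        p = sigmoid(w @ (x * m) + b)
        probs.append(p)
        grads.append(p * (1 - p) * w * m)         # d p / d x under this mask
    return np.mean(probs), np.mean(grads, axis=0)

x = rng.normal(size=8)
p_clean, g = mc_forward_and_grad(x)
x_adv = x + 0.5 * np.sign(g)   # FGSM-style step along the input gradient
p_adv, _ = mc_forward_and_grad(x_adv)
```

The sign of the averaged gradient drives the perturbation, so `p_adv` moves above `p_clean`; without gradient access one would have to fall back on estimating this direction by finite differences or a surrogate model.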
-
https://arxiv.org/pdf/1801.02610.pdf
-
## Paper link
- [arXiv](https://arxiv.org/abs/1912.11969)
## Publication date (yyyy/mm/dd)
2019/12/27
## Overview
## TeX
```
% 2019/12/27
@inproceedings{
zheng2020efficient,
title={Efficient Adversar…
-
Thank you for sharing your code! The idea of the paper is interesting and the results are competitive. I found that your method demonstrates strong performance in detecting the adversarial examples generated by y…
-
## In one sentence
A study on generating "natural" adversarial examples. "Natural" here means the perturbation is not mere noise but a plausible variation that could occur in real data. Whereas noise is usually added directly to the data, this work instead searches the latent space of a GAN for a representation that is close to the input yet induces misclassification. This also makes it possible to generate examples for natural language.
### Paper link
https://arxiv.or…
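The latent-space search described above can be sketched roughly as follows. This is a toy stand-in under loud assumptions: the linear "generator" `G`, the classifier `f`, and the incremental random search are simplified placeholders for the paper's trained GAN, inverter, and search procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: a fixed nonlinear "generator" G and a linear
# classifier f. In the actual method a trained GAN generator and an inverter
# play these roles; only the latent-space search itself is illustrated here.
A = rng.normal(size=(16, 4))   # generator weights: latent z (4-d) -> data x (16-d)
wc = rng.normal(size=16)       # classifier weights

def G(z):
    """Generator: map a latent code onto the (toy) data manifold."""
    return np.tanh(A @ z)

def f(x):
    """Classifier: return a binary predicted label."""
    return int(wc @ x > 0)

z0 = rng.normal(size=4)        # latent code of the input (from an inverter)
y0 = f(G(z0))                  # label on the clean reconstruction

# Incremental random search: widen the search radius until a small latent
# perturbation flips the classifier while the sample stays on the
# generator's manifold (hence "natural" rather than raw pixel noise).
best = None
for radius in np.linspace(0.1, 3.0, 30):
    for _ in range(200):
        z = z0 + radius * rng.normal(size=4)
        if f(G(z)) != y0:
            best = z
            break
    if best is not None:
        break
```

Because candidates are always decoded through `G`, every adversarial sample remains a plausible point in data space, which is the core difference from adding noise directly to the input.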
-
### Link to the paper
[[arXiv:1911.09665] Adversarial Examples Improve Image Recognition](https://arxiv.org/abs/1911.09665)
### Authors and affiliations
Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, Alan Yuille, Q…
-
Hi, thanks for your great work.
I am confused about robustness to noisy interactions in this paper.
> Towards this end, we contaminate the training set by adding a certain proportion of adversari…
-
Hi Indu,
Thank you for your wonderful work! This work is quite interesting to me and I think the results are amazing. However, I was confused when I tried applying this method to my own dataset. I …
-
In Section 5.3, in the paragraph _Robustness Measurement_, you define the mAP ratio as "the ratio of IoU on adversarial examples to that on clean point cloud over the whole validation set". Isn't it the …