-
https://github.com/MadryLab/mnist_challenge
Hello, I am trying to attack MadryLab's defense strategy with your project (advGAN). However, I could not reproduce the 92.76% result mentioned in the article (…
-
Perhaps the one key factor that differentiates security (and adversarial robustness) from other general forms of robustness is the worst-case mindset from which we evaluate. This paper uses the mean t…
-
## Paper link
https://arxiv.org/pdf/1511.04508.pdf
## Summary
A paper that uses knowledge distillation as a defense against adversarial examples.
## Key idea of the method
The student model uses the same architecture as the teacher model; both when training the teacher and during knowledge distillation, the logits are passed through a softmax with temperature T before computing the cross-en…
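The temperature-T softmax the memo refers to can be sketched as follows (a minimal illustration of the general technique, not the paper's implementation; the logit values are made up):

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    # Divide logits by temperature T before the softmax.
    # Larger T yields a softer (higher-entropy) distribution,
    # which is what distillation trains the student against.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [8.0, 2.0, 1.0]
print(softmax_with_temperature(logits, T=1.0))   # sharply peaked
print(softmax_with_temperature(logits, T=20.0))  # much softer
```

At T=1 this reduces to the ordinary softmax; defensive distillation trains at a high T and deploys at T=1, which flattens the gradients an attacker relies on.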
-
This framework is designed to "systematically evaluate the existing adversarial attack and defense methods". The research community would be well served by such an analysis. When new defenses are prop…
-
I want to know how to use my own image dataset to obtain a model that defends against adversarial attacks. How should I change the code to train on my own dataset?
-
I wanted to ask how to verify the performance of the defense methods against adversarial samples; I didn't quite understand that part. Looking forward to your reply, thanks a million.
-
Security is all about *worst*-case guarantees. Despite this fact, the paper draws many of its inferences from *average*-case robustness.
This is fundamentally flawed.
If a defense…
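The reviewer's point can be illustrated with a toy computation (the attack names and accuracy numbers below are purely hypothetical): a defense's mean accuracy across attacks can look respectable even when its worst-case accuracy, which is what an adaptive attacker actually exploits, is near zero.

```python
# Hypothetical per-attack accuracies for one defense (illustrative only).
accuracy_per_attack = {
    "FGSM": 0.90,
    "PGD": 0.45,
    "CW": 0.10,
}

# Average-case robustness: what the paper's evaluation reports.
mean_robustness = sum(accuracy_per_attack.values()) / len(accuracy_per_attack)

# Worst-case robustness: what a security evaluation should report.
worst_case_robustness = min(accuracy_per_attack.values())

print(f"mean: {mean_robustness:.2f}")        # 0.48 -- looks acceptable
print(f"worst case: {worst_case_robustness:.2f}")  # 0.10 -- the real guarantee
```

An attacker simply picks the attack the defense is weakest against, so only the minimum is a meaningful security claim.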
-
> The discovery of adversarial examples has raised concerns about the practical deployment of deep learning systems. In this paper, we argue that the field of medicine may be uniquely susceptible to a…
-
Hi,
Thanks for your nice work in CVPR 2019. It's really interesting and provides strong results.
However, I found that the adversarial example generation process is not clearly described in either the paper…
-
## Paper link
https://arxiv.org/abs/1910.00470
## Publication date (yyyy/mm/dd)
2019/10/01
## Summary
Proposes a method for detecting AEs: samples whose feature representations are anomalous across different layers of the network are rejected. A new attack against this detection method is also proposed and evaluated. The proposed method was more accurate than existing methods.
## Notes
- The features used for rejection…