-
1. NN models behave largely linearly
2. Introduces the FGSM technique
3. Introduces the adversarial training technique
4. Adversarial examples transfer across different architectures
5. Non-linear networks such as radial basis function networks can defend against adversarial attacks to some extent
…
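The FGSM update described in item 2 can be sketched in a few lines of numpy. Here `grad` stands for the loss gradient with respect to the input; the toy linear model below is purely illustrative (for a linear score w·x, the input gradient is proportional to w):

```python
import numpy as np

def fgsm(x, grad, eps=0.1, clip_min=0.0, clip_max=1.0):
    """Fast Gradient Sign Method: take one step of size eps in the
    direction of the sign of the loss gradient w.r.t. the input,
    then clip back into the valid data range."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, clip_min, clip_max)

# Toy linear model: the input gradient of the loss is proportional to w,
# so each coordinate moves by +/- eps (or stays put where the gradient is 0).
x = np.array([0.5, 0.5, 0.5])
w = np.array([1.0, -2.0, 0.0])
x_adv = fgsm(x, grad=w, eps=0.1)
```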
-
As far as I understand, targeted misclassification is not yet implemented in FGSM (see [here](https://github.com/bethgelab/foolbox/blob/master/foolbox/attacks/fast_gradient_method.py#L90)). Do you pla…
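For reference, a targeted variant of FGSM only flips the sign of the update: instead of ascending the loss of the true class, it descends the loss computed against the desired target class. A minimal numpy sketch (this is not foolbox's API, just the underlying idea):

```python
import numpy as np

def targeted_fgsm(x, grad_target_loss, eps=0.1, clip_min=0.0, clip_max=1.0):
    """Targeted FGSM: move *against* the gradient of the loss taken
    w.r.t. the target label, i.e. gradient descent toward the target
    class, then clip into the valid range."""
    x_adv = x - eps * np.sign(grad_target_loss)
    return np.clip(x_adv, clip_min, clip_max)

# Compared with untargeted FGSM (x + eps * sign(grad)), only the sign differs.
x_adv = targeted_fgsm(np.array([0.5, 0.5]), np.array([1.0, -1.0]), eps=0.1)
```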
-
## Overview
* Based on the official tutorials commit https://github.com/pytorch/tutorials/commit/bc9cac0a77512136d91d717e3c8f1e83165b196d .
* As of version 1.6, new prototype tutorials were added.
* The tutorials that are currently not reachable directly from the main menu are …
-
I encounter the assertion `assert not strict or in_bounds` in foolbox, using Keras and CIFAR-10.
I can run it locally without this error, but when I run it on the cluster, it happens.
Finally, I found that it is caused …
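That assertion fires when the input pixels fall outside the bounds declared when wrapping the model, which commonly happens when preprocessing differs between environments. A rough numpy analogue of the check (the function name is illustrative, not foolbox's internal API):

```python
import numpy as np

def check_bounds(images, bounds):
    """Rough analogue of foolbox's strict bounds check: every pixel
    must lie within the (min, max) bounds declared for the model."""
    lo, hi = bounds
    return bool(images.min() >= lo and images.max() <= hi)

# CIFAR-10 pixels loaded as floats in [0, 1]:
x = np.array([[0.0, 0.5], [0.25, 1.0]])
check_bounds(x, bounds=(0.0, 1.0))        # True: data matches the declared bounds
check_bounds(x - 0.5, bounds=(0.0, 1.0))  # False: mean-subtracted data falls outside
```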
-
Your paper is interesting.
I have a question about your code. In your paper, you state that β = 2.5 and γ = 2 for DMPI-FGSM.
But I found that the results are poor when running your code with these parameter…
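I don't know the exact DMPI-FGSM update, but since the name suggests a momentum-iterative FGSM variant, here is a minimal MI-FGSM step (Dong et al., 2018) as a baseline for comparison; the paper's β and γ would enter the variant's own update, not this sketch:

```python
import numpy as np

def mi_fgsm_step(x, grad, g_prev, mu=1.0, alpha=1.0 / 255):
    """One step of momentum-iterative FGSM (MI-FGSM):
    accumulate the L1-normalized gradient into a momentum buffer g,
    then move alpha along the sign of g."""
    g = mu * g_prev + grad / np.sum(np.abs(grad))
    x_adv = x + alpha * np.sign(g)
    return x_adv, g

# Single step from a zero momentum buffer.
x1, g1 = mi_fgsm_step(np.array([0.5]), np.array([2.0]), np.zeros(1), mu=1.0, alpha=0.1)
```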
-
**Please turn in your task 2 (refine) here, by providing:**
* a link to your repo.
* team name and team members.
* the option you worked for the task 2.
* notes that one needs to know, in order to…
-
**Please turn in your task 1 (refine) here, by providing:**
* a link to your repo.
* team name and team members.
* the option you worked for the task 1.
* notes that one needs to know, in order to…
-
Hi Dharma,
Your work and code look amazing to me, so I was trying to reproduce your experiment. I can basically run the model training end to end, but regarding the detailed parameter optimization and valida…
-
When I run the exact script from adversarial_training_mnist.ipynb under notebooks, I get higher model accuracy on adversarial samples than expected.
![image](https://user-images.githubusercontent.…
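I can't tell the cause from the screenshot, but one hypothesis worth checking is an epsilon/scale mismatch between the attack and the data: an epsilon tuned for one pixel range becomes a near no-op on another, which inflates adversarial accuracy. A sketch of the comparison (the values are illustrative):

```python
import numpy as np

def effective_eps(eps, data_range):
    """Express an attack epsilon as a fraction of the data range, so an
    eps tuned for [0, 255] pixels can be compared with [0, 1] data."""
    return eps / data_range

# eps = 0.3 on [0, 1] MNIST perturbs 30% of the range (a strong attack);
# the same 0.3 applied to [0, 255] data is ~0.1% of the range (almost
# nothing), which would show up as unexpectedly high adversarial accuracy.
effective_eps(0.3, 1.0)    # strong
effective_eps(0.3, 255.0)  # negligible
```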
-
Adversarial Machine Learning at Scale
1. Successfully applied adversarial training to the Inception V3 model (a large model), making it robust to FGSM and other one-step methods.
2. Different adversarial examples have different transferability between models …
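The adversarial training objective in that line of work mixes clean and adversarial losses over each batch. A simplified numpy sketch (the paper weights a k-of-m batch split; `lam` here collapses that into a single scalar mix, which is an assumption of this sketch):

```python
import numpy as np

def mixed_adv_training_loss(loss_clean, loss_adv, lam=0.3):
    """Simplified one-step adversarial training objective: a weighted
    average of per-example clean and adversarial losses. lam controls
    how much weight the adversarial examples receive."""
    loss_clean = np.asarray(loss_clean, dtype=float)
    loss_adv = np.asarray(loss_adv, dtype=float)
    return float((1.0 - lam) * loss_clean.mean() + lam * loss_adv.mean())

# Equal weighting of a clean batch and its adversarial counterpart.
total = mixed_adv_training_loss([1.0, 1.0], [3.0, 3.0], lam=0.5)
```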