-
https://devblogs.nvidia.com/parallelforall/photo-editing-generative-adversarial-networks-2/
Implementation based on NVIDIA DIGITS
https://github.com/gheinrich/DIGITS-GAN/blob/DIGITS-GAN-v0.1/examples/gan/README.md
-
Using the data provided, it is not possible to compare the efficacy of different attacks across models. Imagine we would like to decide whether LLC or ILLC was the stronger attack on the CIFAR-10 data…
-
@cgreene suggested that we improve our adversarial training definition in the [imaging applications](https://greenelab.github.io/deep-review/#imaging-applications-in-healthcare) section. See Ian Good…
-
It is a basic observation that when given strictly more power, the adversary should never do worse. However, in Table VII the paper reports that MNIST adversarial examples with their l_infinity norm c…
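The monotonicity claim can be checked concretely on a toy model. The sketch below (all names and the linear-logistic setup are illustrative assumptions, not taken from the paper) computes the worst-case l_infinity attack in closed form for a linear classifier and confirms that the worst-case loss can only grow as the attacker's budget eps grows, since a larger l_infinity ball contains every smaller one.

```python
import numpy as np

# Toy linear classifier: logit z = w.x + b, label y in {0, 1}.
# For a linear model the logistic loss is monotone in z, so the
# worst-case perturbation inside an l_infinity ball of radius eps
# has the closed form delta = -eps * sign(w) for y = 1 (and the
# opposite sign for y = 0); no iterative attack is needed.

def logistic_loss(w, b, x, y):
    z = w @ x + b
    s = z if y == 1 else -z          # margin for the true label
    return np.log1p(np.exp(-s))      # log(1 + exp(-margin))

def worst_case_loss(w, b, x, y, eps):
    sign = -1.0 if y == 1 else 1.0   # push the margin down
    delta = sign * eps * np.sign(w)  # worst point in the l_inf ball
    return logistic_loss(w, b, x + delta, y)

rng = np.random.default_rng(0)
w = rng.normal(size=5)
x = rng.normal(size=5)
b, y = 0.1, 1

losses = [worst_case_loss(w, b, x, y, eps) for eps in (0.0, 0.1, 0.3)]
# A strictly larger budget can never hurt the adversary:
assert losses[0] <= losses[1] <= losses[2]
```

Under this view, a reported result where the larger-budget attack performs worse can only reflect a suboptimal attack (e.g. an optimizer that failed to find the better point inside the larger ball), not a property of the threat model itself.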
-
## 0. Paper
[Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning](https://arxiv.org/abs/1712.02051v2)
Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi…
-
Hello. Thank you for your excellent work. I have some questions about the statements in the paper and hope to receive your answers. In Table 3, you compared the differences between your method and other…
-
Here are some ideas and potential areas of research for Tensort:
- Model analysis and interpretability: Develop new techniques for analyzing and understanding what large language models have learned …
-
Hi,
first of all, I want to say that I enjoyed reading the paper and I think it’s a useful collection of best practices. I also like the “open” character of the paper, so I thought I would leave so…
-
Keywords: Detection, Adversarial Training
URL: https://arxiv.org/pdf/1710.03337.pdf
Interest: 3
#TotallyLoveThisKindOfResearch #TheAuthorHasBeenWorkingOnThisLineForAWhile #CouldThisBeTheCVPR2018Version?
-
Thanks for your excellent work!
As a junior student, I have learned a lot from it, and I would like to understand it in a little more depth.
In your source code file "white_attack.py" and "evaluate_ad…