-
Hello,
I would like to implement the [universal adversarial perturbations](https://arxiv.org/abs/1610.08401) algorithm in this library. This library was designed to compute perturbed images, but th…
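For reference, a minimal sketch of the algorithm's outer loop (Algorithm 1 in the paper): iterate over the data, and whenever the current universal perturbation `v` fails to fool a sample, compute a minimal per-sample perturbation (the paper uses DeepFool) and fold it into `v`, projecting back onto the norm ball. `predict` and `per_sample_attack` below are placeholder callables, not this library's API:

```python
import numpy as np

def universal_perturbation(x_train, predict, per_sample_attack,
                           eps=10.0, delta=0.2, max_iter=10):
    """Sketch of the UAP outer loop (Moosavi-Dezfooli et al., 2016).

    predict(x) -> predicted labels for a batch x.
    per_sample_attack(x) -> adversarial version of a single-sample batch x.
    eps bounds the L2 norm of the universal perturbation v; the loop
    stops once the fooling rate exceeds 1 - delta or max_iter is hit.
    """
    v = np.zeros_like(x_train[0])
    for _ in range(max_iter):
        for x in np.random.permutation(x_train):
            x = x[None]  # single-sample batch
            if predict(x + v)[0] == predict(x)[0]:
                # v does not fool this sample yet: find a minimal extra
                # perturbation that does, and fold it into v
                x_adv = per_sample_attack(x + v)
                v = v + (x_adv - (x + v))[0]
                # project v back onto the L2 ball of radius eps
                norm = np.linalg.norm(v)
                if norm > eps:
                    v *= eps / norm
        fooling_rate = np.mean(predict(x_train + v) != predict(x_train))
        if fooling_rate > 1 - delta:
            break
    return v
```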
-
https://yanghaozhang.com/2020/08/02/universal.html
-
# [ Adversarial Attack and Example ]
### 1. A general introduction to adversarial attacks
### 2. Papers to review
- [Intriguing properties of neural networks](https://arxiv.org/abs/1312.6199)
- [Explaining and Harne…
-
**Is your feature request related to a problem? Please describe.**
ART already includes a few speech recognition attacks, but it lacks attacks for audio classification.
**Describe the solution you'd like**
…
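In the meantime, ART's gradient-based evasion attacks are model-agnostic, so one can already point an existing attack such as FGSM at an audio classifier. A sketch with a toy PyTorch waveform model (the network, shapes, and data here are made up for illustration):

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Hypothetical 1-D CNN over fixed-length waveforms (16000 samples, 10 classes)
net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 10),
)

classifier = PyTorchClassifier(
    model=net,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 16000),
    nb_classes=10,
    clip_values=(-1.0, 1.0),  # waveform amplitude range
)

# Craft adversarial audio with an existing gradient-based attack
attack = FastGradientMethod(estimator=classifier, eps=0.01)
x = np.random.uniform(-1, 1, size=(4, 1, 16000)).astype(np.float32)
x_adv = attack.generate(x=x)
print("max perturbation:", np.abs(x_adv - x).max())
```

Dedicated audio attacks (e.g., psychoacoustic or over-the-air ones) would still need their own implementations; the sketch only shows that the generic attack interface already applies.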
-
Please post interesting papers here, ideally with a brief description of the technique and the results (improvement).
-
https://doi.org/10.1101/262501
> Recent advances have enabled gene expression profiling of single cells at lower cost. As more data is produced there is an increasing need to integrate diverse data…
-
First issue: the `scipy.misc` module has been deprecated since SciPy 1.0.0, and its `imread` and `imresize` functions are no longer available.
Fix: In the **prepare_imag…
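A sketch of the fix, assuming the helper has the usual `prepare_image(path, size)` shape (the name is cut off above, so the signature here is a guess). Pillow is the common drop-in replacement, since `scipy.misc`'s image helpers were thin wrappers around PIL:

```python
import numpy as np
from PIL import Image

def prepare_image(path, size=(224, 224)):
    """Replacement for the removed scipy.misc.imread / imresize calls.

    Old code (broken on recent SciPy):
        img = scipy.misc.imread(path)
        img = scipy.misc.imresize(img, size)
    """
    img = Image.open(path).convert("RGB")   # imread replacement
    img = img.resize(size, Image.BILINEAR)  # imresize replacement
    return np.asarray(img)
```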
-
After reading through the example, can I take it that you are training the model to latch onto one target label, so that when it predicts non-target samples with this noise added, the poiso…
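To make the question concrete, here is the scheme as I understand it, in a minimal sketch (all names hypothetical): stamp a fixed trigger onto a fraction of the training images and relabel them to the target class, so the trained model associates the trigger with that label:

```python
import numpy as np

def poison(x_train, y_train, trigger, target_label, rate=0.1, rng=None):
    """BadNets-style poisoning: add a fixed trigger to a fraction of the
    training set and flip those labels to the target class, so the model
    learns to associate the trigger with that label."""
    rng = rng or np.random.default_rng(0)
    x, y = x_train.copy(), y_train.copy()
    idx = rng.choice(len(x), size=int(rate * len(x)), replace=False)
    h, w = trigger.shape[:2]
    x[idx, -h:, -w:] = trigger   # stamp the trigger in a corner
    y[idx] = target_label        # relabel to the target class
    return x, y

# At test time, any input stamped with the same trigger should be
# classified as target_label, while clean inputs behave normally.
```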
-
## Keyword: out of distribution detection
There is no result
## Keyword: out-of-distribution detection
There is no result
## Keyword: expected calibration error
There is no result
## Keyword: overc…
-
Can you verify that these two images collide?
![beagle360](https://user-images.githubusercontent.com/1328/129860794-e7eb0132-d929-4c9d-b92e-4e4faba9e849.png)
![collision](https://user-images.githubu…
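For anyone who wants to check independently, a minimal sketch, assuming you already have some NeuralHash implementation wrapped as a `neural_hash(path)` helper (e.g., an ONNX export of the model; none is provided in this thread):

```python
def verify_collision(path_a, path_b, neural_hash):
    """Compare NeuralHash outputs of two images.

    `neural_hash` is a hypothetical callable returning the hash
    (e.g., as a hex string) for the image at the given path.
    """
    h_a, h_b = neural_hash(path_a), neural_hash(path_b)
    print(f"{path_a}: {h_a}")
    print(f"{path_b}: {h_b}")
    return h_a == h_b  # True means the two images collide
```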