-
Hello, I would like to know how the backdoor accuracy can reach 100% without defenses in the case of a semantic backdoor. When I perform a single attack by a single adversary after the model has conve…
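To make my question concrete, here is a minimal sketch of the single-shot model-replacement step as I understand it from the federated backdoor literature (the scaling factor gamma = n / server_lr, the plain FedAvg aggregation, and the function names are my own assumptions, not necessarily what this codebase does):

```python
import copy
import torch

def single_shot_replacement(global_model, backdoored_model, n_clients, server_lr=1.0):
    """Model-replacement sketch: scale the malicious update so that, after
    FedAvg with (n_clients - 1) benign updates that are roughly zero near
    convergence, the aggregated global model is approximately the
    backdoored model."""
    gamma = n_clients / server_lr  # scaling factor (assumption: plain FedAvg)
    submitted = copy.deepcopy(global_model)
    with torch.no_grad():
        for p_sub, p_bd, p_glob in zip(submitted.parameters(),
                                       backdoored_model.parameters(),
                                       global_model.parameters()):
            # submit G + gamma * (X - G) instead of the backdoored weights X
            p_sub.copy_(p_glob + gamma * (p_bd - p_glob))
    return submitted
```

Near convergence the benign updates roughly cancel, so after averaging the global model lands close to the backdoored weights, which is how I would expect the backdoor accuracy to approach 100% in a single round; please correct me if the setup here is different.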
-
Hello!
Recently, I found some issues when trying to reproduce the experiments in Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks.
The output is a complex-valued result,…
-
Thank you for putting together this example; it has been a tremendous learning resource for me and, I'm sure, for many others.
Is `cargo build` expected to happen in the container? If so, at least on my system the …
-
Hello, thank you for sharing the code. Your paper [1] provides great insight into linking image steganography to data poisoning.
I have two questions about the comparison experiments with `BadNets…
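For reference, the BadNets-style baseline I have in mind stamps a small fixed patch onto poisoned images and relabels them to a target class; a minimal sketch of that setup follows (the patch size, location, poison rate, and target class are my own choices for illustration, not the settings from your experiments):

```python
import torch

def apply_badnets_trigger(images, patch_size=3, value=1.0):
    """Stamp a small solid patch into the bottom-right corner of a
    batch of images shaped (N, C, H, W)."""
    poisoned = images.clone()
    poisoned[:, :, -patch_size:, -patch_size:] = value
    return poisoned

def poison_batch(images, labels, poison_rate=0.1, target_class=0):
    """Poison a fraction of the batch and flip those labels to the
    attacker's target class (class 0 here, purely as an example)."""
    n_poison = max(1, int(poison_rate * images.size(0)))
    images, labels = images.clone(), labels.clone()
    images[:n_poison] = apply_badnets_trigger(images[:n_poison])
    labels[:n_poison] = target_class
    return images, labels
```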
-
# Abstract
You hear a lot about how great machine learning is, and about how AI will change the world this century, but what you don't tend to hear so much about are the *very* serious security vulne…
-
Hi, LOTUS is an interesting piece of work, thank you for sharing!
I tried to reproduce LOTUS's performance against the defense method `neural cleanse` (settings referenced from their [code](https://github.co…
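For transparency, this is how I computed the anomaly index from the reverse-engineered trigger mask norms (a minimal sketch following my reading of the Neural Cleanse paper; the 1.4826 consistency constant and the threshold of 2 come from that paper, the wrapping code is my own):

```python
import numpy as np

def anomaly_index(mask_l1_norms, threshold=2.0):
    """MAD-based outlier test used by Neural Cleanse: classes whose
    reversed trigger mask is abnormally small (anomaly index above the
    threshold) are flagged as potential backdoor targets."""
    norms = np.asarray(mask_l1_norms, dtype=float)
    median = np.median(norms)
    mad = np.median(np.abs(norms - median)) * 1.4826  # consistency constant
    indices = np.abs(norms - median) / mad
    flagged = [c for c, (n, idx) in enumerate(zip(norms, indices))
               if idx > threshold and n < median]  # only the small-norm side
    return indices, flagged
```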
-
I do not understand why the train_data needs to be random-cropped after padding by 4 pixels using `transforms.RandomCrop(32, padding=4)`.
At first, I thought it might be for the trigger setting, but I found that you m…
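For context, the augmentation pipeline I am asking about looks roughly like this (a minimal sketch assuming torchvision and CIFAR-10-style 32x32 inputs):

```python
from torchvision import transforms

# Pad each 32x32 image by 4 pixels on every side (to 40x40), then crop a
# random 32x32 window; this shifts the content by up to +/-4 pixels each
# epoch and is a standard CIFAR-10 augmentation, independent of any
# trigger placement logic.
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```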
-
| Keywords | References | Link |
|----------|------------|------|
| D…
-
We currently have 3 detectors. In this issue I will investigate some possible new additions.
Top candidates:
- [ ] [Neural Cleanse](https://www.semanticscholar.org/paper/Neural-Cleanse%3A-Identif…
-
Hello, author. When I ran `python poison_model.py`, I encountered the following error. Later, I tried running the code without the data conversion (lines 105-106) and with lines 74-77 commented out, and the c…