-
The only meaningful way to evaluate a defense is to measure the effectiveness of attacks run against it.
This paper does not actually measure this, however. It generates adversarial …
-
I really appreciate your nice work! But I think there is an error: the attack is performed on the GT label and not on the label predicted by the model on the clean image.
Hope to get your answer, t…
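The distinction the comment raises can be sketched as follows. This is a minimal illustration only; the helper name and `predict_fn` are hypothetical and not taken from the repository's code. Targeting the model's own prediction on the clean input avoids counting samples the model already misclassifies as attack successes:

```python
import numpy as np

def attack_target_labels(predict_fn, x_clean):
    """Return the labels an attack should try to flip.

    predict_fn maps a batch of clean inputs to logits. Attacking the
    model's clean-input predictions, rather than the ground-truth
    labels, ensures an "attack success" always reflects a prediction
    the attack actually changed.
    """
    logits = predict_fn(x_clean)
    return np.argmax(logits, axis=1)
```

For example, if the model already misclassifies a clean sample, an attack evaluated against the GT label would record a spurious success for that sample, inflating the reported attack success rate.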
-
Thank you very much for sharing the code for this work! However, in the attack_utils.py
`from data_utils import wav2mel_tensor, Transform`,
I get the error that it cannot find reference `Transform` i…
-
# Overview
This proposal seeks to integrate MITRE ATLAS tactics and techniques into the existing D3FEND ontology to enhance the representational fidelity of AI threats within the model. The ATLAS fra…
-
Hi,
Thanks for your effort in publishing this significant code base!
In Appendix B1 you write:
> In addition to the main Llama 2 model used for evaluation, we also release HarmBench with …
-
Hi,
I've run mnist.py on a single Titan X (Pascal) with the default settings.
However, the speed is about 3× slower than that reported in the literature (Table 1).
[Scaling provable adversarial def…
-
ML security, or any security field in general, is going to have cases where papers make a certain claim, and later, that claim ends up being invalidated. For example, we once thought [MD4](https://en.…
-
## Paper link
- [arXiv](https://arxiv.org/abs/1704.01155)
## Publication date (yyyy/mm/dd)
2017/04/04
NDSS 2018
## Overview
## TeX
```
% 2017/04/04
@inproceedings{
xu2018feature,
title={Feature Sque…
-
If any team wants further clarification regarding the comments or disagrees with the grade, you can use this issue to follow up with @MENG2010 or @pooyanjamshidi.
-
Aleksander, Wieland, Nicholas and I have had some discussions about the lack of "self-correction" among (broken) defense papers, and how this can make it hard for newcomers to navigate the field (i.e.…