-
[Adversarial label-flipping attack and defense for graph neural networks](https://ieeexplore.ieee.org/abstract/document/9338299/)
-
B.3. Evaluating Defense Methods Table 6: Comparing the "Robust" PSPNet from Xu et al., 2021 against white-box adversarial attacks.
For the DDC-AT, did you conduct adversarial training using SegPGD …
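For context, SegPGD extends PGD to segmentation by re-weighting the per-pixel loss between correctly and wrongly classified pixels over the attack steps. A minimal sketch of the plain PGD inner loop on a segmentation model, with that re-weighting only noted in a comment (`model`, `eps`, `alpha`, and `steps` are illustrative assumptions, not the paper's settings):
```python
import torch
import torch.nn.functional as F

def pgd_attack_seg(model, images, labels, eps=8/255, alpha=2/255, steps=20):
    """L_inf PGD against a segmentation model (per-pixel cross-entropy).

    SegPGD would additionally re-weight the loss of correctly vs. wrongly
    classified pixels with a schedule over the steps; omitted in this sketch.
    """
    adv = images.clone().detach()
    adv = (adv + torch.empty_like(adv).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = model(adv)                       # (N, C, H, W)
        loss = F.cross_entropy(logits, labels)    # labels: (N, H, W)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()  # gradient ascent step
        adv = images + (adv - images).clamp(-eps, eps)  # project to eps-ball
        adv = adv.clamp(0, 1)
    return adv.detach()
```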
-
https://dl.acm.org/doi/pdf/10.1145/3580305.3599335
```bib
@inproceedings{jia2023enhancing,
  title={Enhancing node-level adversarial defenses by lipschitz regularization of graph neural networks},
  …
```
-
My PyTorch version is 1.0.0 and my torchvision version is 0.2.1.
```
$ python main.py --data_test Demo --scale 2 --pre_train download --test_only --save_results
Traceback (most recent call last):
Fi…
```
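If it helps to narrow this down, a quick sanity check of the installed versions against the repository's stated requirements (generic Python, nothing specific to this codebase):
```python
import torch
import torchvision

# torch 1.0.0 / torchvision 0.2.1 are old releases; code written against
# newer APIs will often fail at import or checkpoint-loading time.
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
```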
-
What prevents you from directly checking the response to the input in the first place? If the response is safe, return it to the user; if not, just refuse to answer. Why bother to use this e…
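The baseline the question describes could be sketched as a plain output-side filter; `generate` and `is_safe` below are hypothetical placeholders for the model and a safety check, not names from the paper:
```python
def answer_with_output_filter(prompt, generate, is_safe,
                              refusal="Sorry, I can't help with that."):
    """Naive output-side moderation: generate first, inspect the response,
    and return it only if the safety check passes."""
    response = generate(prompt)         # hypothetical LLM call
    return response if is_safe(response) else refusal
```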
-
I've been working with DPBGA and have encountered some issues that I'd like to clarify:
**ASR Drops to Zero with Different Target Class:**
When I change the target class (e.g., to Flickr), the A…
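For reference, attack success rate (ASR) for a targeted backdoor is typically the fraction of trigger-embedded inputs that the model classifies as the target class; a minimal sketch (the names below are placeholders, not DPBGA's actual evaluation code):
```python
import torch

@torch.no_grad()
def attack_success_rate(model, triggered_inputs, target_class):
    """Fraction of trigger-embedded inputs predicted as the target class."""
    preds = model(triggered_inputs).argmax(dim=-1)
    return (preds == target_class).float().mean().item()
```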
-
Hi, thanks for your benchmark work and the open-source code. I have a question about the Hybrid-training part of the code.
In the code, adversarial point clouds produced by PGD, drop, and add are mix…
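As I read it, the mixing in question would look roughly like the sketch below, where adversarial point clouds from several attacks are concatenated with the clean batch; the attack callables are placeholders, not the benchmark's actual function names:
```python
import torch

def hybrid_batch(model, points, labels, attacks):
    """Build one hybrid-training batch from clean and adversarial samples.

    `attacks` is a list of callables (e.g. PGD, point-drop, point-add),
    each mapping (model, points, labels) -> adversarial point clouds.
    """
    adv = [atk(model, points, labels) for atk in attacks]
    mixed_x = torch.cat([points] + adv, dim=0)
    mixed_y = labels.repeat(1 + len(attacks))   # same labels for each copy
    return mixed_x, mixed_y
```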
-
## Paper link
- [arXiv](https://arxiv.org/abs/2104.09284)
## Publication date (yyyy/mm/dd)
2021/04/19
## Summary
## TeX
```
% yyyy/mm/dd
@inproceedings{
yu2021leafeat,
title={LAFEAT: Piercing Throug…
```
-
## In brief
An adversarial learning library for PyTorch. It can evaluate more than 10 attack methods and 8 defense methods against DNN-based image classifiers, as well as 9 attack methods and 4 defense methods against GNNs. It is publicly available as open source.
![DeepRobust](https://user-images.githubusercontent.com/12124329/82116499-3f8f…
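For a feel of the API, a sketch of the graph side paraphrased from the repository's README (training a vanilla GCN victim on Cora); exact signatures may differ between DeepRobust versions:
```python
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN

# Load Cora and train a plain GCN, the usual victim model for the
# library's graph attacks and defenses.
data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test

gcn = GCN(nfeat=features.shape[1], nhid=16,
          nclass=labels.max().item() + 1, device='cpu')
gcn.fit(features, adj, labels, idx_train, idx_val)
gcn.test(idx_test)
```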