Closed — rikonaka closed this 1 year ago
On the 64-image CIFAR10 test set, the EADL1 untargeted attack drops accuracy to 1.56%, and the EADEN untargeted attack also drops it to 1.56%.
I did not see any targeted attack method in the paper... the value $t$ is not used in Algorithm 1... but I still support targeted attacks in the code, even though the accuracy only drops to 64%. 🤨
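For context, CW/EAD-style attacks usually add targeting by flipping the sign of the logit-margin term: untargeted pushes the true class down, targeted pushes a chosen class $t$ up. A minimal sketch with a hypothetical `margin_loss` helper (plain Python, not the code in this PR):

```python
# Hedged illustration, not torchattacks' actual implementation.
def margin_loss(logits, label, target=None, kappa=0.0):
    """CW-style margin. target=None -> untargeted, else targeted at `target`."""
    ref = target if target is not None else label
    # largest logit among all classes other than the reference class
    other = max(v for i, v in enumerate(logits) if i != ref)
    if target is None:
        # untargeted: positive while the true class still wins
        return max(logits[label] - other, -kappa)
    # targeted: positive while some other class still beats the target
    return max(other - logits[target], -kappa)
```

The attack then minimizes this margin alongside the distance penalty; once the margin reaches `-kappa`, the (mis)classification goal is met.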
Merging #137 (c9e9531) into master (9a22433) will increase coverage by 1.83%. The diff coverage is 90.70%.

:exclamation: Current head c9e9531 differs from pull request most recent head 5fe65ec. Consider uploading reports for the commit 5fe65ec to get more accurate results.
@@ Coverage Diff @@
## master #137 +/- ##
==========================================
+ Coverage 71.25% 73.09% +1.83%
==========================================
Files 41 43 +2
Lines 3500 3787 +287
Branches 500 535 +35
==========================================
+ Hits 2494 2768 +274
- Misses 852 857 +5
- Partials 154 162 +8
Impacted Files | Coverage Δ |
---|---|
torchattacks/attack.py | 47.98% <16.00%> (+0.05%) :arrow_up: |
torchattacks/attacks/square.py | 52.98% <33.33%> (ø) |
torchattacks/attacks/pixle.py | 58.12% <60.00%> (+0.91%) :arrow_up: |
torchattacks/attacks/upgd.py | 62.82% <75.00%> (ø) |
torchattacks/attacks/eaden.py | 95.55% <95.55%> (ø) |
torchattacks/attacks/eadl1.py | 97.08% <97.08%> (ø) |
torchattacks/__init__.py | 100.00% <100.00%> (ø) |
torchattacks/attacks/apgd.py | 76.30% <100.00%> (ø) |
torchattacks/attacks/apgdt.py | 86.25% <100.00%> (ø) |
torchattacks/attacks/autoattack.py | 81.25% <100.00%> (ø) |
... and 27 more |
Continue to review full report in Codecov by Sentry.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update 9a22433...5fe65ec. Read the comment docs.
I'm surprised that you already fixed the normalization part 😄.
The current validation check process has some issues when we use `normalization_used=True`, such as `ValueError: Input must have a range [0, 1] (max: 1.0, min: -2.9802322387695312e-08)`.
I'm working on it.
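That `min: -2.98e-08` suggests floating-point round-off after inverse normalization pushing a pixel a hair below 0, which then trips a strict range check. A hedged sketch of one way to handle it (`safe_range_check` is a hypothetical helper, not the library's actual code):

```python
# Clamp tiny float excursions back into [0, 1] before validating, but
# still reject inputs that are genuinely out of range.
def safe_range_check(x, eps=1e-6):
    if any(v < -eps or v > 1.0 + eps for v in x):
        raise ValueError("input genuinely out of [0, 1]")
    # values within eps of the boundary are treated as round-off noise
    return [min(max(v, 0.0), 1.0) for v in x]

pixels = [0.0, 1.0, -2.9802322387695312e-08]
assert safe_range_check(pixels) == [0.0, 1.0, 0.0]
```

In PyTorch this would typically be a `torch.clamp(x, 0, 1)` applied right after de-normalization, before the validation check runs.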
BTW, there is an error during the coverage check. Could you fix the problem?
code_coverage/test_atks.py::test_atks_on_cifar10[OnePixel]
/home/runner/work/adversarial-attacks-pytorch/adversarial-attacks-pytorch/torchattacks/attacks/_differential_evolution.py:592: RuntimeWarning: divide by zero encountered in double_scalars
convergence=self.tol / convergence) is True):
Hi @Harry24k, I could not reproduce the error that occurs when I run the OnePixel attack 😂.
It looks like an error in the _differential_evolution.py file, not in OnePixel.
I cannot find the cause of the error, so I will merge this and update it later!
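For what it's worth, the warning text points at the expression `self.tol / convergence`, which divides by zero whenever the population spread `convergence` is exactly 0 (i.e. the population has fully converged). A guarded sketch, assuming a hypothetical `convergence_ratio` helper rather than the SciPy-derived code itself:

```python
# Avoid the "divide by zero encountered in double_scalars" warning by
# treating a zero spread as already-converged (ratio -> infinity),
# which makes the surrounding `ratio is True`-style check still pass.
def convergence_ratio(tol, convergence):
    if convergence == 0:
        return float("inf")
    return tol / convergence

assert convergence_ratio(0.01, 0.0) == float("inf")
assert convergence_ratio(0.01, 0.02) == 0.5
```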
PR Type and Checklist
What kind of change does this PR introduce?

- Moved the `_check_inputs` and `_check_outputs` (also normalize) functions into class `Attack`'s `__call__` function instead of every forward function (and removed them from all forward functions) in https://github.com/Harry24k/adversarial-attacks-pytorch/pull/137/commits/5fe65ec2fe01b8a57699d2cfb952d4dc02c2394f.
- Fixed a one-line bug in the `DeepFool` attack: the value of `label` could be 9 while the length of `value` is only 8 on CIFAR10. We don't need to drop the label row from the Jacobian matrix: `fs - fs[label]` makes the label row 0, and setting the label row to `inf` before picking the minimum value leaves the result unaffected. I also modified DeepFool's main loop to make the code lean and easy to read in https://github.com/Harry24k/adversarial-attacks-pytorch/pull/137/commits/9801db87f51ef225a0b8438132e7e044ac3481a5.
- Added parameter descriptions for `JSMA` and `SPSA`.
- Fixed the `UPGD` attack's `one_hot_labels` error (same as the previous `CW` bug).
- Added `normalization_used` and `_normalization_applied` in the `__init__` function.
- Updated the `get_logits` function to check `_normalization_applied`.
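The `fs - fs[label]` trick from the DeepFool item above can be illustrated in a few lines (plain Python with made-up logits, not the actual torchattacks tensors):

```python
# After subtracting fs[label], the label entry is exactly 0; masking it
# with +inf before taking the minimum guarantees the label class can
# never be selected -- no need to slice it out of the row, which is
# what caused the length-8-vs-label-9 index bug on CIFAR10.
fs = [1.2, 2.5, 1.9, 3.0]        # hypothetical logits for 4 classes
label = 3                        # ground-truth class index

diffs = [abs(f - fs[label]) for f in fs]   # label entry becomes 0.0
assert diffs[label] == 0.0

# mask the label row with +inf, then pick the closest other class
masked = [float("inf") if i == label else d for i, d in enumerate(diffs)]
best = min(range(len(masked)), key=lambda i: masked[i])
assert best != label             # the label row never wins the min
```

Keeping all rows means `label` always indexes the same position it had in the logits, so no off-by-one remapping is needed.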