Open · persistz opened this issue 3 years ago
I totally agree with @persistz. I am also confused: in the paper, NSGA-II is used to solve the optimization problem, but I cannot really find it in the code. I would like to kindly ask @vtddggg to help us understand the code better.
Besides, I have a small question about the paper: I have difficulty understanding what the multiple objectives actually are. The authors point out that they use eq. 3 as the evaluation function, but that is only one objective, and I cannot find a clearer explanation of any other objectives. Since NSGA-II is an algorithm for multi-objective problems, my guess is that the objectives correspond to different targeted classes, although I found nothing to support this assumption. I tried to find the answer in the code, but the code is even less clear to me on this point. Could you please tell me the answer? @vtddggg
Thank you in advance.
Hanwei
Hi, Hanwei
For NSGA-II please refer to https://github.com/vtddggg/CAA/issues/3#issuecomment-878057799. This code merely contains the implementation of the final searched attacks.
Eq. 3 has two objectives: 1) the first term optimizes attack strength; 2) the second term optimizes the number of attack steps (complexity). They are balanced by $\alpha$. Sorry that you cannot find this implementation in the code; that is because we did not open-source the NSGA-II search part.
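A rough sketch of that weighted combination, in illustrative notation rather than the paper's own:

$$\min_{a}\ f_{\mathrm{strength}}(a) + \alpha\, f_{\mathrm{complexity}}(a)$$

where the first term measures attack strength and the second the number of attack steps.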
Thanks for your attention!!
Thank you very much! Now I understand better. @vtddggg
It is interesting. Just for discussion: if you use $\alpha$ to balance the two objectives, you directly merge them into a single objective, so you do not really need multi-objective optimization; single-objective optimization is enough. If instead you define a series of different values of $\alpha$ during the search, it becomes closer to another algorithm, MOEA/D. NSGA-II is a multi-objective optimization algorithm: you give it the two objectives separately and it returns a Pareto set, in which no single solution is best with respect to both objectives at once. If you are using NSGA-II, how do you choose the single best solution from that set (or do you keep the whole set)? And if you actually optimize eq. 3 with a fixed $\alpha$, it is a single-objective problem, so how do you solve it with NSGA-II?
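As an illustrative aside on that last question (not code from this repo): picking one solution out of an NSGA-II Pareto set with a fixed $\alpha$ amounts to re-scalarizing the two objectives, which is exactly the merge described above. A minimal sketch, assuming `F` is the array of objective values returned by the search:

```python
import numpy as np

def pick_with_alpha(F, alpha=0.1):
    """Re-scalarize a Pareto front F of shape (n_solutions, 2) with a fixed alpha.

    Column 0 is taken to be the attack-strength term and column 1 the complexity
    term (placeholder layout). If a fixed alpha suffices to pick one solution,
    the search could have optimized this single weighted sum directly.
    """
    scores = F[:, 0] + alpha * F[:, 1]
    return int(np.argmin(scores))  # index of the single chosen solution
```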
Thank you in advance, Hanwei
Hi, Hanwei
We use the NSGA-II implementation in pymoo, and we follow this to define our optimization problem. As you can see in line 90, line 92 and line 94 there, three objectives (`f1`, `f2`, `f3`) are defined. Similarly, in our case `f1` is accuracy and `f2` is complexity.
By feeding `f1` and `f2` into `from pymoo.algorithms.nsga2 import NSGA2`, we consider the problem to be optimized as a multi-objective one. For the detailed implementation of NSGA2 you can also refer to https://github.com/anyoptimization/pymoo/blob/main/pymoo/algorithms/moo/nsga2.py
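A minimal, hypothetical pymoo sketch of the two-objective setup described above. The policy encoding, the objective formulas and the population/generation sizes are placeholders, not the authors' actual search (which is not in this repo); the import paths are for pymoo >= 0.5 (older releases used `from pymoo.algorithms.nsga2 import NSGA2`):

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize


class ToyAttackSearch(ElementwiseProblem):
    """Toy stand-in for the attack-policy search: two objectives, no constraints."""

    def __init__(self):
        # x is a placeholder continuous encoding of a candidate attack policy; the
        # real search would encode attacker types, epsilons and step budgets.
        super().__init__(n_var=3, n_obj=2, xl=np.zeros(3), xu=np.ones(3))

    def _evaluate(self, x, out, *args, **kwargs):
        # f1 stands in for robust accuracy of the target model under the candidate
        # attack (placeholder formula; the real value would come from running it).
        f1 = float(np.sum((x - 0.5) ** 2))
        # f2 stands in for the complexity of the candidate attack, e.g. total steps.
        f2 = float(np.sum(x))
        out["F"] = [f1, f2]


res = minimize(ToyAttackSearch(), NSGA2(pop_size=20), ("n_gen", 10), seed=1, verbose=False)
print(res.F)  # Pareto front over (f1, f2); res.X holds the corresponding encodings
```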
Aha, got it! Thank you for your informative reply, @vtddggg. Hanwei
In the paper, the authors pointed out that they find the best perturbation for subsequent iterations, and the implementation reuses part of the Auto-Attack code. But the key step of 'finding the best perturbation' seems to be missing from the implementation.
Take `MultiTargetedAttack` as an example. In line 1365 of `attack_ops.py`, the function `run_once` only returns `x_best_adv`, instead of also returning `x_best` as Auto-Attack does. After searching for where `x_best_adv` is assigned, it is not hard to see that this causes only random noise to be returned for the examples on which the attack fails. The same kind of error also occurs for the returned `now_p`. Due to the large amount of code, I cannot be sure this is a bug or whether other techniques are used elsewhere to handle this. If my understanding is wrong, please point it out. Thanks.
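For context, a minimal sketch of the two buffers an Auto-Attack/APGD-style loop keeps; this is a simplified illustration of the distinction discussed above, not the actual `run_once` in `attack_ops.py`:

```python
import torch
import torch.nn.functional as F

def pgd_run_once_sketch(model, x, y, eps, n_iter=10):
    """Minimal PGD-style loop tracking both x_best (best loss) and x_best_adv (fooling point)."""
    # Random start inside the L-inf eps-ball.
    x_adv = (x + eps * (2 * torch.rand_like(x) - 1)).clamp(0.0, 1.0)

    x_best = x_adv.clone()      # iterate with the highest loss so far, even if not misclassified
    x_best_adv = x_adv.clone()  # last iterate that fooled the model; for examples the attack
                                # never fools, this stays the random start
    loss_best = torch.full((x.shape[0],), -float("inf"), device=x.device)

    for _ in range(n_iter):
        x_adv = x_adv.clone().requires_grad_(True)
        logits = model(x_adv)
        loss = F.cross_entropy(logits, y, reduction="none")
        grad, = torch.autograd.grad(loss.sum(), x_adv)

        with torch.no_grad():
            improved = loss > loss_best           # keep the best-loss iterate per example
            loss_best = torch.where(improved, loss, loss_best)
            x_best[improved] = x_adv[improved]

            fooled = logits.argmax(1) != y        # keep only currently misclassified iterates
            x_best_adv[fooled] = x_adv[fooled]

            # One signed-gradient step, projected back into the eps-ball.
            x_adv = x_adv + (eps / 4) * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    # Returning only x_best_adv leaves the random start for never-fooled examples, which is the
    # behaviour questioned above; Auto-Attack keeps x_best as well.
    return x_best, x_best_adv
```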