junjie1003 opened this issue 1 year ago
Hi Junjie, sorry for the confusion, and thanks for pointing this out: it is indeed a typo and we will correct it. The commands for reprogramming are aligned with those for adversarial training: the first is without `--adversary-with-y` and the second is with it.
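For concreteness, the corrected pair of commands would differ only in that flag, roughly along these lines (the script name and remaining arguments below are placeholders, not the repository's actual invocation):

```shell
# Hypothetical sketch; substitute the repository's real script and arguments.

# Demographic parity (DP): run without the flag.
python train_reprogram.py --method repro ...

# Equalized odds (EO): run with --adversary-with-y added.
python train_reprogram.py --method repro --adversary-with-y ...
```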
Thank you for answering my questions. I have one more question. Do you have any other code implementations for the post-processing baselines mentioned in your paper? Thank you!
Hi Junjie, please refer to https://github.com/Trusted-AI/AIF360 for these baselines.
Thank you for your suggestion! I have reviewed the AIF360 code, and its sample code for the post-processing baselines is mainly designed for tabular datasets such as `adult` and `census`. To replicate the results of these post-processing methods on the facial image dataset CelebA as presented in your paper, could you please share your implementations adapted to CelebA? This would greatly help me compare these post-processing methods against the ERM baseline while staying as consistent as possible with the results reported in the paper. I am also curious about the `roptim` parameter in the method configuration: I know that `repro` corresponds to the `border` method and `rpatch` corresponds to the `patch` method, but what does `roptim` refer to? Thank you!
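In the meantime, the core of equalized-odds post-processing (Hardt et al., 2016, which AIF360's `EqOddsPostprocessing` implements) is independent of the data modality: once you have predictions and group labels from a CelebA classifier, it only needs group-conditional flipping probabilities. Below is a minimal NumPy sketch that equalizes the true-positive rate only, for illustration; the full method solves a small linear program to match the false-positive rate as well, and this is my own simplification, not the AIF360 or the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def equalize_tpr(y_pred, y_true, group):
    """Randomly flip predicted positives in the higher-TPR group so that
    both groups' true-positive rates match (a simplified slice of
    equalized-odds post-processing; the full method also equalizes FPR)."""
    y_new = y_pred.copy()
    # Group-conditional true-positive rates of the unadjusted predictor.
    tpr = {}
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tpr[g] = y_pred[mask].mean()
    target = min(tpr.values())
    for g in (0, 1):
        if tpr[g] > target:
            # Keep each predicted positive with probability target / tpr[g],
            # flipping the rest to 0; in expectation the group's TPR
            # shrinks to the target.
            keep = target / tpr[g]
            pos = (group == g) & (y_pred == 1)
            flip = rng.random(pos.sum()) > keep
            y_new[np.flatnonzero(pos)[flip]] = 0
    return y_new
```

Note that the adjustment only reads the prediction and the group, never the true label, which is what makes it a valid derived predictor in the Hardt et al. sense.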
Hello, regarding your training commands here, I have a question about reprogramming. You provided two commands, and the only difference between them is that the flag `--adversary-with-y` appears twice in the second command. However, isn't the purpose of this flag to specify whether the fairness criterion is EO (true) or DP (false)? From that perspective, the two commands mean the same thing, so why are there two? Thank you!
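For context, the usual convention in adversarial fairness training (which I assume this flag follows) is that for demographic parity the adversary predicts the sensitive attribute from the model's output alone, while for equalized odds it additionally receives the ground-truth label y, so the flag changes the adversary's input rather than being a no-op. A minimal sketch of that input construction, with illustrative names that are not the repository's actual code:

```python
import numpy as np

def adversary_input(logits, y, adversary_with_y):
    """Build the adversary's input features (illustrative convention).

    adversary_with_y=False (DP): the adversary sees only the model's
    output, so fooling it equalizes P(yhat | a).
    adversary_with_y=True (EO): the adversary also sees the true label
    y, so fooling it equalizes P(yhat | y, a).
    """
    if adversary_with_y:
        # Append the label as an extra feature column.
        return np.concatenate([logits, y[:, None]], axis=1)
    return logits
```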