jinhaoduan / SecMI

[ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks?
MIT License

Inquiry Regarding Experimental Setup in LDM SecMI #6

Closed · zhaisf closed this issue 5 months ago

zhaisf commented 5 months ago

Thank you for your great work and open-source code, which have inspired me a lot.

While replicating your experiments, I noticed a slight disparity between my results and yours (my ASR and AUC on the Pokémon dataset are even higher than yours), so I would like to know which differing settings led to my higher numbers.

My settings:
- Pokémon train/test split: 416 / 417
- Training steps: 15,000
- Batch size: 1
- Gradient accumulation steps: 4
- Learning rate: 1e-5
- No crop or flip (did you use crop and flip during training?)
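For reference, a minimal sketch of one way such a 416/417 split could be constructed (assuming the `lambdalabs/pokemon-blip-captions` dataset with 833 image-caption pairs in total; the seed here is a hypothetical choice, and the actual split code in this repo may differ):

```python
from datasets import load_dataset

# Pokémon BLIP captions dataset: 833 image-caption pairs in total.
ds = load_dataset("lambdalabs/pokemon-blip-captions", split="train")

# Deterministic member/non-member split: 416 members (used for
# fine-tuning) and 417 non-members (held out). A different seed
# yields a different member set.
split = ds.train_test_split(test_size=417, seed=42, shuffle=True)
members, nonmembers = split["train"], split["test"]
print(len(members), len(nonmembers))  # 416 417
```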

My results with prompt: ASR 0.90, AUC 0.9391 (higher than your reported 0.821 and 0.891).

I tried to keep the settings consistent with the paper, but I still obtained different results. Looking forward to your response!
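For context on how these numbers are reported, here is a minimal sketch of how ASR and AUC are commonly computed from per-sample attack scores (the score arrays below are hypothetical placeholders, not output of this repo; ASR is taken as the accuracy at the best threshold, which may differ from the exact computation used in the paper):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical attack scores: higher score = more likely a member.
member_scores = np.random.randn(416) + 1.0   # placeholder
nonmember_scores = np.random.randn(417)      # placeholder

scores = np.concatenate([member_scores, nonmember_scores])
labels = np.concatenate([np.ones(416), np.zeros(417)])

# AUC: threshold-free ranking quality of the attack scores.
auc = roc_auc_score(labels, scores)

# ASR: accuracy at the best threshold, i.e. the maximum over the
# ROC curve of TPR * P(member) + TNR * P(non-member).
fpr, tpr, _ = roc_curve(labels, scores)
asr = np.max(tpr * labels.mean() + (1 - fpr) * (1 - labels.mean()))

print(f"AUC={auc:.4f}  ASR={asr:.4f}")
```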

jinhaoduan commented 5 months ago

We didn't use crop and flip during fine-tuning. For batch size, we used a batch size of 1 per GPU across 8 GPUs, i.e., an effective batch size of 8 (versus your 1 × 4 gradient-accumulation steps = 4). Did you use the same member/non-member split as us? Since there are only around 400 training samples, a different split may introduce a certain amount of variance.
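To illustrate how much the split choice alone can matter with only ~400 members, a quick sketch (with two hypothetical seeds) comparing the member sets produced by two different random splits:

```python
import random

# Two different seeds produce two different member sets over the
# same 833 Pokémon samples.
indices = list(range(833))

def member_set(seed, n_members=416):
    rng = random.Random(seed)
    shuffled = indices[:]
    rng.shuffle(shuffled)
    return set(shuffled[:n_members])

a = member_set(seed=0)
b = member_set(seed=1)
print(f"member-set overlap: {len(a & b) / len(a):.2%}")
# Two independent splits share only about half their members, so
# per-split attack metrics (ASR/AUC) can vary noticeably.
```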

zhaisf commented 5 months ago

Thank you for your prompt reply!

I see. Using different member/non-member splits could be the reason for the different results.