Closed: tqch closed this issue 6 months ago.
Thank you for your interest in and appreciation of our work. We want to assure you that the implementation in our local code is correct. As outlined in the paper, during training we use all 10 classes to train a conditional model as the original model. We apologize for any inconvenience caused by the delayed code update; we have now synchronized the repository to the latest version. The images shown here were generated by our well-trained model. In this visualization, each row corresponds to a specific class. Notably, when a particular label is provided as input, the conditional model generates images aligned with that label.
We trust that these explanations address any concerns you may have and prove valuable in your ongoing experiments. Should you have further inquiries or require additional assistance, please don't hesitate to contact us.
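The row-per-class visualization described above can be sketched with a toy helper. This is a hypothetical illustration, not the repository's actual sampling code: `sample_grid` and `sample_fn` are invented names standing in for a conditional sampler.

```python
def sample_grid(sample_fn, num_classes=10, per_class=8):
    """Build one row of samples per class label (rows = classes).

    sample_fn(label) is a hypothetical conditional sampler that
    returns one image generated under the given class label.
    """
    return [[sample_fn(label) for _ in range(per_class)]
            for label in range(num_classes)]

# Toy stand-in for a conditional sampler: it simply returns the label
# it was conditioned on, so we can verify each row uses a single class.
grid = sample_grid(lambda label: label, num_classes=10, per_class=4)
print(len(grid))     # 10 rows, one per class
print(set(grid[3]))  # {3}
```

With a real model, `sample_fn` would run the reverse diffusion process conditioned on the class embedding for `label`.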
Thank you for your prompt and detailed response!
Hi there,
Thank you for sharing your great work! I have a quick question regarding the training of CIFAR10 conditional diffusion. Based on the code: https://github.com/OPTML-Group/Unlearn-Saliency/blob/48824303a19d2a89508d22bc3a5ca5a801f81629/DDPM/train.py#L27-L32 and
runner.train()
https://github.com/OPTML-Group/Unlearn-Saliency/blob/48824303a19d2a89508d22bc3a5ca5a801f81629/DDPM/runners/diffusion.py#L193-L200 https://github.com/OPTML-Group/Unlearn-Saliency/blob/48824303a19d2a89508d22bc3a5ca5a801f81629/DDPM/runners/diffusion.py#L217-L219 it appears that label 0 is excluded even during standard training. Consequently, standard training would not yield a complete conditional model over all 10 classes. Is my understanding correct?
If so, could the authors verify that the CIFAR-10 class-wise unlearning results reported in the paper are unaffected and based on a correct implementation?
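The concern can be illustrated with a toy sketch. All names here (`make_training_labels`, `mode`, `forget_class`) are hypothetical and do not correspond to the repository's actual code; the point is only that the "standard" training path must see every class, while only the unlearning path should drop the forget class.

```python
def make_training_labels(labels, mode, forget_class=0):
    """Return the labels actually used for training.

    mode == "standard": the original model should train on all 10 classes.
    mode == "unlearn": the forget class is intentionally excluded.
    """
    if mode == "unlearn":
        return [y for y in labels if y != forget_class]
    # Standard training keeps every class, including `forget_class`;
    # if label 0 were filtered out here too, the "original" model would
    # never learn to generate class 0, which is the issue raised above.
    return list(labels)

cifar_labels = list(range(10)) * 3  # toy stand-in for CIFAR-10 labels
print(sorted(set(make_training_labels(cifar_labels, "standard"))))  # all 10 classes
print(sorted(set(make_training_labels(cifar_labels, "unlearn"))))   # class 0 absent
```

If the standard path behaved like the "unlearn" branch, class-wise unlearning baselines would start from a model that never saw class 0 in the first place.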