CharlesGong12 closed this issue 5 months ago
Thank you for your interest in our work! We have further updated our readme, and we hope this proves helpful to you.
To begin, we need to generate the saliency mask using SDv1.4 and the corresponding images as Df and Dr. After that, we use SalUn to forget the NSFW concept and obtain the unlearned SDv1.4.
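For anyone reproducing this step, the saliency mask can be sketched as below. This is a minimal illustration of SalUn-style gradient-magnitude thresholding, not the repo's actual script: the `threshold=0.5` value and the tiny linear model standing in for the SD UNet are assumptions for the example.

```python
import torch
import torch.nn as nn

def saliency_mask(model, loss_fn, batch, threshold=0.5):
    """SalUn-style saliency mask: 1 for weights whose forgetting-loss
    gradient magnitude falls in the top `threshold` fraction, else 0."""
    model.zero_grad()
    loss_fn(model, batch).backward()
    flat = torch.cat([p.grad.abs().flatten() for p in model.parameters()])
    k = max(1, int(threshold * flat.numel()))
    cutoff = torch.topk(flat, k).values.min()
    return {name: (p.grad.abs() >= cutoff).float()
            for name, p in model.named_parameters()}

# Toy usage on a small linear model (stand-in for the SD UNet),
# with an MSE loss standing in for the forgetting loss.
model = nn.Linear(4, 2)
x, y = torch.randn(8, 4), torch.randn(8, 2)
mask = saliency_mask(model, lambda m, b: ((m(b[0]) - b[1]) ** 2).mean(), (x, y))
```

Only the masked (salient) weights are then updated during unlearning; the rest stay frozen.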
It's important to note that during evaluation, we use the unlearned SDv1.4 to generate images from I2P prompts and count the number of images exposing different areas of nudity. For additional details on the training and unlearning settings, please refer to Appendix B.1 of the paper.
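As a sketch of the counting step: assuming some per-image nudity detector that returns class labels (NudeNet is commonly used for I2P evaluation; the `detect` callable below is a placeholder, not the actual evaluation code), the tally reduces to:

```python
from collections import Counter

def count_exposed(image_paths, detect):
    """Tally nudity classes over generated I2P images.
    `detect(path)` is assumed to return a list of detected class
    labels (e.g. from a NudeNet-style detector) for one image."""
    counts = Counter()
    for path in image_paths:
        counts.update(detect(path))
    return counts
```

Lower per-class counts relative to the original SDv1.4 indicate more effective forgetting of the NSFW concept.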
We hope this information is beneficial to you, and feel free to reach out if you have any further questions.
Thanks for your response and careful answers! Could you further provide the correct finetuned weights?
Sure! We plan to set up a website soon and make the relevant weights public. Once finished, we'll update our repository promptly, so please stay tuned for our latest developments.
Hi, what an amazing work! I have been trying to reproduce the algorithm recently, but I encountered some problems. I downloaded the weights you provided in SD/README.md, which is a file ending in ".ckpt". When I try to load it, the keys do not match SDv1.4, whether I use `torch.load(ckpt)` or `torch.load(ckpt)['state_dict']`. I then used convertModels.py to convert it into an "xx.pt" file. With `extract_ema=True` it produces exactly the same weights as the original SD UNet, so I used `extract_ema=False`, which does produce edited weights. However, when I generate I2P images with these weights, the results are far worse than those reported in your paper. Could you help me address this issue? Thanks for your reply!