Shilin-LU / MACE

[CVPR 2024] "MACE: Mass Concept Erasure in Diffusion Models" (Official Implementation)
MIT License

Meeting Error when installing recognize-anything #4

Open datar001 opened 2 months ago

datar001 commented 2 months ago

Hello, I have installed most of the dependencies following your instructions, but at the last step, installing recognize-anything with the command "pip install -e ./recognize-anything/", I ran into an error:

[screenshot of the installation error]

Do you have any suggestions? My current environment is as follows: [screenshots of the environment details]

Shilin-LU commented 2 months ago

You may check the installation instructions from Grounded-SAM for more details.

datar001 commented 2 months ago

I find that this problem does not prevent the code from running; I can still train and evaluate the project. However, there are two additional minor issues at Line 51 of fuse_lora_close_form.py (a sketch of both fixes follows the list):

  1. The saved checkpoint in stage 1 & 2 (CFR and LoRA training) is old format, which is not loaded by new-version diffusers. Therefore, it should add the code to convert the old format to new format as follows: image
  2. In the same line, the folder name of the saved checkpoint is separated by space. So it does not add the post-process code ".replace(' ', '-')".
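For reference, here is a minimal sketch of the two fixes, assuming the LoRA checkpoint folder is named after the concept and the weights file follows the diffusers "pytorch_lora_weights.bin" convention; the function and variable names are hypothetical, and the key remapping only illustrates the idea of converting the old LoRA format:

```python
# Hypothetical sketch only, not the repo's actual code around line 51.
import os
import torch

def load_concept_lora(lora_ckpt_dir, concept):
    # Fix 2: normalize the folder name so it matches how the checkpoint was saved;
    # fall back to the raw concept string (with spaces) if the hyphenated folder
    # does not exist.
    folder = concept.replace(' ', '-')
    path = os.path.join(lora_ckpt_dir, folder, 'pytorch_lora_weights.bin')
    if not os.path.exists(path):
        path = os.path.join(lora_ckpt_dir, concept, 'pytorch_lora_weights.bin')

    state_dict = torch.load(path, map_location='cpu')

    # Fix 1: the checkpoint may use the old attention-processor LoRA key layout,
    # which newer diffusers cannot load directly; remap the keys before use.
    # The exact mapping depends on the diffusers version; this line is illustrative.
    state_dict = {k.replace('.processor.', '.'): v for k, v in state_dict.items()}
    return state_dict
```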
datar001 commented 2 months ago

Today I trained MACE following the default settings and modified "erase_explicit_content.yaml" to set my own concepts: [screenshot of the modified config]

After obtaining the trained checkpoint, I first evaluated the impact of the unlearning strategy on "normal" prompts (i.e., prompts that do not contain the erased concepts), and I found a significant negative impact on them. Some examples are below (the generated images use the same seed): [screenshots of example generations]

Could you help me with this phenomenon?

Shilin-LU commented 2 months ago

Hi, for your case, you could try reducing the max_training_step from 120 to 50-60, and increasing both the train_preserve_scale and fuse_preserve_scale to 1e-4 or 1e-5.
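For concreteness, a minimal sketch of applying these overrides, assuming the erasure config is a flat YAML file with exactly these key names (the file path and structure here are assumptions, not necessarily the repo's layout):

```python
# Hypothetical sketch: patch the config values suggested above and save a copy.
import yaml

cfg_path = 'erase_explicit_content.yaml'   # assumed location of the config file
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg['max_training_step'] = 60        # reduced from 120, as suggested
cfg['train_preserve_scale'] = 1e-4   # increased, as suggested
cfg['fuse_preserve_scale'] = 1e-4    # increased, as suggested

with open('erase_explicit_content_tuned.yaml', 'w') as f:
    yaml.safe_dump(cfg, f)
```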

datar001 commented 2 months ago

I re-trained a model following the recommended settings. Although the problem seems to have eased a bit, the negative impact is still significant. For example: [screenshots of example generations]

Shilin-LU commented 2 months ago

Try to further increase the train_preserve_scale and fuse_preserve_scale to 1e-3 or 1e-2. There is a tradeoff between generality and specificity.