Open datar001 opened 2 months ago
You may check the instructions from Grounded-SAM for more details
I find that this problem does not hinder running the code: the project can still be trained and evaluated. However, there are two additional minor errors in Line 51 of fuse_lora_close_form.py:
Today I trained MACE following the default settings, modifying "erase_explicit_content.yaml" to set my concepts. After obtaining the trained checkpoint, I first evaluated the impact of the unlearning strategy on "normal" prompts (i.e., those not containing the erased concepts), and found a significant negative impact on them. Some examples are as follows (the generated images share the same seed):
Could you help me with this phenomenon?
Hi, to fit your case, you can probably try reducing max_training_step from 120 to 50-60, and increasing both train_preserve_scale and fuse_preserve_scale to 1e-4 or 1e-5.
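Applied to the training config, the suggested changes might look like the excerpt below. This is a hypothetical sketch: only the parameter names come from this thread, and the actual key layout of the YAML file may differ.

```yaml
# Hypothetical config excerpt; only the parameter names are from the thread.
max_training_step: 60          # reduced from the default 120
train_preserve_scale: 1.0e-4   # raised to better preserve unrelated concepts
fuse_preserve_scale: 1.0e-4    # raised alongside the training-time scale
```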
I re-trained a model following the recommended settings. Although the problem seems to have eased a bit, the negative impact is still significant. For example:
Try to further increase train_preserve_scale and fuse_preserve_scale to 1e-3 or 1e-2. There is a trade-off between generality and specificity.
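If you retrain several times to probe this trade-off, a small helper can rewrite the two scales in the config between runs. This is a minimal sketch assuming a flat `key: value` YAML layout; the parameter names come from this thread, while the function name and file handling are purely illustrative:

```python
# Sketch: bump the two preserve scales in a MACE-style YAML config
# before re-training. Assumes flat "key: value" lines; parameter names
# are from the thread, everything else is hypothetical.
from pathlib import Path

def set_preserve_scales(cfg_path, train_scale, fuse_scale):
    lines = []
    for line in Path(cfg_path).read_text().splitlines():
        key = line.split(":")[0].strip()
        if key == "train_preserve_scale":
            line = f"train_preserve_scale: {train_scale}"
        elif key == "fuse_preserve_scale":
            line = f"fuse_preserve_scale: {fuse_scale}"
        lines.append(line)
    Path(cfg_path).write_text("\n".join(lines) + "\n")
```

Sweeping the scales over, say, 1e-5 to 1e-2 and comparing fixed-seed generations on held-out "normal" prompts is one way to locate where specificity starts to suffer.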
Hello, I have installed most dependencies following your instructions, but the last step, installing recognize-anything with the command "pip install -e ./recognize-anything/", fails with an error:
Do you have any suggestions? My current environment is as follows: