I suggest you set fp16 to false, as setting it to true often results in unexpected outcomes.
Sure, thanks. One more thing: does the R-ROME implementation support only llama2-7b and gpt-j, as in the hparams, or basically all the models listed in the README?
Yes, all are supported. You can just modify the parameters a bit.
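For reference, here is a minimal sketch of that workflow, following the pattern in the EasyEdit README (the YAML path and edit inputs are illustrative; to target another model, point at or copy and adapt the matching hparams file):

```python
from easyeditor import BaseEditor, ROMEHyperParams

# Load ROME hyperparameters for a given model; for e.g. GPT-J, swap in
# the corresponding file such as ./hparams/ROME/gpt-j-6B.yaml
hparams = ROMEHyperParams.from_hparams('./hparams/ROME/llama-7b.yaml')
editor = BaseEditor.from_hparams(hparams)

# ROME edits a single fact; the subject must appear in the prompt.
metrics, edited_model, _ = editor.edit(
    prompts=['What university did Watts Humphrey attend?'],
    ground_truth=['Illinois Institute of Technology'],
    target_new=['University of Michigan'],
    subject=['Watts Humphrey'],
)
```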
Thanks a lot! I've now changed fp16 to false, but I'm exceeding 50 GB of memory (2 x RTX 4090) for Mistral-7b. It barely worked on llama2-7b, taking over 46 GB. Should running ROME be this memory-intensive?
The same happened with the KN method: running out of memory at 50 GB in a parallel setup on llama-7b.
I don't think there's any problem; this is the normal memory usage. You can try setting it to fp16 at https://github.com/zjunlp/EasyEdit/blob/38c5c34d614646db14a45f59790434bc2f520c1b/easyeditor/editors/editor.py#L63, which will slightly reduce the memory usage.
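As a hedged sketch of what that change amounts to (the model id and variable names here are illustrative, not the exact code at that line), loading the weights in half precision via transformers looks like:

```python
import torch
from transformers import AutoModelForCausalLM

# Loading the checkpoint in fp16 roughly halves the memory taken by the
# weights compared to the default fp32; device_map="auto" lets
# accelerate shard the layers across both GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative model id
    torch_dtype=torch.float16,
    device_map="auto",
)
```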
Hi, do you have any further questions?
Hi, thank you for your answer. Yes, maybe one more: in the current implementation, how exactly are the locality, probability, rewrite, and rephrase accuracies calculated when I use the editor with ROME, for example?
You can check the file at easyeditor/evaluate/evaluate_utils.py. For everything except locality, we count the number of matching tokens and take the average; locality measures whether the model's outputs on unrelated inputs change before and after the edit.
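To illustrate the idea, here is a simplified sketch of that token-matching scheme; it is not the actual code in evaluate_utils.py, and the function names are made up for illustration:

```python
import numpy as np

def token_match_accuracy(pred_ids, target_ids):
    """Per-token exact-match rate between predicted and target token ids.

    This averaged token-matching is the scheme used for the rewrite and
    rephrase accuracies described above.
    """
    n = min(len(pred_ids), len(target_ids))
    if n == 0:
        return 0.0
    return float(np.mean([p == t for p, t in zip(pred_ids[:n], target_ids[:n])]))

def locality(pre_edit_ids, post_edit_ids):
    """Locality: how much post-edit outputs on *unrelated* inputs still
    agree with the pre-edit outputs (higher = less collateral change)."""
    return token_match_accuracy(post_edit_ids, pre_edit_ids)
```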
Ok, thank you very much :)
Got this error today when editing llama2-7b with model_parallel: true and fp16: true. I assume the error is likely caused by the fp16 option?