Open: breakices opened this issue 1 month ago
Update: the problem was solved when I switched to two 48GB L20s! One of them is fully used, while the other still has about 30 GB to spare.
Firstly, thank you very much for your impressive work! However, I am running into out-of-memory errors when trying to run baseline_easyedit.py. I have tried two and three 24GB 4090s respectively, and both setups run out of memory. I suspect this is caused by the ROME algorithm not running in parallel: card 0 runs out of memory while executing compute_u.py -> layer_stats.py, even though cards 1 and 2 still have about 12 GB free. I used vicuna-7b and configured the two corresponding YAML files in the Config folder before running. Could you share your experimental setup, or, if it is convenient, help me debug this? I would very much like to build on your code, as it is very inspiring. Thank you for your patience!
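For anyone hitting the same wall before upgrading GPUs: a hedged sketch of one thing to try, assuming the harness loads the model through Hugging Face transformers. Sharding the model across all visible GPUs with `device_map="auto"` (requires the accelerate package) keeps the weights from being pinned to card 0. The checkpoint id below is an assumed example, and whether EasyEdit's ROME path accepts a sharded model is an assumption to verify; ROME's covariance statistics may still allocate on a single device.

```python
# Hedged sketch, not the repo's code: load vicuna-7b sharded across GPUs
# so the forward passes are not confined to card 0.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "lmsys/vicuna-7b-v1.5"  # assumed checkpoint id, adjust to your config

tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # halves weight/activation memory vs fp32
    device_map="auto",          # shard layers across all visible GPUs (needs accelerate)
)
```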
My GPUs are also a few 4090s, and I encountered the same problem. My workaround is to reduce batch_tokens in KnowledgeSpread/simulation/easyeditor/models/rome/layer_stats.py to 35000.
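For context, here is a hedged sketch of the token-budget batching idea behind that knob (the function and names are illustrative, not the repo's exact code): the statistics collector packs dataset sequences into batches whose total token count stays under batch_tokens, so lowering the budget means more, smaller batches and a smaller peak of activations per forward pass.

```python
# Hedged, self-contained sketch of token-budget batching; illustrative only.
from typing import Iterable, List

def batch_by_token_budget(lengths: Iterable[int], batch_tokens: int) -> List[List[int]]:
    """Group sequence indices so each batch's total token count stays under budget."""
    batches, current, used = [], [], 0
    for i, n in enumerate(lengths):
        # Start a new batch once adding this sequence would exceed the budget.
        if current and used + n > batch_tokens:
            batches.append(current)
            current, used = [], 0
        current.append(i)
        used += n
    if current:
        batches.append(current)
    return batches

# 100 sequences of 512 tokens each: a tighter budget yields more batches,
# so each forward pass holds fewer activations in GPU memory.
print(len(batch_by_token_budget([512] * 100, 35000)))   # smaller budget, more batches
print(len(batch_by_token_budget([512] * 100, 131072)))  # larger budget, fewer batches
```

The trade-off is straightforward: more batches means more forward passes over the statistics dataset, so the covariance collection step gets slower but fits in 24 GB.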