kevinyaobytedance / llm_unlearn

LLM Unlearning
MIT License

OOM on larger models #1

Open fuzzall opened 1 year ago

fuzzall commented 1 year ago

Hi Authors,

Thank you for sharing your code! However, I ran into an out-of-memory (OOM) error on larger models (such as Llama-7b or Vicuna-7b). I am using 80GB A100 GPUs. Could you share your configurations for these models? Thank you!

JaiDoshi commented 9 months ago

Were you able to resolve this?

Xiang-Pan commented 2 months ago

It still fails even with 4 A100s.
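
For anyone hitting the same wall, below is a minimal sketch (not the authors' configuration) of the usual memory-saving levers for fitting a ~7B model on a single 80GB A100 with the HuggingFace stack: bf16 weights, gradient checkpointing, and LoRA adapters via `peft` so that gradients and optimizer state only cover a small number of parameters. The model name and hyperparameters below are illustrative placeholders, not values taken from this repo.

```python
# Sketch only: common memory-reduction options for fine-tuning/unlearning a ~7B
# causal LM on one 80GB A100. Model name and hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; any ~7B causal LM

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # halve weight memory relative to fp32
)
model.gradient_checkpointing_enable()  # trade extra compute for activation memory

# Train low-rank adapters instead of all ~7B parameters (LoRA). This also
# removes most of the optimizer-state memory, which is what usually OOMs.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Optimizer only sees the trainable (adapter) parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)
```

If this still overflows, the remaining memory is typically activations: reducing the per-device batch size or sequence length, or sharding optimizer state and gradients across GPUs (e.g. DeepSpeed ZeRO or FSDP via `accelerate`), are the usual next steps.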