Closed: Awyshw closed this issue 1 year ago
When I use torchrun to finetune on 2 A100 nodes with 8 GPUs each, I get an OOM error. Can you share your training parameters?
Hi @Awyshw,
They're in the README. I used 2 RTX 8000 GPUs with 48 GB each.
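When a run that fits on two 48 GB cards OOMs on a 16-GPU cluster, the usual suspect is the per-device micro batch size versus gradient accumulation. A minimal sketch of the arithmetic, assuming alpaca-lora-style flag names (`batch_size`, `micro_batch_size`); this is an illustration of the trade-off, not a verified copy of the project's `finetune.py`:

```python
# Hedged sketch: how the number of gradient accumulation steps falls out of
# a global batch size, a per-device micro batch size, and the world size.
# The names batch_size / micro_batch_size mirror alpaca-lora's CLI flags
# but are assumptions here.
def accumulation_steps(batch_size: int, micro_batch_size: int, world_size: int) -> int:
    # Each optimizer step consumes batch_size examples in total; each GPU
    # runs micro_batch_size examples per forward/backward pass, so the
    # remainder is made up by accumulating gradients.
    per_step = micro_batch_size * world_size
    if batch_size % per_step != 0:
        raise ValueError("batch_size must be divisible by micro_batch_size * world_size")
    return batch_size // per_step

# On 2 nodes x 8 GPUs (world_size = 16), keeping the global batch fixed
# while shrinking the per-GPU micro batch is the standard OOM mitigation:
print(accumulation_steps(128, 4, 16))  # 2 accumulation steps
print(accumulation_steps(128, 1, 16))  # 8 accumulation steps, lowest peak memory
```

Shrinking `micro_batch_size` reduces peak activation memory on each GPU without changing the effective batch size, at the cost of more accumulation passes per optimizer step.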
Maybe https://github.com/tloen/alpaca-lora/pull/361 is relevant?