Thanks for your amazing work!
I'm running into out-of-memory errors when using Llama-X's fine-tuning code to do supervised fine-tuning of the 70B model (Llama-2-13B trains without this problem), on a setup of 3 nodes with 8×A100 (40 GB) each.
Could you share the training configuration you used, if possible? Thanks a lot!
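In case it helps others hitting the same wall: a common way to fit a 70B model on 40 GB A100s is DeepSpeed ZeRO-3 with optimizer and parameter offload to CPU. The snippet below is only a hedged sketch of such a config, not the configuration the authors actually used; the batch size and every other value here are assumptions to adapt to your cluster:

```json
{
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true },
    "overlap_comm": true,
    "contiguous_gradients": true,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto"
}
```

With stage-3 sharding across all 24 GPUs plus CPU offload, the per-GPU footprint of the weights and optimizer states drops substantially, at the cost of slower steps from host-device transfers. Gradient checkpointing on the model side usually has to be enabled as well for 70B at this memory budget.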