AetherCortex / Llama-X

Open Academic Research on Improving LLaMA to SOTA LLM

About llama-2-70B fine-tuning #31

Open RickMeow opened 10 months ago

RickMeow commented 10 months ago

Thanks for your amazing work!

I'm running into out-of-memory errors when using Llama-X's fine-tuning code for supervised fine-tuning of Llama-2-70B (Llama-2-13B trains without this problem), on a setup of 3 nodes with 8×A100 (40 GB) each.

So I would like to ask about the training configuration you used for the 70B model, if possible. Thanks a lot!
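For context, below is a minimal sketch of the kind of DeepSpeed ZeRO-3 config (with CPU offload) I have been experimenting with to try to fit 70B on 40 GB cards. This assumes the Llama-X training script is launched through DeepSpeed, and the values here are illustrative rather than the exact Llama-X defaults:

```python
# Minimal sketch: ZeRO-3 with optimizer/parameter offload to CPU,
# written out as a DeepSpeed JSON config. Batch sizes and steps are
# illustrative assumptions, not values taken from the Llama-X repo.
import json

ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,  # shard parameters, gradients, and optimizer states
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "cpu", "pin_memory": True},
        "overlap_comm": True,
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "train_micro_batch_size_per_gpu": 1,  # keep the per-GPU micro-batch tiny
    "gradient_accumulation_steps": 8,
    "gradient_clipping": 1.0,
}

with open("ds_config_zero3_offload.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

I then pass this file to the launcher (something along the lines of `deepspeed train.py --deepspeed ds_config_zero3_offload.json ...`, with the other arguments unchanged), but I still hit OOM, which is why I'm asking about the configuration you used.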