epfLLM / Megatron-LLM

distributed trainer for LLMs

Is 8x A100 80GB enough to finetune 70B Llama 2? #52

Closed james2v closed 1 year ago

panx27 commented 1 year ago

I think the minimum is 32 * A100 80GB https://github.com/epfLLM/Megatron-LLM/blob/main/docs/guide/faq.md#what-are-the-basic-hardware-requirements
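For intuition on why 8 GPUs fall short for full fine-tuning, here is a rough back-of-the-envelope estimate. This is only a sketch: it assumes standard mixed-precision Adam (bf16 weights and gradients plus fp32 master weights and moments, roughly 16 bytes per parameter) and ignores activation memory and parallelism overhead, which is why the FAQ's 32-GPU figure at 4k sequence length is higher than this lower bound.

```python
# Rough lower bound on memory for full fine-tuning of a 70B model (not LoRA).
# Assumption: mixed-precision Adam ~= 16 bytes/parameter for weights, grads,
# fp32 master weights and the two Adam moments. Activations are ignored.
params = 70e9
bytes_per_param = 2 + 2 + 4 + 4 + 4   # bf16 weights, bf16 grads, fp32 master, Adam m, Adam v
state_gib = params * bytes_per_param / 2**30
gpu_gib = 80
print(f"~{state_gib:.0f} GiB of weight/optimizer state "
      f"(~{state_gib / gpu_gib:.1f}x A100 80GB before activations)")
```

This alone works out to roughly 1 TiB of state, i.e. more than a dozen 80GB GPUs before any activations, so 8x A100 80GB cannot hold a full fine-tune.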

james2v commented 1 year ago

> I think the minimum is 32 * A100 80GB https://github.com/epfLLM/Megatron-LLM/blob/main/docs/guide/faq.md#what-are-the-basic-hardware-requirements

Thank you! I might tune 70B Llama 2 with LoRA then.

AleHD commented 1 year ago

Correct, 32x A100 80GB is the minimum requirement we have been able to achieve when using a sequence length of 4k.

JumpingRain commented 1 year ago

> I think the minimum is 32 * A100 80GB https://github.com/epfLLM/Megatron-LLM/blob/main/docs/guide/faq.md#what-are-the-basic-hardware-requirements
>
> Thank you! I might tune 70B Llama 2 with LoRA then.

Can you run Llama 2 70B with LoRA?

james2v commented 1 year ago

> I think the minimum is 32 * A100 80GB https://github.com/epfLLM/Megatron-LLM/blob/main/docs/guide/faq.md#what-are-the-basic-hardware-requirements
>
> Thank you! I might tune 70B Llama 2 with LoRA then.
>
> Can you run Llama 2 70B with LoRA?

I load it in 8-bit to train it, but I use a different setup to run it.
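For reference, a minimal sketch of what 8-bit LoRA fine-tuning along these lines could look like, using the Hugging Face transformers + peft + bitsandbytes stack rather than Megatron-LLM itself. The model id, LoRA rank, and target modules are illustrative placeholders, not settings confirmed in this thread.

```python
# Sketch: LoRA fine-tuning of Llama 2 70B loaded in 8-bit (HF stack, not Megatron-LLM).
# Model id and hyperparameters below are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-70b-hf"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit base weights
    device_map="auto",      # shard the frozen base model across available GPUs
    torch_dtype=torch.float16,
)

# Freeze the quantized base model and enable gradient checkpointing for training.
model = prepare_model_for_kbit_training(model)

# Train only small low-rank adapters on the attention projections.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the 70B parameters
```

With the base weights in 8-bit (roughly 70 GB total) and only the adapters receiving gradients, fitting the training run on 8x A100 80GB is plausible, though activation memory at long sequence lengths still needs gradient checkpointing and modest batch sizes.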