Open hxssgaa opened 6 months ago
Is there a plan to support PEFT methods like LoRA training in maxtext to support larger model fine-tuning / continued pretraining, so that bigger models like LLaMA-3-70B can be trained even with a small number of TPUs/GPUs?
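For context, a minimal sketch of the LoRA idea in plain Flax linen is below; it is illustrative only and assumes nothing about MaxText internals (`LoRADense`, `rank`, and `alpha` are hypothetical names, not MaxText APIs). The base weight is frozen and only two small low-rank matrices are trained, which is what makes fine-tuning large models feasible on limited hardware.

```python
import jax
import jax.numpy as jnp
from flax import linen as nn


class LoRADense(nn.Module):
    """Dense layer with a frozen base weight plus a trainable low-rank update."""
    features: int
    rank: int = 8          # low-rank dimension r << min(in_features, out_features)
    alpha: float = 16.0    # scaling applied to the low-rank update

    @nn.compact
    def __call__(self, x):
        in_features = x.shape[-1]
        # Base weight: frozen under LoRA (gradient stopped below); in practice
        # this would be loaded from the pretrained checkpoint.
        w = self.param("kernel", nn.initializers.lecun_normal(),
                       (in_features, self.features))
        # Low-rank adapters: A is randomly initialized, B starts at zero so the
        # adapted layer initially matches the base layer exactly.
        lora_a = self.param("lora_a", nn.initializers.normal(stddev=0.02),
                            (in_features, self.rank))
        lora_b = self.param("lora_b", nn.initializers.zeros,
                            (self.rank, self.features))
        scale = self.alpha / self.rank
        return x @ jax.lax.stop_gradient(w) + scale * (x @ lora_a @ lora_b)


# Usage example: only lora_a / lora_b receive gradients, so trainable
# parameters scale as rank * (in + out) instead of in * out.
layer = LoRADense(features=1024, rank=8)
params = layer.init(jax.random.PRNGKey(0), jnp.ones((2, 1024)))
out = layer.apply(params, jnp.ones((2, 1024)))
```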
Any updates on when LoRA support would be available?
This is on our roadmap with high priority, will update here once we start working on it.