brooks0519 opened 4 months ago
Thanks for your great work! A question about LoRA fine-tuning: what are the minimum server resources (GPU memory and system memory) required to fine-tune a LoRA model?
~24 GB of VRAM. Fine-tuning works on an NVIDIA RTX 3090 with `--batch_size 1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --gradient_accumulation_steps 8 --max_length 512`.
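For reference, a full invocation with those flags might look like the sketch below. The script name `finetune.py` and the `--lora` flag are assumptions for illustration; substitute this repository's actual training entry point and any model/data arguments it requires.

```shell
# Hypothetical invocation -- script name and --lora flag are assumptions,
# only the memory-related flags below come from the answer above.
python finetune.py \
  --lora \
  --batch_size 1 \
  --per_device_train_batch_size 1 \
  --per_device_eval_batch_size 1 \
  --gradient_accumulation_steps 8 \
  --max_length 512
```

The combination of per-device batch size 1 with gradient accumulation over 8 steps gives an effective batch size of 8 while keeping peak activation memory low enough for a single 24 GB card.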