InternLM / InternLM-XComposer

InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output

Server resources required for fine-tuning a LoRA model #191

[Open] brooks0519 opened this issue 4 months ago

brooks0519 commented 4 months ago

Thanks for your great work. A question about LoRA fine-tuning: what are the minimum server resources (GPU memory and system memory) required to fine-tune a LoRA model?

iFe1er commented 4 months ago

  1. Same question here.
  2. How can a fine-tuned model be manually converted to an INT4 version? Any reply would be much appreciated. @yuhangzang
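
(For reference, since no maintainer has answered the INT4 question in this thread: one route commonly used for InternLM-family models is LMDeploy's AWQ quantization. The sketch below is an assumption, not a procedure confirmed by this repo; it presumes the LoRA weights have already been merged into the base model, and `./merged-model` is a placeholder path.)

```bash
# Sketch under assumptions: the LoRA adapters are already merged into the
# base model (e.g. via peft's merge_and_unload) and saved to ./merged-model.
# LMDeploy's auto_awq writes 4-bit (W4A16) weights into --work-dir.
pip install lmdeploy

lmdeploy lite auto_awq ./merged-model --work-dir ./merged-model-4bit
```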
thonglv21 commented 3 months ago

About 24 GB of VRAM is enough: `--batch_size 1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --gradient_accumulation_steps 8 --max_length 512` works on an NVIDIA RTX 3090.
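
Assembled into a launch command, that configuration might look like the sketch below. Only the batch/accumulation/length flags come from the comment above; the launcher, script name (`finetune.py`), model path, and `--use_lora` switch are assumptions based on typical fine-tuning setups, not confirmed specifics of this repo.

```bash
# Single-GPU LoRA fine-tuning sketch (~24 GB VRAM per the comment above).
# finetune.py, the model path, and --use_lora are assumptions; the batch,
# accumulation, and length flags are taken verbatim from the comment.
torchrun --nproc_per_node=1 finetune.py \
    --model_name_or_path internlm/internlm-xcomposer2-vl-7b \
    --use_lora True \
    --batch_size 1 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --max_length 512
```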