Open zhongruizhe123 opened 3 months ago
How much GPU memory does Critic training require at a minimum? I used two 3090 Ti 24 GB cards, but I get an out-of-memory error. Is there a way to adjust some parameters so my run can work?

24 GB of VRAM is not enough for full-weight fine-tuning of a 7B model, which usually requires 4×80 GB GPUs. If you only have two 3090 Ti 24 GB cards, I recommend using LoRA or QLoRA instead; this loses a little performance, but it fits in memory. If you don't know how to configure LoRA or QLoRA, you can refer to the LoRA fine-tuning script I configured: https://github.com/fate-ubw/RAGLAB/blob/main/run/rag_train/script_finetune-llama3-8B-baseline-Lora.sh All you need to do is switch the training data and the output model name.
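As a rough sanity check, the arithmetic below shows why full-weight fine-tuning of a 7B model overflows a 24 GB card while LoRA/QLoRA fits. This is a minimal sketch, not part of the RAGLAB code: the bytes-per-parameter figures are the common mixed-precision AdamW assumptions, and activations/gradient buffers are excluded, so treat the numbers as lower bounds.

```python
# Back-of-the-envelope GPU-memory estimate for fine-tuning a 7B model.
# Hypothetical helper for illustration only; activations and framework
# overhead are NOT counted, so real usage needs extra headroom.

def finetune_mem_gb(n_params: float, bytes_per_param: float) -> float:
    """Memory in GiB for weights + gradients + optimizer states."""
    return n_params * bytes_per_param / 1024**3

N = 7e9  # 7B parameters

# Full fine-tuning, mixed-precision AdamW:
# fp16 weights (2) + fp16 grads (2) + fp32 master weights (4)
# + fp32 Adam first/second moments (4 + 4) = 16 bytes/param.
full = finetune_mem_gb(N, 16)    # ≈ 104 GiB -> needs multiple 80 GB GPUs

# LoRA: base weights frozen in fp16 (2 bytes/param);
# only the tiny adapter matrices carry gradients and optimizer states.
lora = finetune_mem_gb(N, 2)     # ≈ 13 GiB -> fits one 24 GB 3090 Ti

# QLoRA: base weights quantized to 4 bits (0.5 bytes/param).
qlora = finetune_mem_gb(N, 0.5)  # ≈ 3.3 GiB for the frozen base

print(f"full fine-tune: {full:6.1f} GiB")
print(f"LoRA base     : {lora:6.1f} GiB")
print(f"QLoRA base    : {qlora:6.1f} GiB")
```

This is why switching the script above to LoRA (or QLoRA) is the practical fix: the frozen base model is the dominant cost, and it drops from ~100+ GiB of trainable state to a read-only footprint that two 24 GB cards can hold.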