Rain-yj opened 5 days ago
When training llama-7b on the gsm8k task, I changed the rank in model-args to 32, but the LoRA rank printed for the model is still 64. Why is that?
I changed it in both places, yet the printed value is still 64.