artidoro / qlora

QLoRA: Efficient Finetuning of Quantized LLMs
https://arxiv.org/abs/2305.14314
MIT License

epoch presented does not match the calculation #252

Open lijierui opened 11 months ago

lijierui commented 11 months ago

I'm training on my custom training set with 9k+ data points, using the following hyperparameters on 3 GPUs:

```
--per_device_train_batch_size 3 \
--gradient_accumulation_steps 1 \
--max_steps 1600 \
```

That should work out to 1600 * 3 * 3 = 14400 samples, or roughly 1.6 epochs by the end of training. However, the log only shows 'epoch': 0.52 at the end, which looks like the 3 GPUs are not being taken into account. Is the reported epoch accurate?
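For reference, here is a minimal sketch of the arithmetic behind my expectation. It assumes the effective batch size multiplies across GPUs under data-parallel training, and it uses an approximate dataset size of 9200 (my exact count is only "9k+"):

```python
# Hypothetical numbers matching my run; dataset_size is an approximation.
per_device_train_batch_size = 3
gradient_accumulation_steps = 1
num_gpus = 3
max_steps = 1600
dataset_size = 9200  # my custom training set has 9k+ examples

# Samples consumed per optimizer step if all 3 GPUs contribute (data parallel).
samples_per_step = per_device_train_batch_size * gradient_accumulation_steps * num_gpus

expected_epochs = max_steps * samples_per_step / dataset_size
print(f"expected: {expected_epochs:.2f}")  # ~1.57 epochs

# What the log appears to report instead: a single-GPU view that ignores num_gpus.
logged_epochs = max_steps * per_device_train_batch_size * gradient_accumulation_steps / dataset_size
print(f"logged:   {logged_epochs:.2f}")    # ~0.52 epochs
```

The ~0.52 value in my log matches the single-GPU calculation almost exactly, which is why I suspect the epoch counter is not accounting for the other two GPUs.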