22-hours / cabrita

Fine-tuning InstructLLaMA with Portuguese data

About the training time on Google Colab A100 #11

Open ngoanpv opened 1 year ago

ngoanpv commented 1 year ago

Greetings, I used the same configuration as in your Google Colab notebook to fine-tune on the Alpaca dataset, but the training takes considerably longer than you report: about 1 hour for 70 steps, with an estimated total of roughly 23 hours for 3 epochs. I would appreciate your insight into what might explain the difference. Thank you.

import transformers

# NOTE: `model`, `tokenizer` and `data` are assumed to be defined earlier in the
# notebook (the LLaMA model with its LoRA adapter attached via peft, the matching
# tokenizer, and the tokenized Alpaca dataset).

MICRO_BATCH_SIZE = 4  # this could actually be 5 but i like powers of 2
BATCH_SIZE = 128
GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE  # 128 // 4 = 32
EPOCHS = 3  # we don't need 3 tbh
LEARNING_RATE = 3e-4  # the Karpathy constant
CUTOFF_LEN = 256  # 256 accounts for about 96% of the data
LORA_R = 8  # LoRA hyperparameters, used when building the adapter earlier in the notebook
LORA_ALPHA = 16
LORA_DROPOUT = 0.05

trainer = transformers.Trainer(
    model=model,
    train_dataset=data["train"],
    args=transformers.TrainingArguments(
        per_device_train_batch_size=MICRO_BATCH_SIZE,
        gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,
        warmup_steps=100,
        num_train_epochs=EPOCHS,
        learning_rate=LEARNING_RATE,
        fp16=True,
        logging_steps=20,
        output_dir="lora-alpaca",
        save_total_limit=3,
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False  # the KV cache is only useful for generation; disable it during training
trainer.train(resume_from_checkpoint=False)
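
For reference, here is a back-of-the-envelope check of the step count and ETA implied by that configuration. It is a minimal sketch assuming the standard ~52k-example Alpaca dataset and the ~70 steps/hour pace reported above; those two numbers are assumptions taken from this thread and general knowledge of the dataset, not measurements.

# Rough sanity check of the expected training time for the config above.
DATASET_SIZE = 52_000              # assumption: size of the standard Alpaca dataset
MICRO_BATCH_SIZE = 4
GRADIENT_ACCUMULATION_STEPS = 32
EPOCHS = 3
STEPS_PER_HOUR = 70                # assumption: pace reported in this issue (~70 steps per hour)

effective_batch = MICRO_BATCH_SIZE * GRADIENT_ACCUMULATION_STEPS  # 128 examples per optimizer step
steps_per_epoch = DATASET_SIZE // effective_batch                 # ~406 optimizer steps
total_steps = steps_per_epoch * EPOCHS                            # ~1218 optimizer steps
estimated_hours = total_steps / STEPS_PER_HOUR                    # ~17-18 hours at the reported pace

print(f"{steps_per_epoch} steps/epoch, {total_steps} total steps, ~{estimated_hours:.1f} h")

At ~70 steps/hour the step count alone already puts a full 3-epoch run in the tens-of-hours range; whether the final figure is closer to 17 or 23 hours then comes down to the exact per-step time on the A100 and the actual dataset size.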