unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

Issue with running trainer.train() #926

Open dilerbatu opened 3 months ago

dilerbatu commented 3 months ago

Hey Everyone,

I was trying to finetune gemma-2-2b-it on my local PC, which has an A3000 GPU. I followed the conda install method.

This is my trainer:

```python
from trl import SFTTrainer
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 2,
    packing = False, # Can make training 5x faster for short sequences.
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 60, # Note: max_steps overrides num_train_epochs when both are set.
        num_train_epochs = 10,
        learning_rate = 2e-4,
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)
```
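For context, `model` and `tokenizer` come from the usual Unsloth 4-bit loading flow. A rough sketch of that part is below; the model name and LoRA values are illustrative of my setup, not an exact copy:

```python
from unsloth import FastLanguageModel

max_seq_length = 2048  # illustrative; the same value is passed to SFTTrainer above

# Load the 4-bit Gemma 2 2B instruct model (assumed model name, adjust as needed).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gemma-2-2b-it",
    max_seq_length = max_seq_length,
    load_in_4bit = True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
)
```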

When I start the training process, I get this error: `TypeError: is_bf16_supported() got an unexpected keyword argument 'including_emulation'`
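As a temporary workaround I wondered about making the check tolerate that keyword, assuming the mismatch is between PyTorch versions that do and do not accept `including_emulation` on `torch.cuda.is_bf16_supported`. This is only a sketch, not a proper fix:

```python
import torch

# Sketch: wrap the capability check so callers can pass `including_emulation`
# whether or not the underlying function accepts it (an assumption about the
# cause, not the official Unsloth fix).
_orig_is_bf16_supported = torch.cuda.is_bf16_supported

def _tolerant_is_bf16_supported(including_emulation: bool = True) -> bool:
    try:
        # Newer PyTorch signatures accept the keyword.
        return _orig_is_bf16_supported(including_emulation = including_emulation)
    except TypeError:
        # Older signatures take no arguments; ignore the keyword and fall back.
        return _orig_is_bf16_supported()

torch.cuda.is_bf16_supported = _tolerant_is_bf16_supported
```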

Has anyone seen this error before? Thanks.

danielhanchen commented 2 months ago

Just fixed, apologies - please reinstall Unsloth via:

```bash
pip uninstall unsloth -y
pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
```
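If you want to confirm the reinstall picked up the fix, a quick check like this should now run without the `TypeError`:

```python
# The same capability probe used by the fp16/bf16 flags in the training args.
from unsloth import is_bfloat16_supported
print(is_bfloat16_supported())
```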