unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0
18.37k stars 1.28k forks

fix/sfttrainer-compatibility #1293

Closed Erland366 closed 1 week ago

Erland366 commented 1 week ago

I missed this yesterday: in trainer.py, TrainingArguments is actually SFTConfig. Therefore we will not move anything into SFTConfig afterwards in the SFTTrainer patch.

Erland366 commented 1 week ago

(screenshot)

This will give an empty set, because it's basically set(SFTConfig) - set(SFTConfig).
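A minimal sketch of why the difference collapses to nothing: if the name TrainingArguments has been rebound to SFTConfig inside trainer.py, then any "extra SFTConfig-only fields" computed as a set difference between the two classes is empty. The class bodies and variable names below are hypothetical stand-ins, not the actual unsloth code:

```python
# Hypothetical stand-in for trl's SFTConfig (fields are made up).
class SFTConfig:
    max_seq_length = 2048
    packing = False
    output_dir = "outputs"

# In trainer.py, the name TrainingArguments actually refers to SFTConfig,
# so the two names point at the very same class object.
TrainingArguments = SFTConfig

# Attempting to find fields present in SFTConfig but not in
# TrainingArguments therefore yields the empty set.
extra_fields = set(vars(SFTConfig)) - set(vars(TrainingArguments))
print(extra_fields)  # -> set(), nothing left to move into SFTConfig
```

Since both names resolve to the same class, there are no fields left over for the patch to migrate, which is why the original approach silently did nothing.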