unslothai / unsloth

Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

Support for gemma2 #704

Open x1250 opened 4 days ago

x1250 commented 4 days ago

Hi guys, first of all, many thanks for this project. It has allowed me to finetune models that I couldn't manage with other tools. Now I want to finetune Gemma 2, which looks amazing, but I can't, because it is not yet supported. Here is the error I get:

🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
[2024-06-29 03:11:02,545] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.3.1), only 1.0.0 is known to be compatible
Traceback (most recent call last):
  File "/home/daniel/Python/Langchain/Training/ClassifierOptica/TrainUnsloth.py", line 34, in <module>
    model, tokenizer = FastLanguageModel.from_pretrained(
  File "/home/daniel/Python/Langchain/.venv/lib/python3.10/site-packages/unsloth/models/loader.py", line 127, in from_pretrained
    raise NotImplementedError(
NotImplementedError: Unsloth: google/gemma-2-9b-it not supported yet! Make an issue to https://github.com/unslothai/unsloth!
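
For reference, a minimal sketch of the kind of loading call that hits this error; my actual TrainUnsloth.py is longer, so everything below except the model name is just a placeholder:

# Minimal sketch of the failing load (values other than model_name are placeholders)
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "google/gemma-2-9b-it",
    max_seq_length = 2048,   # placeholder
    dtype = None,            # auto-detect
    load_in_4bit = True,     # placeholder
)

The NotImplementedError is raised directly in unsloth/models/loader.py, so any gemma-2 checkpoint fails at this point regardless of the rest of the script.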

SantoshGuptaML commented 2 days ago

Same issue with the 27B model.

danielhanchen commented 2 days ago

Yep, working on it! Apologies for the delay! I relocated to SF, so my brother and I are still in the unpacking phase!!