-
I tried to merge adapter_model.safetensors and unsloth.Q8_0.gguf using your tool. Both files were taken from here: https://huggingface.co/klei1/bleta-8b. I got this error:
![image](https://github.com/user-…
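One common way around errors like this is to merge the adapter into the full-precision base model with peft first, and only then convert/quantize to GGUF, rather than combining a safetensors LoRA with an already-quantized Q8_0 file. A minimal sketch, assuming the adapter was trained on Llama 3 8B (an assumption; the actual base model is recorded in the repo's adapter_config.json):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base model; replace with the base listed in adapter_config.json.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    torch_dtype="auto",
)

# Repo (or local folder) holding adapter_model.safetensors + adapter_config.json.
model = PeftModel.from_pretrained(base, "klei1/bleta-8b")

# Fold the LoRA deltas into the base weights and save a standalone model,
# which can then be converted to GGUF with llama.cpp's conversion script.
merged = model.merge_and_unload()
merged.save_pretrained("bleta-8b-merged")
AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B").save_pretrained("bleta-8b-merged")
```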
-
When FA2 is enabled ("FA2=True" shows up when tuning),
"Unsloth 2024.8: Fast Llama patching. Transformers = 4.44.2.
\\ /| GPU: NVIDIA GeForce RTX 4090. Max memory: 23.617 GB. Platform = Li…
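As a side note (not from this issue), the FA2 flag in that banner generally tracks whether the flash_attn package is importable in the environment, which can be checked directly:

```python
# Hypothetical sanity check: is the flash_attn package that backs FA2 importable?
import importlib.util
print("flash_attn installed:", importlib.util.find_spec("flash_attn") is not None)
```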
-
I am now trying to fine-tune a Llama 3 model. I am using Unsloth:
`from unsloth import FastLanguageModel`
Then I load the Llama 3 model.
```
model, tokenizer = FastLanguageModel.from_pretrained(
…
```
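For completeness, a minimal sketch of what that loading call typically looks like for Llama 3, assuming the 4-bit Unsloth checkpoint `unsloth/llama-3-8b-bnb-4bit` (the exact arguments in the snippet above are truncated):

```python
from unsloth import FastLanguageModel

max_seq_length = 2048  # pick to fit your data and GPU memory

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = None,         # None lets Unsloth choose float16/bfloat16 for the GPU
    load_in_4bit = True,  # 4-bit loading via bitsandbytes for QLoRA
)
```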
-
```
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gemma-2-9b-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = None,
    load_in_4bit = True)
```
I noticed models ar…
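The usual next step after loading is attaching LoRA adapters with Unsloth's helper; a sketch using the illustrative defaults from its example notebooks (the parameter values here are assumptions, not taken from this issue):

```python
# Attach LoRA adapters to the quantized base model for QLoRA fine-tuning.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                                  # LoRA rank
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = "unsloth",  # reduces VRAM use for long sequences
    random_state = 3407,
)
```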
-
unsloth version: 2024.8
llamafactory: 0.8.4.dev0
models: DeepSeek-Coder-1.3B, Gemma-2-2B
transformers: 4.43.4
xformers: 0.0.27.post2
trl: 0.9.6
torch: 2.4.0
cuda: 12.1
OS: Ubuntu 22.04
GPU: NVIDIA A6000…
-
![unslot metadata issue](https://github.com/unslothai/unsloth/assets/5505550/96dbeb31-527a-4163-a26c-d6310270b083)
when trying to install with the command: pip install "unsloth[colab-new] @ git+h…
-
Right now, when we fine-tune a LoRA on top of e.g. Llama 3.1 8B Instruct, even if model_name is `meta-llama/Meta-Llama-3.1-8B-Instruct`, it gets resolved to `unsloth/meta-llama-3.1-8b-instruct-bnb-4bit…
-
Hi. Raising this issue as I am experiencing much slower inference with Gemma-1 models.
> Environment:
> - xformers 0.0.26.post1 pypi_0 pypi
> - unsloth …
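For comparison, a minimal generation sketch, assuming a model and tokenizer loaded with `FastLanguageModel.from_pretrained` as elsewhere in this thread; `for_inference()` switches the model into Unsloth's faster generation path:

```python
from unsloth import FastLanguageModel

FastLanguageModel.for_inference(model)  # enable Unsloth's native inference mode

inputs = tokenizer("Write a short poem about GPUs.", return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 64)
print(tokenizer.decode(outputs[0], skip_special_tokens = True))
```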
-
RuntimeError: Unsloth: `unsloth/llama-3-8b-bnb-4bit` is not a full model or a PEFT model.
-
Hello, I get the error in the title when finetuning Phi3.5.
I believe I'm on the latest unsloth (installed from git with pip).
Context: finetuning Phi3.5 with code that already works with other u…