-
I've trained Gemma 2B in 16-bit with LoRA. With the adapters loaded separately, everything works just fine. But after merging the adapters, the model becomes unusable.
![image](https://github.c…
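For context, merging a LoRA adapter just folds the low-rank update into the base weights: W' = W + (alpha/r)·B·A. The minimal numpy sketch below (shapes, rank, and scaling are illustrative, not Gemma's) shows that a merge done in fp32 reproduces the separate-adapter path almost exactly, while performing the same fold in fp16 adds extra rounding — one reason a merged 16-bit model can behave differently from the adapter-loaded one (though a "literally unusable" model usually points to a bug rather than rounding alone).

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 64, 64, 8, 16
scale = alpha / r  # standard LoRA scaling

W = rng.standard_normal((d_out, d_in)).astype(np.float32)  # base weight
A = rng.standard_normal((r, d_in)).astype(np.float32)      # LoRA down-projection
B = rng.standard_normal((d_out, r)).astype(np.float32)     # LoRA up-projection
x = rng.standard_normal(d_in).astype(np.float32)

# Adapter kept separate: base path plus low-rank path
y_separate = W @ x + scale * (B @ (A @ x))

# Adapter merged into the base weight (fp32)
W_merged = W + scale * (B @ A)
y_merged = W_merged @ x
print(np.max(np.abs(y_separate - y_merged)))  # tiny fp32 rounding only

# The same merge performed in float16 picks up extra rounding error
W16_merged = (W.astype(np.float16)
              + (scale * (B @ A)).astype(np.float16)).astype(np.float32)
y_merged16 = W16_merged @ x
print(np.max(np.abs(y_separate - y_merged16)))  # noticeably larger
```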
-
I am testing TorchTune with some of the settings I trained my models on. My go-to single-device library has been Unsloth, as it provides great memory and time savings.
Based on my Llama 3 8B comparisons, the …
-
I'm using the 秋叶 (Aki) all-in-one package. Searching for the error, it said to roll torch back to 2.1 cuda118. I don't really understand this, so I just rolled it back inside the package.
![image](https://github.com/user-attachments/assets/027ed617-d648-4cf0-88e2-ebe50c45498a)
There were other problems before this: after I installed the dependencies under custom_nodes using the commands from the project homepage, the app crashed on launch. I followed other tutorials…
-
### System Info
- `transformers` version: 4.36.0
- Platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.35
- Python version: 3.11.4
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.3.3
…
-
When trying to load PEFT models from a specific revision, Unsloth attempts to load the base model with that same revision. This leads to the misleading error:
`RuntimeError: Unsloth: `chreh/tmp_model` i…
-
Hi,
I was looking to run `simple_evaluate` with my own `transformers` model and found #521 / #601, but it looks like they're only merged to `master`. Was this feature lost in the 0.4.0 migration? S…
-
Hi @danielhanchen ,
Tried to save a GGUF model, but got an error with the following code block:
```python
# Save to 8bit Q8_0
if True: model.save_pretrained_gguf("model", tokenizer)
```
The following error is thrown:
[/us…
-
### Reminder
- [X] I have read the README and searched the existing issues.
### Reproduction
Unsloth supports DoRA with the new update. I recommend that you remove the code in your software that pr…
-
### What kind of request is this?
None
### What is your request or suggestion?
https://github.com/unslothai/unsloth/releases/tag/May-2024
`2e1cb3888b2b6c9ea3bca56e808d0604b715f23a`
### …
-
Prerequisites
```bash
%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip instal…
Jiar updated 4 months ago