-
https://guanaco-model.github.io/
https://huggingface.co/datasets/JosephusCheung/GuanacoDataset
-
I was playing with the fine-tuning notebook mentioned in the README (`bnb-4bit-training.ipynb`) with just one change: instead of fine-tuning the specified `EleutherAI/gpt-neox-20b` model, I was doing the same for…
-
I'm trying to install Llama 2 13B Chat HF, Llama 3 8B, and Llama 2 13B (FP16) locally on my Windows gaming rig, which has dual RTX 4090 GPUs. I aim to access and run these models from the terminal offli…
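For spreading a model like this across two 24 GB cards, the usual approach with Hugging Face `transformers` is `device_map="auto"` plus a per-device memory cap. A minimal sketch of building that cap (the helper name, headroom value, and CPU budget are illustrative assumptions, not from the original post):

```python
# Sketch: build a max_memory map for transformers/accelerate device placement.
# Assumes two 24 GiB GPUs (e.g. dual RTX 4090); headroom_gib reserves room for
# activations, KV cache, and CUDA overhead so weights don't fill the card.

def build_max_memory(num_gpus: int, gpu_gib: int = 24, headroom_gib: int = 4,
                     cpu_gib: int = 64) -> dict:
    """Return a max_memory dict like {0: "20GiB", 1: "20GiB", "cpu": "64GiB"}."""
    per_gpu = f"{gpu_gib - headroom_gib}GiB"
    memory = {i: per_gpu for i in range(num_gpus)}
    memory["cpu"] = f"{cpu_gib}GiB"
    return memory

max_memory = build_max_memory(num_gpus=2)

# Typical usage (needs GPUs and downloaded weights, so not run here):
# import torch
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "meta-llama/Llama-2-13b-chat-hf",
#     device_map="auto",
#     max_memory=max_memory,
#     torch_dtype=torch.float16,
# )
```

With the caps in place, `accelerate` shards the layers across both GPUs and spills any remainder to CPU RAM.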
-
I use an A10, which has 24 GB of GPU memory. When I tried to run inference with guanaco-13b, I hit an OOM issue.
Here is the inference code that loads the model:
```
import torch
from peft import PeftModel
from tran…
```
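For a rough sense of why a 13B model OOMs on a 24 GB card, the weight footprint alone can be estimated from parameter count times bytes per parameter (a back-of-envelope sketch; it ignores activations and the KV cache, and the helper name is illustrative):

```python
def weight_gib(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB (weights only, no activations)."""
    return n_params * bytes_per_param / 1024**3

fp16_gib = weight_gib(13e9, 2.0)   # ~24.2 GiB: already at/over a 24 GiB A10
int4_gib = weight_gib(13e9, 0.5)   # ~6.1 GiB: fits with room for the KV cache

# The usual fix is 4-bit loading via bitsandbytes (BitsAndBytesConfig with
# load_in_4bit=True in transformers); CPU offload via device_map is a fallback.
```

This is why the fp16 checkpoint alone exhausts the A10 before any generation begins, while a quantized load leaves headroom.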
-
I fine-tuned BELLE on top of LLaMA with an expanded Chinese vocabulary, and the results are polarized. Do you have any good suggestions?
My repo: https://github.com/27182812/ChatGLM-LLaMA-c…
-
**Describe the solution you'd like**
It would be interesting to have an example showing how to use downloadable models directly.
**Describe alternatives you've considered**
I've considered us…
-
loading base model /models/guanaco-33b-merged...
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████| 7/7 [01:12…
╰─────────────────…
-
Current version:
commit 84156f179f91f519e48185414391d040112f2d34 (HEAD -> main, origin/main, origin/HEAD)
updated on Jun 3 2024
I tried to run the following script in example/scripts/stf.py:
…
-
I saw the statement "These are RWKV-4-Pile 1.5/3/7/14B models finetuned on Alpaca, CodeAlpaca, Guanaco, GPT4All, ShareGPT and more".
Since Alpaca is included, does that mean these models cannot be used commercially?
-
Hi,
I am trying to use "**TheBloke/WizardCoder-Guanaco-15B-V1.0-GGML**"; however, I am getting the following error:
```
GGML_ASSERT: /home/runner/work/ctransformers/ctransformers/models/ggml/ggml.c:410…