-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [Yes] I am running the latest code. Development is very rapid so there are no tagged versions as …
-
### Describe the bug
Okay, so I had been using this version of Oobabooga: 5447e751913b49f32555c59f69c93a2c5e8a77ff, downloaded and installed on July 19, before the LLaMA 2 updates.
Using this version I cou…
-
I'm trying to run autotrain on Windows 11 with an RTX 4090 and I'm having issues.
I have created a new Python 3.11.5 conda env, installed autotrain-advanced, run `autotrain setup --update-torch`, and installe…
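For reference, these are roughly the setup commands described above; the env name "autotrain" is just a placeholder.

```bash
# Create and activate a fresh conda env, then install and set up autotrain-advanced.
conda create -n autotrain python=3.11.5
conda activate autotrain
pip install autotrain-advanced
autotrain setup --update-torch
```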
-
In [this comment](https://github.com/huggingface/accelerate/issues/1239#issuecomment-1494331155), someone provided an example script for standard multi-node training with SLURM.
After making a…
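For context, a minimal sketch of the kind of SLURM launch script that comment describes; the node and GPU counts, port, and training script name below are placeholders, not my exact setup.

```bash
#!/bin/bash
#SBATCH --job-name=accelerate-multinode
#SBATCH --nodes=2                # placeholder node count
#SBATCH --ntasks-per-node=1      # one launcher per node; accelerate spawns the per-GPU workers
#SBATCH --gres=gpu:4             # placeholder GPU count per node

# Use the first node in the allocation as the rendezvous host.
MAIN_HOST=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

srun accelerate launch \
    --num_machines "$SLURM_NNODES" \
    --machine_rank "$SLURM_NODEID" \
    --num_processes 8 \
    --main_process_ip "$MAIN_HOST" \
    --main_process_port 29500 \
    train.py  # placeholder training script
```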
-
# 🐛 Bug
## Information
Model I am using: bert-base-cased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts…
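For reference, a sketch of roughly how I invoke the official text-classification example with bert-base-cased; the task name and output directory below are placeholders, not my exact values (the full command is in the truncated report above).

```bash
# Run the official run_glue.py example script with bert-base-cased.
python run_glue.py \
    --model_name_or_path bert-base-cased \
    --task_name mrpc \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --output_dir ./bert-output
```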
-
This is not an issue, just a report that it works great with Guanaco-65B-GPTQ-4bit.act-order.safetensors from TheBloke using 2x RTX 3090. Speed is great, about 15 t/s.
-
Following on from the discussion in the Llama 2 70B PR (https://github.com/ggerganov/llama.cpp/pull/2276):
Since that PR, converting Llama 2 70B models from Meta's original PTH format files works great.
…
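For context, this is roughly the conversion flow I am referring to; the checkpoint path, output file names, and the q4_0 quantization type are placeholders.

```bash
# Convert Meta's original PTH checkpoint directory to an F16 llama.cpp model file,
# then quantize it.
python3 convert.py /path/to/llama-2-70b --outtype f16 --outfile llama-2-70b-f16.bin
./quantize llama-2-70b-f16.bin llama-2-70b-q4_0.bin q4_0
```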
-
Where is the 'merge.json' file, or is it a file I need to create?
-
Please help.