-
### 🐛 Describe the bug
```
INFO colossalai - colossalai - INFO: Tokenizing inputs... This may take some time...
Episode [1/100]: 0%| …
```
-
Hi guys,
First of all, great video and fun project!
Here are three ideas that would be nice to implement:
1) Model selector, to be able to easily download and switch between models.
2) Bloom Pet…
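The model-selector idea above could be sketched as a small registry mapping short names to Hugging Face Hub ids, so models are easy to download and switch between. This is a minimal illustration, not the project's actual API; the registry entries and function names are assumptions:

```python
# Hypothetical registry for the "model selector" idea: short names -> Hub ids.
# These entries and this helper are illustrative assumptions.
MODEL_REGISTRY = {
    "bloomz-7b1-mt": "bigscience/bloomz-7b1-mt",
    "gpt2": "gpt2",
    "pythia-12b": "EleutherAI/pythia-12b",
}

def resolve_model(name: str) -> str:
    """Return the Hub id for a short model name, raising on unknown names."""
    try:
        return MODEL_REGISTRY[name]
    except KeyError:
        known = ", ".join(sorted(MODEL_REGISTRY))
        raise ValueError(f"Unknown model '{name}'. Choose one of: {known}")

# Downloading/switching would then be a one-liner, e.g.:
#   from transformers import AutoModelForCausalLM
#   model = AutoModelForCausalLM.from_pretrained(resolve_model("gpt2"))
```

A CLI flag or dropdown could then expose `MODEL_REGISTRY.keys()` directly as the list of selectable models.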
-
Add more examples for different models like Bloomz, Dolly, GPT etc.
-
### Branch/Tag/Commit
main
### Docker Image Version
nvcr.io/nvidia/pytorch:22.09-py3
### GPU name
V100-32G
### CUDA Driver
11.0
### Reproduced Steps
step 1: pull images w…
hurun updated 11 months ago
-
I used this repo to finetune bloomz-7b1-mt with the alpaca data (50k conversations) and the results are terrible. It takes 8 hours to train with the same arguments as used to finetune the llama. What cou…
-
Hey hey,
We are working hard to help you unlock the true potential of open-source LLMs. In order for us to build better and cater to the majority of hardware, we need your help to run benchmarks w…
-
I want to fine-tune [bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt), but I'm not sure which specific GitHub resource to refer to. Could you please point me to one?
-
Hello,
Which lines would one change to call model.generate on a local model on the same host?
I have a 16GB VRAM gaming GPU and have run local inference on bloomz-7B, RWKV 14B, and Pythia 12B.
I wa…
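For reference, a minimal sketch of loading a checkpoint from a local directory and calling model.generate with the Transformers `AutoModelForCausalLM`/`AutoTokenizer` API. The local path and generation parameters are assumptions, not values from this repo:

```python
# Sketch: run generate() on a locally saved model instead of pulling from the
# Hub. The directory path and generation settings below are assumptions.
def build_gen_kwargs(max_new_tokens: int = 64, temperature: float = 0.7) -> dict:
    """Generation settings to pass to model.generate (sampling enabled)."""
    return {
        "max_new_tokens": max_new_tokens,
        "do_sample": True,
        "temperature": temperature,
    }

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    local_dir = "/models/bloomz-7b1"  # hypothetical local checkpoint path
    tokenizer = AutoTokenizer.from_pretrained(local_dir)
    model = AutoModelForCausalLM.from_pretrained(local_dir, device_map="auto")

    inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, **build_gen_kwargs())
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Pointing `from_pretrained` at a directory containing the saved config, weights, and tokenizer files skips any Hub download entirely.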
-
I'm really interested in making a squad_v2 gpt-neo model (and bloomz if possible).
I wrote this post looking for help:
https://discuss.huggingface.co/t/gpt-neo-125m-squad-model/28012
https://www.reddit.…
-
I installed everything step by step.
I also tried a separate container, but got the same result there.
I get the following message when running autogpt4all.py or the .sh script:
```bash
root@d2c36eb3a44c:/home/autogpt4all# …
```