-
Is it possible to use Petals for inference/prompt tuning without sharing my GPU?
-
![98DDB13F-60AE-4F7D-8979-9B287A2A4CC1](https://user-images.githubusercontent.com/39515647/233412075-f68a9c2b-24c8-426c-80d3-6f2c0e48b1ca.png)
-
I'm trying to run https://github.com/bigscience-workshop/petals/blob/main/examples/prompt-tuning-personachat.ipynb and it fails with the default settings, raising these exceptions:
```
Feb 08 10:24:01.…
```
-
Hi, when I ran PPO with bloomz-7b1-mt and bloom-560m (prompt_len = answer_len = 256) with ZeRO stage 3 (8×A100-40G), the generation time seemed too slow (about 72 s on average). When I set ZeRO s…
-
Thanks for your project. I have a few requests. Most importantly, the models cannot translate more than one sentence (in most cases they stop translating after the first period), and the answers are c…
-
**LocalAI version:**
commit 3829aba869f8925dde7a1c9f280a4718dda3a18c/ docker 6102e12c4df1
**Environment, CPU architecture, OS, and Version:**
MacBook Air M2, Ventura 13.4
**Describe the …
-
Hello, and thank you for your contributions. I'm fairly new to the RLHF part, so I'd like to ask you a few questions and would appreciate your guidance:
1. If my base model is a different model, e.g. Baichuan2 or ChatGLM2, and I use custom training data for SFT, can I still use your released RLHF code in this setup?
2. If 1 is possible, that means I would need to retrain the RM and then run PPO; does your current code support this scenario?
3. …
-
Regarding the notebook: ✉️ MarketMail AI ✉️ Fine tuning BLOOMZ (Completed Version).ipynb
https://colab.research.google.com/drive/1ARmlaZZaKyAg6HTi57psFLPeh0hDRcPX?usp=sharing
I tried to modify the exa…
-
### Feature request
ggml is gaining traction (e.g. llama.cpp has 10k stars), and it would be great to extend optimum.exporters and enable the community to export PyTorch/TensorFlow transformers wei…
-
Hello! I'm currently running bloomz-petals in a Google Colab notebook to make use of the free GPU. However, I've recently started receiving the following error:
MissingBlocksError: No serv…