-
### Describe the feature
First of all, thank you for your great work! I'm curious why the loss isn't computed only on the target tokens. I read BLOOMZ's paper:
![image](https://user-images.githubuse…
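For context, the alternative the question alludes to (computing loss only on target tokens) is usually implemented by masking the prompt positions in the label sequence. A minimal sketch, assuming the Hugging Face convention that label index -100 is ignored by the cross-entropy loss; the token ids and prompt length are illustrative:

```python
# Sketch: mask prompt tokens so loss is computed only on target tokens.
# -100 is the label value that Hugging Face-style cross-entropy ignores.
IGNORE_INDEX = -100

def mask_prompt_labels(input_ids, prompt_len):
    """Copy input_ids into labels, replacing prompt positions with IGNORE_INDEX."""
    return [IGNORE_INDEX] * prompt_len + input_ids[prompt_len:]

# Illustrative ids: first 4 tokens are the prompt, the rest are the target.
input_ids = [101, 7592, 2088, 102, 2023, 2003, 1996, 4539]
labels = mask_prompt_labels(input_ids, prompt_len=4)
# labels -> [-100, -100, -100, -100, 2023, 2003, 1996, 4539]
```

With this masking, only the target positions contribute to the loss; training on the full sequence (as the repo apparently does) instead lets the prompt tokens contribute as well.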
-
Thank you for your great work!
I am interested in fine-tuning the mBLIP model for a low-resource language that currently has unsatisfactory performance in tasks such as image captioning. However, I …
-
It would be great if we had an LLM wrapper for the Forefront AI API. They offer a selection of open-source LLMs, such as GPT-J and GPT-NeoX.
-
In step two: /xxx/bloomz-7b1-mt", # this is the key part: after downloading the model, change this to the path of your own original model.
How do I download this original model? Is it a bin file?
-
Hi, I am trying to evaluate my model on BBH with/without CoT, but all task results end up being 0.0. I am quite inexperienced, so please keep that in mind when helping me out. Other tasks I've tried …
-
### System Info
Hello Team,
I am following the https://huggingface.co/docs/transformers/tasks/summarization tutorial for summarization. We have a TGI server and wanted to check if we can use TGI se…
-
I'm using macOS and have everything installed. Now I'm trying to run the code recommended in the docs:
```
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM…
-
Hi,
Thanks a lot for the excellent work.
Could you share pretrained weights for Bloom and BloomZ (4-bit)?
-
Hello, I evaluated bloomz after fine-tuning it with LoRA (https://github.com/tloen/alpaca-lora); the command is:
"""
python main.py \
--model hf-causal-experimental \
--model_args pretrained…
-
**Is your feature request related to a problem? Please describe.**
No support for OpenCL.
**Describe the solution you'd like**
Implementation of CLBlast to provide support for OpenCL.
**Descri…