-
When running `python3 -m fastchat.serve.cli --model-path /model/path`, it errors with:
ValueError: Unrecognized configuration class for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of Ba…
-
What are the arguments and the command line for running the fine-tuning code for Flan-T5?
-
I was trying to fine-tune the FastChat-T5 model using SkyPilot and got this error:
(huggingface, pid=25599) wandb: ERROR api_key not configured (no-tty). call wandb.login(key=[your_api_key])
(hugg…
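This error means the trainer is trying to log to Weights & Biases on a remote machine that has no API key and no TTY to prompt for one. Two standard ways out: configure the key (run `wandb login` or set `WANDB_API_KEY` in the SkyPilot setup step), or disable wandb entirely. A minimal sketch of the disable route, set before the trainer starts (`WANDB_MODE` is a documented wandb environment variable):

```python
import os

# Disable Weights & Biases logging entirely; wandb honors this env var,
# so the trainer no longer needs an API key on the remote node.
os.environ["WANDB_MODE"] = "disabled"
```

Equivalently, if the training script uses HuggingFace `TrainingArguments`, passing `--report_to none` keeps wandb out of the loop.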
-
### System Info
**The problem seems to be in the code below:**
exception: dict is not iterable
Working version: langchain==0.0.164
Use case: https://python.langchain.com/en/latest/modules/chains…
-
The Vicuna tokenizer has no extra '\n' characters; the T5 tokenizer inserts one after each space.
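Until the tokenizer behavior is fixed upstream, the stray newlines can be stripped in post-processing. A hypothetical helper (the `clean_t5_output` name and the regex are mine, not FastChat's), assuming the symptom is exactly a '\n' appearing right after each space:

```python
import re

def clean_t5_output(text: str) -> str:
    """Drop the spurious '\\n' the T5 tokenizer inserts after each space."""
    return re.sub(r" \n", " ", text)

print(clean_t5_output("Hello \nworld, \nhow are \nyou?"))
# Hello world, how are you?
```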
Reproduce:
```python
from transformers import (T5TokenizerFast, T5ForConditionalGeneration, AutoTo…
-
I love the T5 model.
https://github.com/lm-sys/FastChat/blob/a26db3c814889035d92c8ae80d6defbd7381ee55/fastchat/train/train_flant5.py#LL170C12-L170C12
It seems to use `### USER:` but I thought mo…
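For concreteness, a `### USER:`-style separator as the linked `train_flant5.py` line appears to use would assemble prompts roughly like this. The `build_prompt` helper is a hypothetical sketch of that convention, not FastChat's actual template, whose exact roles and separators may differ:

```python
def build_prompt(turns):
    """Join (role, text) pairs into a '### ROLE:'-separated prompt.

    Hypothetical illustration of the separator style in question;
    the real FastChat conversation template may differ.
    """
    return "\n".join(f"### {role.upper()}: {text}" for role, text in turns)

print(build_prompt([("user", "Hi"), ("assistant", "Hello!")]))
# ### USER: Hi
# ### ASSISTANT: Hello!
```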
-
Hi, can we train GPT4All-J, StableLM, and Falcon-40B-Instruct models with the current LLM Studio?
Wouldn't that be nice 🙂
### Motivation: the community 😎✊
-
As the title says: when I try to run on GPU, I get an error.
-
Is NeMo the best way to run an LLM on your own hardware for conversation?
My second experience was that https://huggingface.co/nvidia/GPT-2B-001 did not work on a 4090 (#6564).
I want to play around with …
-
Is API inference available for T5?
Is there any example available? Also, what is the format of the data?
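FastChat ships an OpenAI-compatible REST server (`fastchat.serve.openai_api_server`), so a T5 model served through it takes the standard chat-completions request shape. A sketch of the JSON payload only; the model name here is a placeholder for whatever name the worker registers, and the endpoint would be the server's `/v1/chat/completions` route:

```python
import json

# Request body in the OpenAI chat-completions format that FastChat's
# openai_api_server accepts; "fastchat-t5-3b-v1.0" is a placeholder
# model name -- use whatever your model worker registered.
payload = {
    "model": "fastchat-t5-3b-v1.0",
    "messages": [{"role": "user", "content": "Hello, what can you do?"}],
    "temperature": 0.7,
}
body = json.dumps(payload)
print(body)
```

The response comes back in the matching chat-completions shape, so existing OpenAI client code can usually be pointed at the FastChat server with only a base-URL change.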