-
ValueError: You are trying to return timestamps, but the generation config is not properly set. Make sure to initialize the generation config with the correct attributes that are needed such as `no_ti…
-
This error occurs while waiting for a reply, whether using the web demo or the command line.
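A common fix, assuming the underlying checkpoint is a Whisper-style speech-recognition model (where this `ValueError` usually appears), is to request timestamps explicitly when building the pipeline so that `transformers` populates the timestamp-related attributes on the generation config itself. A minimal sketch; the helper name and the checkpoint are placeholders, not from the original report:

```python
def transcribe_with_timestamps(audio_path):
    """Hypothetical helper: builds an ASR pipeline that returns timestamps.

    Passing return_timestamps=True lets transformers set up the
    timestamp-related generation-config attributes instead of raising
    the ValueError quoted above.
    """
    from transformers import pipeline  # lazy import keeps the sketch self-contained

    asr = pipeline(
        "automatic-speech-recognition",
        model="openai/whisper-tiny",  # placeholder checkpoint, swap in your own
        return_timestamps=True,       # the key argument for this error
    )
    return asr(audio_path)
```

If the error persists, the checkpoint's `generation_config.json` may simply predate timestamp support; upgrading `transformers` or re-exporting the generation config from a recent release often resolves the same message.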
-
### System Info
**Description**
I am experiencing an issue when using the `transformers` library version 4.36.1 with a custom model-serving endpoint that uses MLflow. The model is based on the Res…
-
**Describe the bug**
Error: Could not load the stable-diffusion model! Reason: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks lik…
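When the machine has no (or restricted) network access, one workaround is to run fully from the local cache: download the model once on a connected machine, then force offline loading. A minimal sketch, assuming `diffusers` is the loader in use and the model files are already cached; the helper name and default model id are illustrative, not from the original report:

```python
import os

# Tell the Hugging Face hub client to use only locally cached files
# (assumes the model has been downloaded at least once before).
os.environ["HF_HUB_OFFLINE"] = "1"

def load_pipeline_offline(model_id="runwayml/stable-diffusion-v1-5"):
    """Hypothetical helper: loads a Stable Diffusion pipeline offline.

    local_files_only=True makes the loader fail fast with a clear
    message instead of attempting a connection to huggingface.co.
    """
    from diffusers import StableDiffusionPipeline  # lazy import for self-containment

    return StableDiffusionPipeline.from_pretrained(
        model_id,
        local_files_only=True,  # never touch the network
    )
```

If the machine does have network access, the same error can also come from a proxy or firewall blocking `https://huggingface.co`, which is worth ruling out first.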
-
```
Traceback (most recent call last):
  File "/home/sagemaker-user/CogVLM/basic_demo/cli_demo_sat.py", line 162, in <module>
    main()
  File "/home/sagemaker-user/CogVLM/basic_demo/cli_demo_sat.py", line 37…
```
-
### System Info
- `transformers` version: 4.46.2
- Platform: Linux-5.4.0-1134-aws-x86_64-with-glibc2.31
- Python version: 3.10.2
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- …
-
This setup does not use docker-compose.
Step 1: Start the ray_head node
```
docker run -d \
--name ray_head \
--privileged \
--env MODEL_FOLDER=${MODEL_FOLDER} \
--env RAY_NUM_CPUS=8 \
-p 6379:…
-
Any help is appreciated, thank you!
**Full error:**
```
got prompt
[rgthree] Using rgthree's optimized recursive execution.
Special tokens have been added in the vocabulary, make sure the associated wo…
```
-
I ran lightrag_hf_demo.py, but there was no response after running it. Does anyone know what is going on?
My code is as follows:
```python
import os
from lightrag import LightRAG, QueryParam
from …
-
This is a "living issue". Editing is appreciated.
### Context:
- Most prominent benchmark for embedding models: https://huggingface.co/spaces/mteb/leaderboard
- We can choose to index the pdf dat…
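Whichever embedding model is chosen from the leaderboard, retrieval over the indexed PDF data ultimately reduces to scoring query embeddings against chunk embeddings, most commonly with cosine similarity. A minimal, library-free sketch of that scoring step (the vectors here are toy placeholders, not real embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings": vectors pointing the same way score 1.0,
# orthogonal vectors score 0.0.
query = [1.0, 0.0, 2.0]
chunk = [2.0, 0.0, 4.0]
print(round(cosine_similarity(query, chunk), 6))  # → 1.0
```

In practice the embeddings come from the chosen model and the scoring is done in bulk by a vector index rather than one pair at a time, but the ranking criterion is the same.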