-
### System Info
Python 3.10
LangChain 0.1.4
Mistral TGI hosted on EC2
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifi…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a…
-
In the documentation, I use `docker run ghcr.io/huggingface/text-generation-inference:latest` to run the latest version of TGI. But in a production environment, I need to pin the version number. I can't fi…
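A minimal sketch of pinning the image to a fixed version instead of `latest`. The tag `1.4.0` and the model id below are illustrative assumptions; the actual published tags are listed on the ghcr.io package page and in the repository's GitHub releases:

```shell
# Pin to an explicit release tag instead of :latest
# (the tag shown is illustrative; confirm available tags on the
#  ghcr.io/huggingface/text-generation-inference package page).
docker run --gpus all -p 8080:80 \
  ghcr.io/huggingface/text-generation-inference:1.4.0 \
  --model-id mistralai/Mistral-7B-Instruct-v0.2

# For fully reproducible deployments, pin by immutable digest instead,
# substituting the digest reported by `docker pull` or the registry:
# docker run ghcr.io/huggingface/text-generation-inference@sha256:<digest> ...
```

Pinning by digest is stricter than pinning by tag, since a tag can in principle be re-pushed while a digest always identifies one exact image.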
-
## Bug report
### Describe the bug
When I toggle MPD playback twice in a short time, a second of the song is skipped. This can be most easily reproduced by running the following commands when mpd…
-
## Bug report
### Describe the bug
I'm using a HifiBerry DAC2 HD on a Raspberry Pi 3B+ running Raspberry Pi OS.
```
uname -a
Linux music 5.15.32-v7+ #1538 SMP Thu Mar 31 19:38:48 BST 2022 a…
-
### System Info
Hi,
I tried building the image using:
```
docker build -t tgi-gaudi .
```
Getting this error:
```
Step 10/36 : RUN cargo chef prepare --recipe-path recipe.json
---> Runn…
-
Looks like TGI now has support for the OpenAI API.
To test this, we need to change the `docker-compose-gpu.yml` to something like the one below.
```yml
services:
llm-api:
image: ghcr.io/…
-
### System Info
SageMaker DLC: `763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-tgi-inference:2.0.1-tgi1.1.0-gpu-py39-cu118-ubuntu20.04`
### Information
- [X] Docker
- [ ] The CLI …
-
I use benchmark_serving as the client, api_server for vLLM, and text_generation_server for TGI; the client command is listed below:
" python benchmark_serving.py --backend tgi/vllm --tokenizer /data/llama --data…
-
```
(text-generation-inference) root@C.10294313:~/tgi_test/text-generation-inference$ text-generation-launcher
2024-04-29T11:11:11.331114Z INFO text_generation_launcher: Args { model_id: "bigscie…