-
### Describe the bug
Batched diffusers pipeline inference is very slow.
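If the slowness comes from submitting prompts one at a time, passing them to the pipeline in fixed-size batches is the usual speed-up. A minimal sketch, where only the chunking logic is concrete and `pipe` is a hypothetical diffusers pipeline (left commented out):

```python
# Hypothetical workaround sketch: call the pipeline once per batch of prompts
# instead of once per prompt. Only the chunking helper is concrete; `pipe`
# stands in for an actual diffusers pipeline and is not called here.

def chunk(prompts, batch_size):
    """Yield consecutive slices of at most batch_size prompts."""
    for i in range(0, len(prompts), batch_size):
        yield prompts[i:i + batch_size]

prompts = ["a cat", "a dog", "a house", "a tree", "a car"]
for batch in chunk(prompts, batch_size=2):
    # images = pipe(batch).images  # one forward pass per batch (hypothetical)
    print(batch)
```

Larger batches amortize per-call overhead but raise peak GPU memory, so the batch size is a trade-off against available VRAM.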
### Reproduction
```python
class prompt_data(Dataset):
    def __init__(self, prompt_file):
        self.txt_pr…
```
-
When I use the pipeline, I get an error: `KeyError: "Unknown task depth-estimation, available tasks are ['audio-classification', 'automatic-speech-recognition', 'conversational', 'feature-extrac…`
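This KeyError typically means the installed transformers version predates the `depth-estimation` task. Roughly, the task lookup behaves like the toy sketch below; the registry here is illustrative, not the library's actual code:

```python
# Illustrative sketch (not transformers' real implementation) of how
# pipeline() resolves a task name: an unknown task raises a KeyError that
# lists the tasks the installed version knows about. Toy registry below.

SUPPORTED_TASKS = {
    "audio-classification": object,
    "automatic-speech-recognition": object,
    "feature-extraction": object,
}

def check_task(task):
    """Return the registry entry, or raise KeyError naming the known tasks."""
    if task not in SUPPORTED_TASKS:
        raise KeyError(
            f"Unknown task {task}, available tasks are {sorted(SUPPORTED_TASKS)}"
        )
    return SUPPORTED_TASKS[task]

try:
    check_task("depth-estimation")
except KeyError as err:
    print(err)
```

If the installed version is recent enough, `depth-estimation` appears in the available-task list, so upgrading (`pip install -U transformers`) is the usual fix.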
-
### System Info
OS: Debian 6.1.85-1
NVIDIA-SMI 550.54.15
Driver Version: 550.54.15
CUDA Version: 12.4
Card: NVIDIA RTX A6000
### Information
- [X] Docker
- [ ] The CLI direc…
-
python interleaved_generation.py -i 'Please introduce the city of Gyumri with pictures.'
VQModel loaded from /workspace/Anole-7b-v0.1/tokenizer/vqgan.ckpt
Model path: /workspace/Anole-7b-v0.1/mo…
-
Thanks for your excellent work! I only see label conditioning; how do I use text conditioning?
-
When running `optimum-cli export openvino --trust-remote-code --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 TinyLlama-1.1B-Chat-v1.0` locally, it reports:
```bash
Traceback (most recent call last):
…
```
-
### System Info
TGI version 2.1.1
```
tgi-llava-1 | 2024-07-05T20:49:53.276458Z INFO text_generation_launcher: Runtime environment:
tgi-llava-1 | Target: x86_64-unknown-linux-gnu
tgi-llava-1…
```
-
I've been testing running various finetuned versions of supported models on GKE. However, it gets stuck on `Using the Hugging Face API to retrieve tokenizer config`.
These are the full logs:
```
…
```
-
01:54:47-255686 INFO Starting Text generation web UI
01:54:47-260684 WARNING trust_remote_code is enabled. This is dangerous.
01:54:47-268684 INFO Loading the extension "openai"
01:54:47-4…
-
How can these models help generate images with text inside them? Or is there any advice on using an additional model? Has anyone had experience with this?