-
Add an integration for the [TGI](https://github.com/huggingface/text-generation-inference) LLM provider.
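A rough, hedged sketch of what such a provider could look like: the `TGIProvider` class name and its parameters are hypothetical, and only the `huggingface_hub.InferenceClient` calls reflect the real client API for a running TGI server.
```python
# Hypothetical provider wrapper around a locally deployed TGI server.
# Only the InferenceClient usage reflects the actual huggingface_hub API.
from huggingface_hub import InferenceClient


class TGIProvider:  # hypothetical name, not an existing class
    def __init__(self, base_url: str = "http://localhost:8080", timeout: int = 120):
        # Passing a URL makes InferenceClient talk to the deployed TGI endpoint.
        self.client = InferenceClient(base_url, timeout=timeout)

    def complete(self, prompt: str, max_new_tokens: int = 256, temperature: float = 0.7) -> str:
        # text_generation() maps onto TGI's /generate route.
        return self.client.text_generation(
            prompt,
            max_new_tokens=max_new_tokens,
            temperature=temperature,
        )


if __name__ == "__main__":
    provider = TGIProvider()
    print(provider.complete("Explain what TGI is in one sentence."))
```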
-
I get this error:
```
chat_template, stop_word, yes_map_eos_token, ollama_modelfile = CHAT_TEMPLATES[chat_template]
~~~~~~~~~…
```
-
### System Info
OS: Windows 11
Rust version: cargo 1.75.0 (1d8b05cdd 2023-11-20)
Hardware: CPU AMD 6800HS
(`text-generation-launcher --env` didn't work)
### Information
- [ ] Docker
- [X] The CL…
-
### Your current environment
Referring to issue #5181, "The Offline Inference Embedding Example Fails": the method LLM.encode() only works for embedding models. Is there any way to get the ou…
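For reference, the path that does work today is roughly the offline-embedding example; this is only a sketch, the model name below is an illustrative embedding checkpoint, and the exact output fields may differ across vLLM versions.
```python
# Sketch of the embedding path that LLM.encode() supports;
# the checkpoint is just an example of an embedding model.
from vllm import LLM

llm = LLM(model="intfloat/e5-mistral-7b-instruct", enforce_eager=True)

# encode() returns one EmbeddingRequestOutput per prompt; a decoder-only
# chat model fails here because no embedding/pooling head is registered.
outputs = llm.encode(["What is the capital of France?"])
for output in outputs:
    print(len(output.outputs.embedding))  # embedding dimension
```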
-
Currently, TGI does not support FP8. We have raised an [issue](https://github.com/huggingface/text-generation-inference/issues/2654) about it.
-
After following the instructions in the README and successfully running main_chat.py, I can't get any SMPL parameters or images.
The results are listed below:
> (chatpose) zbw@node01:~/PoseGPT$ python m…
-
```
python interleaved_generation.py -i 'Please introduce the city of Gyumri with pictures.'
VQModel loaded from /workspace/Anole-7b-v0.1/tokenizer/vqgan.ckpt
Model path: /workspace/Anole-7b-v0.1/mo…
```
-
### 🚀 The feature, motivation and pitch
Contrastive Decoding (Li et al., 2022) is a decoding strategy that contrasts the log probabilities of two or more models at each token to shift the token dis…
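Since the pitch is cut off above, here is a small, hedged sketch of the core scoring rule from Li et al. (2022): restrict to tokens the expert model already finds plausible, then rank them by the gap between expert and amateur log-probabilities. Variable names and the vocabulary size are illustrative only.
```python
# Per-step contrastive-decoding score from Li et al. (2022):
# keep tokens whose expert probability is within an alpha-fraction of the
# expert's best token, then score them by log p_expert - log p_amateur.
import torch


def contrastive_scores(expert_logits: torch.Tensor,
                       amateur_logits: torch.Tensor,
                       alpha: float = 0.1) -> torch.Tensor:
    """Both inputs are [vocab_size] logits for the current decoding step."""
    expert_logprobs = torch.log_softmax(expert_logits, dim=-1)
    amateur_logprobs = torch.log_softmax(amateur_logits, dim=-1)

    # Plausibility constraint: V_head = {x : p_exp(x) >= alpha * max_x p_exp(x)}
    cutoff = expert_logprobs.max() + torch.log(torch.tensor(alpha))
    plausible = expert_logprobs >= cutoff

    scores = expert_logprobs - amateur_logprobs
    return torch.where(plausible, scores, torch.full_like(scores, float("-inf")))


# Greedy CD step: pick the highest-scoring plausible token.
expert_logits = torch.randn(32000)
amateur_logits = torch.randn(32000)
next_token = contrastive_scores(expert_logits, amateur_logits).argmax()
```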
-
Hi,
Do you plan to add support for https://github.com/jy0205/Pyramid-Flow?
Thanks
-
### System Info
pandasai==2.2.14
Python 3.10.12
### 🐛 Describe the bug
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig
model_id = "hugging-quants/Meta…