-
When trying to forward an encrypted image using echoClient, we get the following error:
Message com.whatsapp.proto.Message is missing required fields: image_message.media_key
Full log:
> DEBUG:yowsup.l…
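The error means proto2 serialization rejects the outgoing message because a `required` field (`image_message.media_key`) is unset: the media key from the original message has to be carried over when forwarding. As a conceptual sketch only (plain Python dicts standing in for the protobuf objects; `validate_forward` is a hypothetical helper, not part of yowsup's API), a forwarding path could check the field before re-sending:

```python
def validate_forward(message: dict) -> None:
    """Raise if the nested required field image_message.media_key is missing.

    `message` is a plain-dict stand-in for the protobuf message; the real
    yowsup message objects differ, so treat this as an illustration only.
    """
    image = message.get("image_message")
    if image is None or not image.get("media_key"):
        raise ValueError(
            "Message is missing required fields: image_message.media_key"
        )


# A forwarded image whose media_key was not copied from the source message:
msg = {"image_message": {"url": "https://example.invalid/img.enc"}}
try:
    validate_forward(msg)
except ValueError as err:
    print(err)  # same shape of error the serializer reports
```

The fix on the yowsup side would be to copy `media_key` (and the other required media fields) from the received message into the forwarded one before serialization.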
-
# Environment
Runtime environment:
- Target: x86_64-unknown-linux-gnu
- Cargo version: 1.75.0
- Commit sha: c38a7d7ddd9c612e368adec1ef94583be602fc7e
- Docker label: sha-6c4496a
Kubernetes Clus…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
For visibility cc'ing: @sayakpaul
Tech stack
- Gemma 2B/7B models
- Hugging Face (HF) transformers for modeling
- HF peft for parameter efficient fine-tuning
- HF text-generation-i…
-
While browsing through the code, I discovered that the generated code could be much more efficient if the stack orientation were reversed.
As the 6502 is able to access memory easily with …
-
## Describe the bug
Prompt Tuning model generates low-quality output
## Platform
Please provide details about the environment you are using, including the following:
- Interpreter version:…
-
### System Info
```
2024-04-22T02:18:37.204227Z INFO text_generation_launcher: Runtime environment:
Target: x86_64-unknown-linux-gnu
Cargo version: 1.75.0
Commit sha: 2d0a7173d4891e7cd5f9b77f8…
-
This library is great. I've been testing phi-3-mini-128k, and this is by far the fastest runtime for it. For a non-onnx model, I'd use [TGI](https://github.com/huggingface/text-generation-inference) but p…
-
This is a ticket to track a wishlist of items you wish LiteLLM had.
# **COMMENT BELOW 👇**
### With your request 🔥 - if we have any questions, we'll follow up in comments / via DMs
Respond …
-
### Feature request
Can we have TGI running on a cluster of multiple nodes?
### Motivation
Sometimes it is not possible to have all GPUs on a single machine due to power constraints, etc.; it is im…