-
The API is under heavy load right now. I'm trying to work on some Llama models.
However, it seems the API is making OpenAI requests in the background? At least I'm getting a 60-second OpenAI ti…
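If the model name has no provider prefix, litellm routes the call to OpenAI by default, which could explain OpenAI errors showing up while working with Llama models. A minimal sketch of pinning the provider explicitly (the `together_ai/` model string and environment variable below are illustrative, not taken from this report):

```python
# Minimal sketch, assuming litellm is installed and a TogetherAI key is available.
# The model string and env var below are illustrative; adjust for your provider.
import os
from litellm import completion

os.environ["TOGETHERAI_API_KEY"] = "sk-..."  # placeholder

# An unprefixed model name (e.g. "llama-2-70b-chat") is treated as an OpenAI
# model; the "together_ai/" prefix routes the request to TogetherAI instead.
response = completion(
    model="together_ai/togethercomputer/llama-2-70b-chat",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```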
-
Hi. I wanted to try out gptel after watching your excellent video presentation, but I don't have an OpenAI account, and local LLMs are slow (in my extremely newbie opinion).
So I signed up for a T…
-
### Description
TogetherAI models:
- [x] Yi-34B-Chat
- [x] Qwen-14B-Chat
- [x] WizardLM-13b-v1.2
- [x] Mistral-7b-Instruct-v0.2
- [x] deepseek-coder-33b-instruct
- [x] Vicuna v1.5 (13B)
- [x…
-
Hello everyone. I recognize significant potential in this project; excellent job!
Regrettably, I have been unable to configure TogetherAI to work with the LiteLLM proxy, even in the simplest setup.…
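For reference, a minimal way to exercise TogetherAI behind the proxy, assuming it was started with something like `litellm --model together_ai/togethercomputer/llama-2-70b-chat` and is listening on its default local port (4000 is an assumption; older releases used 8000):

```python
# Minimal sketch: point the stock OpenAI client at the local LiteLLM proxy.
# The base_url/port and model name are assumptions; match them to your setup.
from openai import OpenAI

client = OpenAI(
    api_key="anything",              # the proxy holds the real provider key
    base_url="http://0.0.0.0:4000",  # LiteLLM proxy address (assumed default port)
)

resp = client.chat.completions.create(
    model="together_ai/togethercomputer/llama-2-70b-chat",
    messages=[{"role": "user", "content": "Say hi"}],
)
print(resp.choices[0].message.content)
```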
-
Every call I make to meta/llama-2-70b-chat fails with the following error:
['OpenAI API error: A timeout error occurred. The function call took longer than 60 second(s).. If you have a team budget,…
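One hedged workaround, assuming the installed litellm version accepts a per-call `timeout` argument (older releases used `request_timeout`), is to raise the limit above the 60-second default for slow 70B completions:

```python
# Sketch only: the model string is illustrative and `timeout` is assumed to be
# supported by the installed litellm version (older ones used `request_timeout`).
from litellm import completion

response = completion(
    model="replicate/meta/llama-2-70b-chat",  # illustrative; use your provider's prefix
    messages=[{"role": "user", "content": "Hello"}],
    timeout=300,  # seconds; well above the 60 s default
)
```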
-
### The Feature
Benchmark the performance of litellm against the providers' native Python SDKs. Test high-traffic scenarios (e.g. 100k requests/min) and monkey-patch the actual request b…
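A rough sketch of what such a benchmark could look like, assuming `completion()` supports the `mock_response` testing parameter (so the wire call is short-circuited and only litellm's own overhead is measured); the request and thread counts are arbitrary:

```python
# Rough benchmark sketch. `mock_response` is assumed to skip the real network
# request, isolating litellm's per-call overhead; numbers below are arbitrary.
import time
from concurrent.futures import ThreadPoolExecutor
from litellm import completion

N_REQUESTS = 1_000
MESSAGES = [{"role": "user", "content": "ping"}]

def one_call(_):
    return completion(model="gpt-3.5-turbo", messages=MESSAGES, mock_response="pong")

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=32) as pool:
    list(pool.map(one_call, range(N_REQUESTS)))
elapsed = time.perf_counter() - start

print(f"{N_REQUESTS} calls in {elapsed:.2f}s ({N_REQUESTS / elapsed:.0f} req/s)")
```

The same harness could then be pointed at each provider's native SDK for a side-by-side comparison.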
-
Add support for Together models; for instance, the new Nous Mixtral is released there first.
-
This is a ticket to track a wishlist of items you wish LiteLLM had.
# **COMMENT BELOW 👇**
### With your request 🔥 - if we have any questions, we'll follow up in comments / via DMs
Respond …
-
### Feature description
litellm provides an easy way to call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, etc.]. So if we have a generalized format, users …
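For illustration, a minimal sketch of that generalized format (the model strings and deployment name are examples, and the relevant provider credentials are assumed to be set as environment variables):

```python
# Minimal sketch of litellm's unified call shape; only the model string changes
# per provider. Model names and deployments below are illustrative.
from litellm import completion

messages = [{"role": "user", "content": "Explain the OpenAI chat format in one line."}]

openai_resp   = completion(model="gpt-3.5-turbo", messages=messages)
azure_resp    = completion(model="azure/my-gpt35-deployment", messages=messages)  # assumed deployment name
together_resp = completion(model="together_ai/mistralai/Mixtral-8x7B-Instruct-v0.1", messages=messages)
bedrock_resp  = completion(model="bedrock/anthropic.claude-v2", messages=messages)
```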
-
- [x] I have read and agree to the [contributing guidelines](https://github.com/griptape-ai/griptape#contributing).
Hello, I'm trying to connect a locally hosted LLM to a prompt engine; we are …