-
It'd be great to get support for Google Gemini too
-
cloudflared Tunnels are used to publish the ollama service port, and the client uses the enchanted-llm app for the dialogue, but no messages are returned. Without cloudflared Tunnels, everything …
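To help isolate whether the tunnel or the app is dropping the reply, a minimal sketch that sends the same prompt to Ollama directly and through the tunnel; the tunnel hostname and model name are placeholder assumptions:
```py
import requests

# 11434 is Ollama's default port; the tunnel hostname below is a placeholder.
DIRECT_URL = "http://localhost:11434"
TUNNEL_URL = "https://ollama.example.com"  # hypothetical cloudflared hostname

payload = {"model": "llama3", "prompt": "Say hello", "stream": False}

for base in (DIRECT_URL, TUNNEL_URL):
    try:
        # Ollama's /api/generate returns a single JSON body when stream=False.
        r = requests.post(f"{base}/api/generate", json=payload, timeout=60)
        print(base, r.status_code, r.json().get("response", "")[:80])
    except Exception as exc:
        print(base, "failed:", exc)
```
If the direct call answers but the tunneled one does not, the problem is in the tunnel rather than in ollama or the client app.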
-
### What happened?
Tried to create a custom logging callback according to the docs [here](), and wrote a very simple one just to try it out, which implements the following methods:
```py
def log_pre_api…
```
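For reference (not the reporter's actual code), a minimal sketch of such a handler based on LiteLLM's `CustomLogger` base class; the handler name and print statements are illustrative only:
```py
import litellm
from litellm.integrations.custom_logger import CustomLogger

class MyCustomHandler(CustomLogger):
    def log_pre_api_call(self, model, messages, kwargs):
        print("Pre-API call: model =", model)

    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        print("Success event:", getattr(response_obj, "model", None))

    def log_failure_event(self, kwargs, response_obj, start_time, end_time):
        print("Failure event:", kwargs.get("exception"))

# Register the handler so subsequent litellm.completion() calls invoke it.
litellm.callbacks = [MyCustomHandler()]
```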
-
Aider version: 0.60.1
Python version: 3.10.6
Platform: Windows-10-10.0.19045-SP0
Python implementation: CPython
Virtual environment: No
OS: Windows 10 (64bit)
Git version: git version 2.39.1.win…
-
[cachetools](https://cachetools.readthedocs.io/en/latest/#)
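For reference, a minimal sketch of what cachetools provides (an in-memory TTL cache via a decorator); the cached function here is purely illustrative:
```py
from cachetools import TTLCache, cached

# In-memory cache holding up to 256 entries, each expiring after 60 seconds.
@cached(cache=TTLCache(maxsize=256, ttl=60))
def expensive_lookup(key: str) -> str:
    print("computing", key)
    return key.upper()

print(expensive_lookup("a"))  # computed
print(expensive_lookup("a"))  # served from the cache within the TTL window
```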
-
The LiteLLM Azure OpenAI embedding configuration is missing the `output_vector_size` property. This makes embeddings fail, since the property is checked in RAGLite.
I am not sure if we should rely on it in any…
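One way to check whether the property is populated for a given model is to inspect LiteLLM's model info; a rough sketch, with hypothetical model names:
```py
import litellm

# Hypothetical model names; swap in the Azure embedding deployment in question.
for model in ("text-embedding-ada-002", "azure/my-embedding-deployment"):
    try:
        info = litellm.get_model_info(model=model)
        # output_vector_size is only populated for some entries in the cost map.
        print(model, "->", info.get("output_vector_size"))
    except Exception as exc:
        print(model, "-> lookup failed:", exc)
```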
-
### What happened?
I tried to add a new model via the backend API endpoint `/model/new` and got this error:
```
{'error': {'message': "Authentication Error, [Errno 30] Read-only file system: '/app/config/litellm-co…
```
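For context, a request along these lines should reproduce the call; the proxy URL, master key, and model parameters are placeholders:
```py
import requests

PROXY_BASE = "http://localhost:4000"   # placeholder proxy URL
MASTER_KEY = "sk-1234"                 # placeholder master key

payload = {
    "model_name": "my-gpt-4o",                  # alias the proxy should expose
    "litellm_params": {
        "model": "azure/my-gpt-4o-deployment",  # hypothetical upstream model
        "api_key": "os.environ/AZURE_API_KEY",
    },
}

resp = requests.post(
    f"{PROXY_BASE}/model/new",
    json=payload,
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    timeout=30,
)
print(resp.status_code, resp.text)
```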
-
### The Feature
Consider adding a "Load all Models" boolean to the new model dialog, with the help text "If this field is toggled, we will call the API to retrieve all models from the endpoint…
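A rough sketch of what that retrieval could look like against an OpenAI-compatible endpoint (base URL and key are placeholders):
```py
import requests

API_BASE = "https://api.openai.com/v1"  # placeholder OpenAI-compatible endpoint
API_KEY = "sk-..."                      # placeholder key

# OpenAI-compatible servers expose GET /models, listing the available model IDs.
resp = requests.get(
    f"{API_BASE}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
model_ids = [m["id"] for m in resp.json().get("data", [])]
print(model_ids)
```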
-
**What problem or use case are you trying to solve?**
Currently, litellm cannot track cost when invoking a model from a LiteLLM proxy.
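To illustrate the gap, a sketch of the scenario, assuming the `litellm_proxy/` provider prefix and `litellm.completion_cost()`; the proxy URL and key are placeholders:
```py
import litellm

# Call a model through a LiteLLM proxy (URL and key are placeholders).
response = litellm.completion(
    model="litellm_proxy/gpt-4o",
    api_base="http://localhost:4000",
    api_key="sk-1234",
    messages=[{"role": "user", "content": "hi"}],
)

# completion_cost() is keyed on the model name, so a proxied response may not
# map back to a priced model -- the gap described above.
try:
    print("cost:", litellm.completion_cost(completion_response=response))
except Exception as exc:
    print("could not compute cost:", exc)
```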
**Describe the UX of the solution you'd like**
**Do you …
-
### What happened?
An invalid `.get()` call [here](https://github.com/BerriAI/litellm/blob/bac2ac2a49102c91dea475c42891f463b60744bd/litellm/integrations/opentelemetry.py#L508) causes:
```
OpenTel…
```