-
Is it possible to specify a separate LLM provider for each of the models used?
This is especially helpful with locally hosted models, where the "smart" LLM can be hosted on a different server (more …
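To illustrate the ask, a minimal sketch assuming each model could be pointed at its own OpenAI-compatible endpoint; the host names, ports, and model names here are hypothetical, not this project's actual configuration:
```python
from openai import OpenAI

# Hypothetical setup: each model gets its own OpenAI-compatible endpoint.
smart_client = OpenAI(base_url="http://gpu-box:8000/v1", api_key="not-needed")
fast_client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# The "smart" model handles the heavy reasoning on the bigger machine
reply = smart_client.chat.completions.create(
    model="llama-3-70b-instruct",
    messages=[{"role": "user", "content": "Plan the next step."}],
)
print(reply.choices[0].message.content)
```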
-
Hi, somehow I get no traces when using LangChain's Ollama integration, but everything seems fine with the raw Ollama package:
```
from ollama import Client
import openlit

# Send traces to a local OTLP endpoint
openlit.init(otlp_endpoint="http://127.0.0.…
```
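For comparison, a hedged sketch of the LangChain path that yields no traces for me; the endpoint and model name below are placeholders:
```python
from langchain_ollama import ChatOllama

import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318")  # placeholder endpoint

llm = ChatOllama(model="llama3")  # placeholder model name
llm.invoke("Hello")  # this call produces no traces
```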
-
### Describe the problem
I would like to integrate self-hosted LLMs that are proxied by [litellm](https://www.litellm.ai/), which provides an API in the OpenAI format.
The API endpoint URL is not con…
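Since the litellm proxy speaks the OpenAI protocol, a configurable base URL would be enough. A minimal sketch with the plain OpenAI client, assuming the proxy's default port 4000 and a model alias configured on the proxy:
```python
from openai import OpenAI

# Point the standard OpenAI client at the litellm proxy (default port 4000)
client = OpenAI(base_url="http://localhost:4000", api_key="sk-anything")

resp = client.chat.completions.create(
    model="my-self-hosted-llm",  # alias defined in the litellm proxy config
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```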
-
As [mentioned in Discord](https://discord.com/channels/1239284677165056021/1239289240756551781/1240604191047549029), Pipecat bots don't automatically maintain an LLM context object for you; instead, y…
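A minimal sketch of that pattern, assuming the OpenAILLMContext and context-aggregator names from recent Pipecat examples (exact API details may differ by version):
```python
from pipecat.pipeline.pipeline import Pipeline
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.openai import OpenAILLMService

llm = OpenAILLMService(api_key="sk-...", model="gpt-4o")

# You own the context object; the aggregators keep it up to date as
# user and assistant frames flow through the pipeline.
context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful bot."}]
)
aggregators = llm.create_context_aggregator(context)

pipeline = Pipeline([
    aggregators.user(),       # appends user turns to the context
    llm,                      # generates a reply from the full context
    aggregators.assistant(),  # appends the LLM's reply to the context
])
```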
-
Hey! Stoked for the OpenAI addition, but I'd also love to see local LLMs supported through LM Studio and Ollama.
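Both already expose OpenAI-compatible servers, so here's a sketch of what the wiring could look like with the stock OpenAI client (default ports shown; the model name is whatever you have loaded locally):
```python
from openai import OpenAI

# LM Studio's local server defaults to port 1234
lm_studio = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Ollama exposes an OpenAI-compatible API under /v1 (default port 11434)
ollama = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = ollama.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(resp.choices[0].message.content)
```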
-
While using openai.WithResponseFormat with the response format set to JSON, I could see that req.ResponseFormat was nil because of improper handling in the code.
Upon inspecting the code, I noticed that ther…
-
Enable multi-turn prompts for the supported LLMs like Llama 3 and Mistral, similar to https://github.com/Lightning-AI/litgpt/pull/1487.
We should be able to do the following with other supported models:
…
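Until then, the closest workaround is flattening the turns into one prompt by hand. A rough sketch using litgpt's Python API (the model name and chat formatting are illustrative):
```python
from litgpt import LLM

llm = LLM.load("meta-llama/Meta-Llama-3-8B-Instruct")

# Flatten the conversation into a single prompt manually; a real
# multi-turn API would track this history for us.
history = [
    ("user", "Name a prime number."),
    ("assistant", "7 is a prime number."),
    ("user", "And the next one after that?"),
]
prompt = "\n".join(f"{role}: {text}" for role, text in history)
print(llm.generate(prompt, max_new_tokens=50))
```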
-
ChatGPT is broadly sycophantic and often hedges its answers far more than necessary. Given our use case isn't evil, it might be worthwhile to reach out to other organisations (Anthropic?) asking fo…
-
Hi Maarten,
What do you think about enabling BERTopic to create representations using Watsonx-hosted LLMs as well, such as Llama-3-70b, Mixtral-8x7b, Granite, and many others to come?
Let me know your t…
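One possible route, sketched under the assumption that BERTopic's existing LangChain representation can wrap langchain-ibm's WatsonxLLM (credentials and project_id are placeholders):
```python
from bertopic import BERTopic
from bertopic.representation import LangChain
from langchain.chains.question_answering import load_qa_chain
from langchain_ibm import WatsonxLLM

# Placeholder credentials; the API key is read from WATSONX_APIKEY by default
llm = WatsonxLLM(
    model_id="meta-llama/llama-3-70b-instruct",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="YOUR_PROJECT_ID",
)

# BERTopic's LangChain representation expects a QA-style chain
chain = load_qa_chain(llm, chain_type="stuff")
representation_model = LangChain(chain)

topic_model = BERTopic(representation_model=representation_model)
```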