-
It would be nice to be able to use Ollama with local LLMs as an alternative to GitHub Copilot.
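A minimal sketch of what this could look like, assuming the OpenAI provider lets you override the base URL (Ollama exposes an OpenAI-compatible API at `/v1`; the key is ignored but the SDK requires one):
~~~python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API on port 11434; the api_key value
# is ignored by Ollama but the SDK insists on a non-empty string.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="codellama",  # any model pulled locally, e.g. `ollama pull codellama`
    messages=[{"role": "user", "content": "Write a hello-world in Go."}],
)
print(resp.choices[0].message.content)
~~~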
-
I tried to use the OpenAI settings to connect my open-source model, served via vLLM, to Label Studio, without any success.
I only get an "unauthorized" notification. Is there any information on how to bypass t…
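In case it helps narrow this down, here is a sketch of how to verify the key outside Label Studio first (the `--api-key` flag and default port are from vLLM's OpenAI-compatible server; adjust to your setup):
~~~python
from openai import OpenAI

# Assumes the server was started with something like:
#   vllm serve <model> --api-key my-secret
# A 401 from this call means the key/server pair is wrong before Label
# Studio is even involved; otherwise, use the same key in Label Studio's
# OpenAI connection settings.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="my-secret")
print(client.models.list())
~~~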
-
The docs mention it but never give an example of how to run it using local inference. The only mention is of the OpenAI-compatible API, along with a note that it doesn't support all of the functions.
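For illustration, an example of the kind the docs could include: the OpenAI-compatible path works with any local server, and only the base URL and key change (the URL below assumes a local llama.cpp `llama-server`; for Ollama it would be `http://localhost:11434/v1`):
~~~python
import os
from openai import OpenAI

# The OpenAI SDK picks these env vars up automatically; local servers
# usually accept any placeholder key.
os.environ.setdefault("OPENAI_BASE_URL", "http://localhost:8080/v1")
os.environ.setdefault("OPENAI_API_KEY", "not-needed-locally")

client = OpenAI()
resp = client.chat.completions.create(
    model="local-model",  # whatever name the local server registers
    messages=[{"role": "user", "content": "Hello from local inference"}],
)
print(resp.choices[0].message.content)
~~~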
-
### Service
OpenAI
### Describe the bug
Google blogged about how they now expose an OpenAI-compatible endpoint for Gemini:
https://developers.googleblog.com/en/gemini-is-now-accessible-from-the-ope…
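Per that announcement, usage boils down to pointing the OpenAI client at Google's endpoint (URL and model name as documented in the blog post):
~~~python
from openai import OpenAI

# Endpoint and model name taken from Google's announcement; only the
# base URL and API key differ from a normal OpenAI call.
client = OpenAI(
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    api_key="<GEMINI_API_KEY>",
)
resp = client.chat.completions.create(
    model="gemini-1.5-flash",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
~~~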
-
### What happened?
Description:
I attempted to integrate [PortKey](https://portkey.ai/) with LiteLLM in two ways. Here’s a summary of the steps and outcomes:
**Attempt 1:** Configuring PortKey …
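A sketch of one way such a wiring can look, treating PortKey's gateway as an OpenAI-compatible provider from LiteLLM's side (the gateway URL and the `x-portkey-*` header names are assumptions drawn from PortKey's docs and may need adjusting):
~~~python
import litellm

# Sketch: route a LiteLLM call through PortKey's OpenAI-compatible gateway.
# api_base and the header names are assumptions, not verified here; auth
# happens via the PortKey headers rather than api_key.
response = litellm.completion(
    model="openai/gpt-4o-mini",            # gateway treated as OpenAI-compatible
    api_base="https://api.portkey.ai/v1",
    api_key="unused",
    extra_headers={
        "x-portkey-api-key": "<PORTKEY_API_KEY>",
        "x-portkey-virtual-key": "<PORTKEY_VIRTUAL_KEY>",
    },
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
~~~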
-
### Which API Provider are you using?
OpenAI Compatible
### Which Model are you using?
yi-lightning
### What happened?
Hello 👋 Thank you so much for your work! I can't use the Yi model correctly …
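For reference, the raw OpenAI-compatible call the provider presumably makes (the base URL is the one Yi's platform docs list and is an assumption here; verify against your account's docs):
~~~python
from openai import OpenAI

# Assumed Yi endpoint; confirm against the platform documentation.
client = OpenAI(
    base_url="https://api.lingyiwanwu.com/v1",
    api_key="<YI_API_KEY>",
)
resp = client.chat.completions.create(
    model="yi-lightning",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
~~~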
-
Ollama is OpenAI-compatible, so it should work out of the box using the OpenAI provider (by overriding the base URL).
We should explicitly test and document it to ensure there are no unexpected diff…
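At the HTTP level, "overriding the base URL" amounts to sending the same request to Ollama's port, e.g. (a quick check, assuming a locally pulled model):
~~~python
import requests

# The same /v1/chat/completions request the OpenAI provider would send,
# aimed at Ollama. Ollama ignores the bearer token but tolerates it.
resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3.2",  # any locally pulled model
        "messages": [{"role": "user", "content": "Say hi"}],
    },
    headers={"Authorization": "Bearer ollama"},
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
~~~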
-
### Before submitting your bug report
- [x] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [x] I'm not able to find an [open issue]…
-
Can I use other LLMs? I connect to other remote models via their APIs, then run a local web server that bridges any OpenAI-compatible HTTP request to the respective model (sketched below).
I can see Lumos…
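A minimal sketch of such a bridge (the upstream URL, key, and the 1:1 pass-through are assumptions; a real bridge would translate request/response formats for providers that are not already OpenAI-compatible):
~~~python
from flask import Flask, jsonify, request
import requests

# Hypothetical upstream provider; swap in the real API you are bridging.
REMOTE_URL = "https://api.example.com/v1/chat/completions"
REMOTE_KEY = "<REMOTE_API_KEY>"

app = Flask(__name__)

@app.post("/v1/chat/completions")
def chat_completions():
    # Forward the OpenAI-style request body as-is and relay the reply.
    upstream = requests.post(
        REMOTE_URL,
        json=request.get_json(),
        headers={"Authorization": f"Bearer {REMOTE_KEY}"},
        timeout=120,
    )
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=8000)  # point any OpenAI client at http://localhost:8000/v1
~~~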
-
Hi, I'm encountering a length limit when using a third-party model to extract data from local HTML. Can chunking support be added to XMLScraperGraph?
## code:
~~~
import logging
import os
from langchai…
~~~
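In the meantime, a workaround sketch (this is not ScrapeGraphAI API; `chunk_size` is in characters and is a guess to tune against the model's context window):
~~~python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Pre-chunk the HTML so each piece fits the third-party model's limit,
# then run XMLScraperGraph on each chunk and merge the partial answers.
splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=200)
with open("page.html", encoding="utf-8") as f:
    chunks = splitter.split_text(f.read())
print(f"{len(chunks)} chunks of <= 4000 chars")  # feed each chunk to the graph
~~~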