-
Add support for using a locally hosted LLM.
- The local LLM must have an API wrapper, such as GPT4All
- This will not allow an arbitrary LLM running inferences to just interact with…
-
I found your repo while digging through GitHub
https://github.com/microsoft/guidance/issues/328
I'll have a play with this repo soon, but thought I'd share that the above local LLM with guidance was great re…
-
After several re-installs and trying out different models, no matter what I do, my system can't use local LLMs. It crashes and gives the same message every time. Super aggravating si…
-
### Details
Is it possible to use a custom LLM like https://github.com/EvanZhouDev/bard-ai with LlamaIndex?
-
Auth error when trying to make the initial call to OpenAI. First time trying to use OpenAI for the LLM backend. It appears the API key is not being sent.
I used a breakpoint before line 191 in openai_tools.py (right …
-
**Describe the bug**
Basically when
**To Reproduce**
Steps to reproduce the behavior:
1. Open LMStudio
2. Download model
3. Visit fourth menu item (local interface server)
4. Select model on…
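For reference, the local inference server mentioned in the steps speaks an OpenAI-compatible API. A minimal sketch of calling it, assuming LM Studio's default base URL of `http://localhost:1234/v1` (adjust to your setup; the `model` value is illustrative, since the server uses whichever model is loaded):

```python
import json
import urllib.request

# Assumption: LM Studio's local server at its default address.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt, model="local-model", temperature=0.7):
    """Build an OpenAI-style /chat/completions payload."""
    return {
        "model": model,  # typically ignored; the loaded model answers
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def chat(prompt):
    """POST the payload to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (requires the server from step 3 to be running):
#   chat("Hello from a local model")
```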
-
I wonder if you can install AnythingLLM with LM Studio and use your local LLM instead of ChatGPT? Thanks in advance!
-
### Contact Details
_No response_
### What happened?
I have Ollama/openchat running behind the OpenAI-compatible frontend of LiteLLM.
The chat completion never "finishes" when the bot is respon…
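Context for the "never finishes" symptom: OpenAI-style streaming sends `data: {...}` SSE lines and signals completion with a `data: [DONE]` sentinel; a client hangs if the proxy never emits it. A sketch of a parser that shows where that sentinel is expected (the sample lines are illustrative, not captured from LiteLLM):

```python
import json

def parse_sse_stream(lines):
    """Yield content deltas from OpenAI-style SSE lines; stop at the
    'data: [DONE]' sentinel that marks a finished completion."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines / keep-alive comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            return  # stream finished cleanly
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]
```

If the proxy drops the `[DONE]` line (or never sends a chunk with a `finish_reason`), a loop like this never terminates, which matches the reported behavior.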
-
**Describe the feature you'd like**
oobabooga's text-generation-webui has a plugin that emulates the OpenAI API, but in the Flowise UI it is not possible to set the OpenAI URL.
Setting the OPENAI…
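As a workaround sketch: many tools built on the OpenAI client read the base URL from the environment, so exporting it before launch may redirect calls to the emulated API. Whether Flowise honours this is exactly what the issue asks; the port below is an assumption, so check your text-generation-webui configuration:

```python
import os

# Assumption: the emulated OpenAI API is served locally on port 5000.
# Emulators usually ignore the key, but the client requires one to be set.
os.environ["OPENAI_API_BASE"] = "http://localhost:5000/v1"
os.environ["OPENAI_API_KEY"] = "sk-local-dummy"

def openai_env():
    """Return the OpenAI-related settings a wrapped tool would inherit."""
    return {k: v for k, v in os.environ.items() if k.startswith("OPENAI_")}
```

The same effect can be had from a shell (`export OPENAI_API_BASE=...`) before starting the tool.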
-
Hi,
Basically the title. The intro suggests that OpenAI access can be replaced with locally running models (maybe with oobabooga-openai-api?). Anyway, I can't seem to find instructions / env settings for …