If you are using LocalAI, the endpoint for inferencing with the LocalAI API requires the /v1 suffix
because it communicates using the OpenAI API format - which is what AnythingLLM expects. Is there an API endpoint you can use on Gradio to generate a response? If you are going through LocalAI, the URL will still require the /v1,
since that is just how the LocalAI API works.
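For reference, here is a minimal sketch of what connecting to an OpenAI-compatible /v1 endpoint looks like in Python. The gradio.live URL and model name are placeholders, not your actual values - the point is only that the base URL must end in /v1:

```python
from openai import OpenAI

client = OpenAI(
    # Hypothetical gradio.live URL from a Colab notebook; note the /v1 suffix,
    # which OpenAI-compatible servers like LocalAI expect on their base URL.
    base_url="https://example-1234.gradio.live/v1",
    # LocalAI does not require a real API key by default, but the client needs a value.
    api_key="not-needed-for-local",
)

response = client.chat.completions.create(
    model="mistral-7b",  # placeholder: whatever model name your backend serves
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```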
How are you running AnythingLLM?
AnythingLLM desktop app
What happened?
Usually, when I run a program that depends on LocalAI, I can simply put in my gradio.live link (which comes from my Google Colab notebook hosting Mistral 7B v0.2) and it will act as if I am running the model locally. However, I get an error that my link must contain "/v1". I have never experienced anything like this before in other applications. This is my first time using AnythingLLM, and I am trying to avoid running Mistral 7B locally on my PC - I would prefer to run it through my Google Colab.
Are there known steps to reproduce?
No response