rashadphz / farfalle

🔍 AI search engine - self-host with local or cloud LLMs
https://www.farfalle.dev/
Apache License 2.0

OpenAI compatible API #39

Closed: Stargate256 closed this issue 2 weeks ago

Stargate256 commented 1 month ago

Hi, is it possible to connect to a local LLM server via an OpenAI-compatible API?

Basically I would like to use oobabooga/text-generation-webui with the OpenAI API extension.

I can't use Ollama because it is almost 3x slower than text-generation-webui, since it can't run EXL2 models.

yhyu13 commented 1 month ago

@Stargate256

Yes, in the .env file in the root folder, add:

OPENAI_API_KEY=sk-11111111111111111111111111111111
OPENAI_BASE_URL=http://127.0.0.1:5000/v1

But you need a model with function-calling abilities in order to call the search API; see https://github.com/rashadphz/farfalle/issues/15#issuecomment-2143314564
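
As a quick sanity check (not part of the original reply), you can point the official openai Python client at the same base URL before wiring it into farfalle; the model name below is a placeholder for whatever text-generation-webui is serving:

# Sanity check for a local OpenAI-compatible endpoint; assumes `pip install openai` (v1.x).
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:5000/v1",            # same value as OPENAI_BASE_URL
    api_key="sk-11111111111111111111111111111111",  # dummy key; local servers usually ignore it
)

resp = client.chat.completions.create(
    model="local-model",  # placeholder; use whatever model name the server exposes
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(resp.choices[0].message.content)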

Stargate256 commented 1 month ago

What do I need to select when I run it, then?

Stargate256 commented 1 month ago

I tried this and even added OPENAI_BASE_URL=${OPENAI_BASE_URL} to docker-compose.dev.yaml (sketched below), but it doesn't work. There are no errors; it just doesn't do anything.

LLM: phi3-medium-8_0bpw-exl2
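
Roughly what that docker-compose.dev.yaml change looks like; the service name "backend" is a guess for illustration, not taken from the repo:

# Sketch of the relevant part of docker-compose.dev.yaml.
services:
  backend:
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OPENAI_BASE_URL=${OPENAI_BASE_URL}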

Stargate256 commented 1 month ago

@yhyu13 Are you sure that you don't need to do anything else?

yhyu13 commented 3 weeks ago

Sorry, I missed that.

In src/backend/related_queries.py, I added openai.base_url support so the OpenAI URL can be configured via the .env file:

 load_dotenv()

+# Fall back to the official endpoint; the openai client expects the /v1 suffix.
+openai.base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
+print(f"openai.base_url is {openai.base_url}")
+
 OLLAMA_HOST = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
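
For completeness, a sketch of the same configuration using an explicit client instance rather than the module-level setting (assumes openai>=1.0; the environment variables are the ones from the .env above):

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

# Bind the base URL to a client instance instead of mutating the openai module global.
client = OpenAI(
    base_url=os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
    api_key=os.environ.get("OPENAI_API_KEY"),
)
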
rashadphz commented 2 weeks ago

Just added support for this through https://www.litellm.ai/! You should be able to do this by setting OPENAI_BASE_URL in your .env. Let me know if you have any trouble setting this up.
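
Not from the maintainer's comment, but roughly how LiteLLM reaches an OpenAI-compatible server, independent of farfalle's own wiring; the "openai/" prefix and the model name are illustrative:

# Minimal sketch; assumes `pip install litellm`. The "openai/" prefix tells LiteLLM
# to treat the target as an OpenAI-compatible API at the given api_base.
from litellm import completion

response = completion(
    model="openai/local-model",                      # placeholder model name
    messages=[{"role": "user", "content": "ping"}],
    api_base="http://127.0.0.1:5000/v1",             # same value as OPENAI_BASE_URL
    api_key="sk-11111111111111111111111111111111",   # dummy key for a local server
)
print(response.choices[0].message.content)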

anzestrela commented 1 week ago

And how do I make it use an OpenAI-compatible API? (Sorry for the late follow-up; my AI server had died.)