rashadphz / farfalle

🔍 AI search engine - self-host with local or cloud LLMs
https://www.farfalle.dev/
Apache License 2.0
2.76k stars 245 forks

Fix the incompatibility of Ollama and Groq JSON responses and update default model selection #87

Closed init0xyz closed 2 months ago

init0xyz commented 3 months ago

The changes included in this PR:

Incompatibility of Ollama and Groq JSON responses

This problem has been mentioned in issue 1, issue 2, and issue 3. It is mainly caused by the function-calling support that instructor requires for structured JSON responses, which does not work reliably with Groq's Llama models or Ollama models (it looks like a litellm bug) when using expert search and generating related queries. To solve this, instructor should use JSON mode rather than tools for the response model when the provider is Groq or Ollama. Based on my tests, this change stabilizes structured generation for both Groq and Ollama.
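For illustration, here is a minimal sketch of the mode switch, assuming instructor's litellm integration; the `RelatedQueries` model, the `get_structured_client` helper, and the model names are hypothetical and not farfalle's actual code.

```python
# Sketch: select instructor's JSON mode for Groq/Ollama instead of the
# default tool-calling mode. Helper and model names are illustrative.
import instructor
import litellm
from pydantic import BaseModel


class RelatedQueries(BaseModel):
    queries: list[str]


def get_structured_client(model: str) -> instructor.Instructor:
    # Tool calling is unreliable with Groq's Llama and Ollama models,
    # so fall back to plain JSON-mode output for those providers.
    if model.startswith(("groq/", "ollama/")):
        mode = instructor.Mode.JSON
    else:
        mode = instructor.Mode.TOOLS
    return instructor.from_litellm(litellm.completion, mode=mode)


client = get_structured_client("groq/llama-3.1-70b-versatile")
related = client.chat.completions.create(
    model="groq/llama-3.1-70b-versatile",
    response_model=RelatedQueries,
    messages=[{"role": "user", "content": "Suggest related queries about solar panels."}],
)
print(related.queries)
```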

Update the default model selection

  1. Change the default "fast" model from GPT-3.5-turbo to GPT-4o-mini, a more capable model at no extra cost.
  2. Update the Groq model from llama3-70b to llama3.1-70b, and the Ollama model from llama3 to llama3.1 (a sketch of the updated defaults follows this list).
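As a sketch only, the updated defaults could be represented like this; the dict name and the exact model identifiers (for example Groq's `llama-3.1-70b-versatile` alias) are assumptions, not the PR's literal code.

```python
# Illustrative default-model map after the update; names are assumptions.
DEFAULT_MODELS = {
    "fast": "gpt-4o-mini",                    # was gpt-3.5-turbo
    "groq": "groq/llama-3.1-70b-versatile",   # was llama3-70b
    "ollama": "ollama/llama3.1",              # was ollama/llama3
}
```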

Third-party OpenAI-proxy server

Add support for third-party OpenAI-compatible proxy servers by passing the OPENAI_API_BASE environment variable through the docker-compose file.
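As a rough sketch of how the variable takes effect: litellm reads OPENAI_API_BASE from the environment when routing OpenAI models, so setting it in docker-compose redirects requests to the proxy. The proxy URL below is a placeholder.

```python
# Sketch: pointing OpenAI traffic at a third-party proxy via the
# OPENAI_API_BASE environment variable (placeholder URL).
import os

import litellm

os.environ["OPENAI_API_BASE"] = "https://my-openai-proxy.example.com/v1"

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```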

vercel[bot] commented 3 months ago

@init0xyz is attempting to deploy a commit to the rashadphil's projects Team on Vercel.

A member of the Team first needs to authorize it.

rashadphz commented 2 months ago

thanks!!