rashadphz / farfalle

🔍 AI search engine - self-host with local or cloud LLMs
https://www.farfalle.dev/
Apache License 2.0
2.63k stars 234 forks

Request: Add OPENAI_API_URL #1

Closed azdolinski closed 3 months ago

azdolinski commented 4 months ago

1) Please consider adding the environment variable OPENAI_API_URL. This addition will facilitate communication with LiteLLM, which adheres to the OpenAI API protocol and acts as a local proxy. Through this configuration, you'll also gain the capability to connect to Ollama, enabling local LLM interactions.

2) LiteLLM can be deployed as a container and supports the /models [/v1/models] API endpoint, so you could also read the list of available models:

curl -X 'GET' \
  'http://localhost:4000/v1/models' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer sk-XXXXXXXXXXX'

Example response:

{
  "data": [
    {
      "id": "together_ai-CodeLlama-34b-Instruct",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-4-turbo",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "anthropic-claude-3-haiku-20240307",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-4",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "google-gemini-1.5-pro-preview-0409",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "perplexity-mistral-7b-instruct",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "huggingface-zephyr-beta",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "together_ai-CodeLlama-34b-Python-completion",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-whisper",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-3.5-turbo-16k",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "ollama-mistral-7b",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "google-gemini-pro",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "groq-llama3-70b-8192",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "anthropic-claude-3-opus-20240229",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-4-vision-preview",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-4-32k",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "groq-llama3-8b-8192",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "huggingface-Xwin-Math-70B-V1.0",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-4o",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "groq-mixtral-8x7b-32768",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "google-gemini-1.5-flash-preview-0514",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-3.5-turbo",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "anthropic-claude-3-sonnet-20240229",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "ollama-mxbai-embed-large",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "groq-gemma-7b-it-8192",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "perplexity-mixtral-8x22b",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    }
  ],
  "object": "list"
}
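A client could fetch and flatten that list with a few lines of stdlib Python; this is a sketch, assuming the same host, port, and key as the curl example above:

```python
# Sketch: read the model list from a LiteLLM proxy's /v1/models endpoint.
# Host, port, and key are the assumptions from the curl example above.
import json
import urllib.request

def extract_model_ids(models_response: dict) -> list[str]:
    """Flatten a /v1/models payload into a list of model ids."""
    return [entry["id"] for entry in models_response.get("data", [])]

def fetch_models(base_url: str, api_key: str) -> dict:
    request = urllib.request.Request(
        f"{base_url}/v1/models",
        headers={
            "accept": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    models = fetch_models("http://localhost:4000", "sk-XXXXXXXXXXX")
    print(extract_model_ids(models))
```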

Benefits: one OpenAI-compatible endpoint for 100+ LLM providers, load balancing, cost tracking, and a single place to keep all API keys.

Thank you for considering this enhancement.

rashadphz commented 4 months ago

Hey! I just added support for local models through Ollama. This doesn't cover all of the models you listed, but let me know if it helps.

azdolinski commented 4 months ago

@rashadphz My request is not directly related to local Ollama.

LiteLLM, hosted locally in Docker, lets you connect to multiple vendors (including Ollama). It gives you support for 100+ LLMs, load balancing, cost tracking, etc. https://docs.litellm.ai/docs/providers

[[farfalle]] ---(OpenAI API)--->[[LiteLLM (docker)]] ---(vendor API)----> [[OpenAI/Gemini/Groq/(ollama) etc..]]

For me, it makes it easy to manage (and route) connections to multiple AI clouds/vendors, and to keep all API keys in one place.
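With that topology, the client side only speaks the OpenAI protocol. A minimal stdlib sketch of a chat call through the proxy (host, model name, and key are assumptions for illustration):

```python
# Sketch: call a LiteLLM proxy's OpenAI-compatible chat endpoint directly.
# The host/port, key, and model name are placeholders for illustration.
import json
import urllib.request

def build_chat_payload(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat.completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(base_url: str, api_key: str, model: str, prompt: str) -> str:
    request = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("http://localhost:4000", "sk-XXXXXXXXXXX",
               "groq-llama3-70b-8192", "Hello"))
```

Swapping vendors then only means changing the model id exposed by the proxy; the client code stays the same.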

rashadphz commented 4 months ago

Got it, just added this to the roadmap.

getofferhelp commented 4 months ago

@rashadphz Great job! Would you consider adding DeepSeek support? I could not find where in the code to change api.openai.com to api.deepseek.com. T T

Adding it to the repo would be great; alternatively, could the code be changed so that we can override the OpenAI URL locally?

DeepSeek V2 is super cool (in my Chinese literature and logic tests it performs better than GPT-4 and Llama 3 70B); its docs say:

The DeepSeek API utilizes an API format compatible with OpenAI. By modifying the configuration, you can use the OpenAI SDK to access the DeepSeek API, or employ software that is compatible with the OpenAI API.

Parameter     Value
base_url *    https://api.deepseek.com/
model         deepseek-chat

For compatibility with OpenAI, you may also set the base_url to https://api.deepseek.com/v1.
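Under those settings, a minimal sketch with the OpenAI SDK (the base_url and model name come from the DeepSeek docs quoted above; the key handling is an assumption):

```python
# Sketch: point the OpenAI Python SDK at DeepSeek's OpenAI-compatible API.
# base_url and model name come from the DeepSeek docs quoted above; the
# API key value is a placeholder.
DEEPSEEK_BASE_URL = "https://api.deepseek.com/v1"
DEEPSEEK_MODEL = "deepseek-chat"

def ask_deepseek(prompt: str, api_key: str) -> str:
    from openai import OpenAI  # pip install openai
    client = OpenAI(base_url=DEEPSEEK_BASE_URL, api_key=api_key)
    response = client.chat.completions.create(
        model=DEEPSEEK_MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```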

getofferhelp commented 4 months ago

By the way, the hf page of deepseek v2 is here:

https://huggingface.co/deepseek-ai/DeepSeek-V2

chunzha1 commented 4 months ago

@getofferhelp @azdolinski I found that you can refer to this link and modify line 50 in chat.py within the backend to something like return OpenAILike(model="my model", api_base="https://hostname.com/v1", api_key="fake"). You also need to add OpenAILike to the dependencies. It should be possible to modify the api_base this way; I plan to try it out first to see if it works.

Link: https://docs.llamaindex.ai/en/latest/api_reference/llms/openai_like/
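As a sketch of that suggestion: have the backend return a llama-index OpenAILike instance so api_base is configurable. The model name and URL are placeholders; install llama-index-llms-openai-like first.

```python
# Sketch of the OpenAILike approach suggested above. Model name and
# base URL are placeholders; pick whatever your proxy exposes.
API_BASE = "https://hostname.com/v1"
MODEL = "my-model"

def get_llm():
    # pip install llama-index-llms-openai-like
    from llama_index.llms.openai_like import OpenAILike
    # api_key can be a dummy value when the proxy does not check it.
    return OpenAILike(model=MODEL, api_base=API_BASE, api_key="fake")
```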

getofferhelp commented 4 months ago

> @getofferhelp @azdolinski I found that you can refer to this link to modify line 50 in chat.py within the backend ...

Great! Thank you so much! I would try it too.

ACTS-HORIZON commented 4 months ago

> @getofferhelp @azdolinski I found that you can refer to this link to modify line 50 in chat.py within the backend ...

I was able to get it working. You don't need OpenAILike, but there are two things you need to change.

Line 50 in chat.py needs to be return OpenAI(api_base="https://hostname.com/v1", model=model_mappings[model]), and line 24 in related_queries.py needs to be openai.AsyncOpenAI(base_url="https://hostname.com/v1", api_key="fake").

From here, you are able to change the name of the model in constants.py, and it should work.
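Putting those two edits together as a sketch (file names and line numbers come from the comment above; the base URL is a placeholder):

```python
# Sketch of the two edits described in the comment above. File names and
# line numbers come from the thread; the base URL is a placeholder.
API_BASE = "https://hostname.com/v1"

def get_llm(model: str, model_mappings: dict):
    # chat.py, line 50: point llama-index's OpenAI wrapper at the proxy.
    from llama_index.llms.openai import OpenAI
    return OpenAI(api_base=API_BASE, model=model_mappings[model])

def get_async_client():
    # related_queries.py, line 24: same base_url for the raw async client.
    import openai
    return openai.AsyncOpenAI(base_url=API_BASE, api_key="fake")
```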

Stargate256 commented 4 months ago

I am having the same problem; I changed the lines you suggested and now I get: 500: Error code: 401 - {'error': {'message': 'Incorrect API key provided: fake. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

Which is weird, as my AI server doesn't need an API key.

rashadphz commented 3 months ago

Just added support for all LiteLLM models!

Jiayou-Chao commented 3 weeks ago

> Just added support for all LiteLLM models!

Can you offer some documentation or examples of how to set the config file for LiteLLM models?