ItzCrazyKns / Perplexica

Perplexica is an AI-powered search engine. It is an open-source alternative to Perplexity AI.
MIT License
13.07k stars · 1.22k forks

Generic endpoint support for OpenAI API compatible providers #16

Closed · stelterlab closed this 4 months ago

stelterlab commented 4 months ago

**Is your feature request related to a problem? Please describe.** I would like to see generic support for OpenAI-API-compatible providers, as there are already quite a few LLM server backends, such as vLLM, TGI, and NVIDIA Triton, as well as providers like Mistral AI, that "speak" the OpenAI API.

**Describe the solution you'd like** I would like to specify a CHAT_MODEL_PROVIDER like custom-openai and then only specify the endpoint details (base URL, API key, model name).
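
For illustration, a minimal sketch of what such a provider could look like, assuming a LangChain-style ChatOpenAI wrapper (the wrapper and all names here are hypothetical, not the project's actual code):

```ts
// Hypothetical sketch of a generic "custom-openai" chat provider.
// Assumes the LangChain JS ChatOpenAI wrapper; all names are illustrative.
import { ChatOpenAI } from "@langchain/openai";

interface CustomOpenAIConfig {
  baseURL: string;   // e.g. a vLLM, TGI, or Mistral AI endpoint
  apiKey: string;
  modelName: string; // whatever model the backend serves
}

export const loadCustomOpenAIChatModel = (cfg: CustomOpenAIConfig) =>
  new ChatOpenAI({
    modelName: cfg.modelName,
    openAIApiKey: cfg.apiKey,
    // Redirect the underlying OpenAI client to the custom endpoint.
    configuration: { baseURL: cfg.baseURL },
  });
```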

**Describe alternatives you've considered** You could take a look at LiteLLM, which supports a whole bunch of endpoints.

https://docs.litellm.ai/docs/providers

**Additional context** Looking forward to the further development of this project. ;-)

ItzCrazyKns commented 4 months ago

Hi @stelterlab, thanks for the suggestion. It's great to see interest in expanding support for different OpenAI API providers. I'll make this feature a priority and work on adding it soon. Thanks for pointing out LiteLLM and others as examples.

83521166 commented 4 months ago

> Hi @stelterlab, thanks for the suggestion. It's great to see interest in expanding support for different OpenAI API providers. I'll make this feature a priority and work on adding it soon. Thanks for pointing out LiteLLM and others as examples.

Thank you for your efforts. I would like to remind you that, alongside the Chat API, there is also the OpenAI Embeddings API, and I hope you will consider how to elegantly support local embedding models.

ItzCrazyKns commented 4 months ago

> > Hi @stelterlab, thanks for the suggestion. It's great to see interest in expanding support for different OpenAI API providers. I'll make this feature a priority and work on adding it soon. Thanks for pointing out LiteLLM and others as examples.
>
> Thank you for your efforts. I would like to remind you that, alongside the Chat API, there is also the OpenAI Embeddings API, and I hope you will consider how to elegantly support local embedding models.

Definitely. I am working on support for local embedding models first; then we can support most of the LLM providers: https://github.com/ItzCrazyKns/Perplexica/issues/41

maxin9966 commented 4 months ago

@ItzCrazyKns The problem can be resolved in 90% of cases simply by being compatible with the OpenAI API, exposing a configurable base_url, and allowing a customizable model_name. This is because many people use self-deployed services, and with an approach similar to OneAPI they can integrate their own services while staying compatible with the OpenAI API.
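
Concretely, with the stock openai SDK the whole integration reduces to two knobs (a sketch; the endpoint URL and model name below are placeholders):

```ts
// Sketch: any OpenAI-API-compatible backend (vLLM, TGI, a OneAPI proxy, ...)
// works with the official openai SDK once base URL and model are configurable.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:8000/v1", // placeholder: your self-hosted endpoint
  apiKey: "sk-anything",               // many local backends ignore the key
});

const res = await client.chat.completions.create({
  model: "my-local-model", // placeholder: the customizable model_name
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(res.choices[0].message.content);
```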

ItzCrazyKns commented 4 months ago

> @ItzCrazyKns The problem can be resolved in 90% of cases simply by being compatible with the OpenAI API, exposing a configurable base_url, and allowing a customizable model_name. This is because many people use self-deployed services, and with an approach similar to OneAPI they can integrate their own services while staying compatible with the OpenAI API.

I have no issues with adding that; I am just talking about the embedding models. Even if I add it, there will be a lot of issues saying embedding models don't work, because most providers only support the OpenAI format for the chat completions API; Groq, for example, doesn't support embedding models either. That's why I am first adding support for local embedding models before adding support for custom OpenAI endpoints.
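
For reference, here is one way embeddings can run fully locally, sketched with Transformers.js (this is only an illustration of the idea, not necessarily what Perplexica will ship; the model name is an example):

```ts
// Sketch: local embeddings via Transformers.js, so no OpenAI-format
// embeddings endpoint is needed on the provider side.
import { pipeline } from "@xenova/transformers";

const embedder = await pipeline(
  "feature-extraction",
  "Xenova/all-MiniLM-L6-v2" // example model, downloaded and run locally
);

const embed = async (texts: string[]): Promise<number[][]> => {
  // Mean-pool and normalize token embeddings into one vector per text.
  const output = await embedder(texts, { pooling: "mean", normalize: true });
  return output.tolist() as number[][];
};
```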

xbl916 commented 4 months ago

> > @ItzCrazyKns The problem can be resolved in 90% of cases simply by being compatible with the OpenAI API, exposing a configurable base_url, and allowing a customizable model_name. This is because many people use self-deployed services, and with an approach similar to OneAPI they can integrate their own services while staying compatible with the OpenAI API.
>
> I have no issues with adding that; I am just talking about the embedding models. Even if I add it, there will be a lot of issues saying embedding models don't work, because most providers only support the OpenAI format for the chat completions API; Groq, for example, doesn't support embedding models either. That's why I am first adding support for local embedding models before adding support for custom OpenAI endpoints.

Could you please take a look at this project: https://github.com/songquanpeng/one-api? It transforms locally deployed models into the OpenAI format, including embedding models and speech-to-text models. I've been using it to proxy local models in my other AI application projects, which only requires customizing the openai_base_url. Hence, I would like to request the addition of a customizable openai_base_url option in this project. Thank you.
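
With such a proxy in place, even embeddings stay on the standard OpenAI surface; a sketch (the proxy URL, token, and model name are placeholders):

```ts
// Sketch: once one-api exposes a local model in OpenAI format,
// embeddings go through the standard endpoint too. Values are placeholders.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:3000/v1", // one-api proxy address
  apiKey: "sk-oneapi-token",
});

const { data } = await client.embeddings.create({
  model: "bge-large-zh", // a local embedding model behind the proxy
  input: ["hello world"],
});
console.log(data[0].embedding.length);
```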

ItzCrazyKns commented 4 months ago

This feature has been added to the main branch and is released in https://github.com/ItzCrazyKns/Perplexica/releases/tag/v1.3.0