FlowiseAI / Flowise

Drag & drop UI to build your customized LLM flow
https://flowiseai.com
Apache License 2.0

[FEATURE] Please add support for Litellm to this project. Will be beneficial for both flowise team and devs :-)) #1863

Closed · Greatz08 closed this 6 months ago

Greatz08 commented 7 months ago

Please add support for LiteLLM, because it would let us use 100+ LLMs easily. LiteLLM lets us call all the different LLM APIs using the OpenAI format, which greatly reduces our burden: instead of writing different code with different parameters for each provider's API, we could write code once in the OpenAI format and still use many different LLMs. I did see issue #868, which was related to this request, but so far no progress has been made on integrating this awesome project into Flowise, so I'm asking you again to consider a way to integrate it.

We really only need a few parameters. The main one is the URL of the local proxy server, which we run ourselves, for example with `litellm --model groq/llama2-70b-4096`. That starts a proxy server that is fully compatible with the OpenAI API format and prints a random local URL such as http://0.0.0.0:4009. We can then run, for example, this Python program:

```python
import openai  # openai v1.0.0+

# Point the OpenAI client at the local LiteLLM proxy instead of api.openai.com
client = openai.OpenAI(
    api_key="gsk_uiQgaOeMlvttRSfGb5ZxWGdyb3FYQ6PHTrOmNM3VsdxkU7n",  # Groq API key
    base_url="http://0.0.0.0:40043",  # set proxy as base_url
)

response = client.chat.completions.create(
    model="groq/llama2-70b-4096",
    messages=[
        {"role": "user", "content": "this is a test request, write a short poem"}
    ],
)

print(response)
```

First, I ran `litellm --model groq/llama2-70b-4096` to start the local proxy server and got http://0.0.0.0:40043 as my local OpenAI-API-compatible proxy URL. Notice that I am using the openai library and the OpenAI format, but the model is from Groq and I am using a Groq API key, not an OpenAI key (we also need to export the matching environment variable, e.g. `export GROQ_API_KEY="apikey"`). When we then run the Python program, we simply get a proper response back.

We don't have to dig into the documentation of each different LLM. We only have to:

1. Change the URL to our local proxy server.
2. Set the API key of the LLM we are using.
3. Change the model name.

That's all. I have tested this and got correct output, and you can easily test it yourself. I hope this example clears up any confusion about the use case and shows how simple and time-saving it is; the same approach works for other LLMs too. I hope to see this implemented soon, because there was no further response in the previously mentioned issue, which is why I am highlighting it again. It would be very beneficial for the whole project: you wouldn't have to create so many options/components for other LLMs, and people could simply use LiteLLM to connect Flowise to their models and converse easily. If possible, please add this in an upcoming update :-)) Thank you.
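To make the "only three values change" point concrete, here is a minimal sketch (not part of the original request) that keeps the client code identical and isolates the values you would swap per provider. The port, key, and model name below are example placeholders, assuming a LiteLLM proxy is already running locally:

```python
import openai  # openai v1.0.0+

# The only values that change when switching providers behind a LiteLLM proxy
# (placeholders; your proxy port, key, and model will differ):
PROXY_URL = "http://0.0.0.0:40043"      # 1. local LiteLLM proxy URL
API_KEY = "your-provider-api-key"       # 2. API key of the underlying provider
MODEL = "groq/llama2-70b-4096"          # 3. model name in LiteLLM's provider/model format

client = openai.OpenAI(api_key=API_KEY, base_url=PROXY_URL)

response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
)
print(response.choices[0].message.content)
```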

Greatz08 commented 7 months ago

I would like to add one more thing: if the team runs into issues with the implementation, they can refer to Langflow, another Flowise-like project, which has a ChatLiteLLM component that I think could accomplish this task. However, there is no option there to set my own local proxy server URL, so I changed a few things in its code but couldn't get any response, and I don't know what is going wrong. I'm unable to get many things working in Langflow, whereas in Flowise everything works out of the box, which really impressed me, so I would like to see this implemented here in an easy way.

HenryHengZJ commented 7 months ago

Technically you can already use it by replacing the base URL and headers: https://docs.flowiseai.com/integrations/langchain/chat-models/azure-chatopenai#custom-base-url-and-headers
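For readers who want to see what that node configuration amounts to outside the Flowise UI, here is a rough LangChain-level sketch of pointing a ChatOpenAI model at a LiteLLM proxy via a custom base URL. This is an illustration, not taken from the Flowise docs; the proxy URL, key, and model name are placeholders:

```python
from langchain_openai import ChatOpenAI

# Point LangChain's OpenAI chat model at a LiteLLM proxy instead of OpenAI.
# URL, key, and model name are placeholders for whatever your proxy exposes.
llm = ChatOpenAI(
    model="groq/llama2-70b-4096",
    api_key="your-provider-api-key",
    base_url="http://0.0.0.0:40043",  # custom base URL, i.e. the LiteLLM proxy
)

print(llm.invoke("this is a test request, write a short poem").content)
```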

Greatz08 commented 7 months ago

@HenryHengZJ With the ChatOpenAI custom component of Flowise we can use LiteLLM too :-)). The mistake I was making, and the reason it wasn't working for me, was that I was putting a URL like http://0.0.0.0:8807 (randomly generated by LiteLLM each time a new proxy server is created) into the Base URL field in the Advanced section of the ChatOpenAI component.

What we need to do instead is make the local proxy server publicly reachable. Either expose it with a Cloudflare Tunnel (or a similar Cloudflare service for publishing a local port/service) and use that URL as the base URL, or use a reverse proxy such as Nginx Proxy Manager, Traefik, or Caddy. You can run the reverse proxy self-hosted, which is what I did, or on a purchased server: run LiteLLM on the server, point the reverse proxy at LiteLLM's port under your own domain, and then use that domain as the base URL. With that, we can easily communicate with the LiteLLM proxy server; I tested it personally.

I just wanted to share my mistake and its solution so that people don't have to suffer or waste their time. Maybe you can write a guide or documentation specifically for LiteLLM, because it is becoming a popular project and will be used by many people in the future; one good guide could help a lot of those who still don't understand what I mean by Cloudflare Tunnels or reverse proxy managers :-)) If you want to add something here, please do so, and then you can close the issue :-))
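As a quick sanity check before pasting the public URL into Flowise's Base URL field, you could verify that the exposed proxy answers from outside your machine. This is only a sketch: `https://litellm.example.com` stands in for whatever domain your tunnel or reverse proxy serves, and the key and model are placeholders:

```python
import openai  # openai v1.0.0+

# Verify the publicly exposed LiteLLM proxy responds before configuring Flowise.
client = openai.OpenAI(
    api_key="your-provider-api-key",
    base_url="https://litellm.example.com",  # hypothetical domain behind your tunnel/reverse proxy
)

response = client.chat.completions.create(
    model="groq/llama2-70b-4096",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```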

instplanet commented 2 months ago

@HakaishinShwet How did you change the name of the model in the input field? I am using Llama 3.1 through LiteLLM, and I'm not able to select any model other than the ChatGPT ones.