FlowiseAI / Flowise

Drag & drop UI to build your customized LLM flow
https://flowiseai.com
Apache License 2.0

Support for OPENAI_ORGANIZATION in OpenAI Embeddings #868

Open cxadmin opened 1 year ago

cxadmin commented 1 year ago

Description: Currently, when using the OpenAI API, it is possible to use a key associated with an organization by sending the OPENAI_ORGANIZATION value with the request. However, the OpenAI Embeddings feature lacks this option, which is limiting for users who wish to run embeddings under their organization's credentials.

Use Case:

- Organizations with Multiple Developers: Larger organizations with multiple developers prefer to manage their API keys under the organization rather than as individual keys. It makes it easier to manage permissions and billing and to track usage.
- Project-Specific Access: Using organization-wide keys can help in setting project-specific access and limitations, allowing better cost and usage control.

Proposed Solution:

- Extend Embedding Feature: Modify the OpenAI Embeddings interface to support the OPENAI_ORGANIZATION parameter (see the sketch below).
- Documentation Update: Update the OpenAI Embeddings documentation to reflect this change and guide users on how to use organization keys.
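As an illustration of what the change might look like: Flowise's embedding node wraps LangChain's OpenAIEmbeddings, and LangChain's JS client can already forward an organization ID through its configuration options. A minimal sketch, assuming the @langchain/openai package and reusing the OPENAI_ORGANIZATION environment variable name from this issue:

import { OpenAIEmbeddings } from "@langchain/openai";

// Sketch only: forward the organization ID so that usage and billing
// are scoped to the organization rather than the individual key holder.
const embeddings = new OpenAIEmbeddings({
  openAIApiKey: process.env.OPENAI_API_KEY,
  configuration: {
    organization: process.env.OPENAI_ORGANIZATION,
  },
});

// Requests made with this client are attributed to the organization.
const vector = await embeddings.embedQuery("hello from the org account");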

ishaan-jaff commented 10 months ago

@cxadmin

I'm the maintainer of LiteLLM. We provide an open-source proxy for load balancing Azure, OpenAI, and any LiteLLM-supported LLM; it can process 500+ requests/second.

From this thread it looks like you're trying to have a central location to manage keys/deployments.

Our proxy will allow you to set API keys and maximize throughput by load balancing between Azure OpenAI instances. I hope our solution makes this easier for you. (I'd love feedback if you're trying to do this.)

Here's the quick start:

Doc: https://docs.litellm.ai/docs/simple_proxy#load-balancing---multiple-instances-of-1-model

Step 1: Create a config.yaml

model_list:
  - model_name: gpt-4    # all three entries share one public model name
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
      api_version: "2023-05-15"
      api_key:           # Azure API key (omitted in the original)
  - model_name: gpt-4
    litellm_params:
      model: azure/gpt-4
      api_key:           # Azure API key (omitted in the original)
      api_base: https://openai-gpt-4-test-v-2.openai.azure.com/
  - model_name: gpt-4
    litellm_params:
      model: azure/gpt-4
      api_key:           # Azure API key (omitted in the original)
      api_base: https://openai-gpt-4-test-v-2.openai.azure.com/
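Because all three deployments are registered under the same model_name, LiteLLM treats them as a single pool and load-balances requests for "gpt-4" across them; see the load-balancing doc linked above.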

Step 2: Start the litellm proxy:

litellm --config /path/to/config.yaml
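(The request in Step 3 below assumes the proxy is listening on port 8000, its default at the time of this thread.)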

Step 3: Make a request to the LiteLLM proxy:

curl --location 'http://0.0.0.0:8000/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
      "model": "gpt-4",
      "messages": [
        {
          "role": "user",
          "content": "what llm are you"
        }
      ]
    }'

dkindlund commented 9 months ago

Hey @ishaan-jaff , currently Flowise has no native Chat Model nodes or LLM nodes or Embedding nodes that can integrate with a self-hosted LiteLLM proxy. I'd love for those nodes to be implemented to make it easier to leverage LiteLLM. @HenryHengZJ , is that something anyone on your team might be working on currently?