danny-avila / LibreChat

Enhanced ChatGPT Clone: Features OpenAI, Assistants API, Azure, Groq, GPT-4 Vision, Mistral, Bing, Anthropic, OpenRouter, Vertex AI, Gemini, AI model switching, message search, langchain, DALL-E-3, ChatGPT Plugins, OpenAI Functions, Secure Multi-User System, Presets, completely open-source for self-hosting. More features in development
https://librechat.ai/
MIT License

[Bug]: Azure Assistant not working with default `.env` #2172

Closed: fkohrt closed this issue 3 months ago

fkohrt commented 3 months ago

What happened?

The default `.env` contains the line `ASSISTANTS_API_KEY=user_provided`. When pre-configuring Azure OpenAI models, this setting makes it impossible to use assistants because a user-provided key is missing. The Azure setup only works after commenting that line out.
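For reference, the relevant `.env` line and the workaround described above, shown side by side for illustration:

```.env
# Shipped default: every user must supply their own Assistants API key,
# which also breaks assistants pre-configured for Azure in librechat.yaml.
ASSISTANTS_API_KEY=user_provided

# Workaround: comment the line out so the Azure credentials from
# librechat.yaml are used instead.
# ASSISTANTS_API_KEY=user_provided
```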

Steps to Reproduce

Configure Azure OpenAI models with assistants enabled, then try to use the assistant builder to create a new assistant and observe the error message.

What browsers are you seeing the problem on?

No response

Relevant log output

No response

Screenshots

No response

danny-avila commented 3 months ago

You don't need to use the environment variables for Azure Assistants at all (and you don't need to keep the default values either; you can easily change them).

This is an Assistant I have on Azure, and I do not have any environment variables set for Assistants/Azure.

[screenshot: an Assistant running on Azure]

https://docs.librechat.ai/install/configuration/azure_openai.html#using-assistants-with-azure

Here's my working setup with the `librechat.yaml` file:

```yaml
version: 1.0.5
cache: true
endpoints:
  azureOpenAI:
    titleModel: "gpt-3.5-turbo-1106"
    titleConvo: true
    assistants: true # <-------- critical
    groups:
    - group: "region-westus"
      apiKey: "${WESTUS_API_KEY}"
      instanceName: "region-westus"
      version: "2024-03-01-preview"
      models:
        gpt-3.5-turbo:
          deploymentName: "gpt-35-turbo"
        gpt-3.5-turbo-1106:
          deploymentName: "gpt-35-turbo-1106"
        gpt-4:
          deploymentName: "gpt-4"
    - group: "region-sweden"
      apiKey: "${SWEDEN_API_KEY}"
      instanceName: "region-sweden"
      deploymentName: gpt-4-1106-preview
      version: "2024-03-01-preview"
      assistants: true # <-------- critical
      models:
        gpt-4-turbo: true
    - group: "azure-openai"
      apiKey: "${AZURE_OAI_API_KEY}"
      instanceName: "azure-openai"
      deploymentName: gpt-4-1106-preview
      version: "2024-03-01-preview"
      assistants: true # <-------- critical
      models:
        gpt-4-1106-preview: true
        gpt-4-vision-preview:
          deploymentName: gpt-4-vision-preview
```

[screenshot: list of assistants]

[screenshot: chatting with a supported assistant]

Also, if you are using Docker, make sure to run `docker compose pull` to pull the latest built image.
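For example, with the standard Docker Compose setup:

```sh
docker compose pull    # fetch the latest built image
docker compose up -d   # recreate the containers with the updated image
```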

fkohrt commented 3 months ago

I am not sure I understand what you are saying. At first, I configured Azure OpenAI as follows:

`.env`:

```.env
#=====================================================================#
#                       LibreChat Configuration                       #
#=====================================================================#
# Please refer to the reference documentation for assistance          #
# with configuring your LibreChat environment. The guide is           #
# available both online and within your local LibreChat               #
# directory:                                                          #
# Online: https://docs.librechat.ai/install/configuration/dotenv.html #
# Locally: ./docs/install/configuration/dotenv.md                     #
#=====================================================================#

#==================================================#
#               Server Configuration               #
#==================================================#

HOST=localhost
PORT=3080

MONGO_URI=mongodb://127.0.0.1:27017/LibreChat

DOMAIN_CLIENT=[REDACTED]
DOMAIN_SERVER=[REDACTED]

NO_INDEX=true

#===============#
# Debug Logging #
#===============#

DEBUG_LOGGING=true
DEBUG_CONSOLE=false

#=============#
# Permissions #
#=============#
# UID=1000
# GID=1000

#===============#
# Configuration #
#===============#
# Use an absolute path, a relative path, or a URL

# CONFIG_PATH="/alternative/path/to/librechat.yaml"

#===================================================#
#                     Endpoints                     #
#===================================================#

# ENDPOINTS=openAI,assistants,azureOpenAI,bingAI,google,gptPlugins,anthropic
ENDPOINTS=azureOpenAI,gptPlugins,assistants

PROXY=

#===================================#
# Known Endpoints - librechat.yaml  #
#===================================#
# https://docs.librechat.ai/install/configuration/ai_endpoints.html

# GROQ_API_KEY=
# SHUTTLEAI_KEY=
# OPENROUTER_KEY=
# MISTRAL_API_KEY=
# ANYSCALE_API_KEY=
# FIREWORKS_API_KEY=
# PERPLEXITY_API_KEY=
# TOGETHERAI_API_KEY=

#============#
# Anthropic  #
#============#

ANTHROPIC_API_KEY=user_provided
# ANTHROPIC_MODELS=claude-3-opus-20240229,claude-3-sonnet-20240229,claude-2.1,claude-2,claude-1.2,claude-1,claude-1-100k,claude-instant-1,claude-instant-1-100k
# ANTHROPIC_REVERSE_PROXY=

#============#
# Azure      #
#============#

# Note: these variables are DEPRECATED
# Use the `librechat.yaml` configuration for `azureOpenAI` instead
# You may also continue to use them if you opt out of using the `librechat.yaml` configuration

# AZURE_OPENAI_DEFAULT_MODEL=gpt-3.5-turbo # Deprecated
# AZURE_OPENAI_MODELS=gpt-3.5-turbo,gpt-4 # Deprecated
# AZURE_USE_MODEL_AS_DEPLOYMENT_NAME=TRUE # Deprecated
# AZURE_API_KEY= # Deprecated
# AZURE_OPENAI_API_INSTANCE_NAME= # Deprecated
# AZURE_OPENAI_API_DEPLOYMENT_NAME= # Deprecated
# AZURE_OPENAI_API_VERSION= # Deprecated
# AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME= # Deprecated
# AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME= # Deprecated
# PLUGINS_USE_AZURE="true" # Deprecated

#============#
# BingAI     #
#============#

BINGAI_TOKEN=user_provided
# BINGAI_HOST=https://cn.bing.com

#============#
# Google     #
#============#

GOOGLE_KEY=user_provided
# GOOGLE_MODELS=gemini-pro,gemini-pro-vision,chat-bison,chat-bison-32k,codechat-bison,codechat-bison-32k,text-bison,text-bison-32k,text-unicorn,code-gecko,code-bison,code-bison-32k
# GOOGLE_REVERSE_PROXY=

#============#
# OpenAI     #
#============#

OPENAI_API_KEY=user_provided
# OPENAI_MODELS=gpt-3.5-turbo-0125,gpt-3.5-turbo-0301,gpt-3.5-turbo,gpt-4,gpt-4-0613,gpt-4-vision-preview,gpt-3.5-turbo-0613,gpt-3.5-turbo-16k-0613,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-3.5-turbo-1106,gpt-3.5-turbo-instruct,gpt-3.5-turbo-instruct-0914,gpt-3.5-turbo-16k

DEBUG_OPENAI=false

# TITLE_CONVO=false
# OPENAI_TITLE_MODEL=gpt-3.5-turbo

# OPENAI_SUMMARIZE=true
# OPENAI_SUMMARY_MODEL=gpt-3.5-turbo

# OPENAI_FORCE_PROMPT=true

# OPENAI_REVERSE_PROXY=

# OPENAI_ORGANIZATION=

#====================#
#   Assistants API   #
#====================#

ASSISTANTS_API_KEY=user_provided # <----- for me, this needs to be commented out for Azure Assistants to work
# ASSISTANTS_BASE_URL=
# ASSISTANTS_MODELS=gpt-3.5-turbo-0125,gpt-3.5-turbo-16k-0613,gpt-3.5-turbo-16k,gpt-3.5-turbo,gpt-4,gpt-4-0314,gpt-4-32k-0314,gpt-4-0613,gpt-3.5-turbo-0613,gpt-3.5-turbo-1106,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview

#============#
# OpenRouter #
#============#

# OPENROUTER_API_KEY=

#============#
# Plugins    #
#============#

# PLUGIN_MODELS=gpt-4,gpt-4-turbo-preview,gpt-4-0125-preview,gpt-4-1106-preview,gpt-4-0613,gpt-3.5-turbo,gpt-3.5-turbo-0125,gpt-3.5-turbo-1106,gpt-3.5-turbo-0613

DEBUG_PLUGINS=true

CREDS_KEY=[REDACTED]
CREDS_IV=[REDACTED]

# Azure AI Search
#-----------------
AZURE_AI_SEARCH_SERVICE_ENDPOINT=
AZURE_AI_SEARCH_INDEX_NAME=
AZURE_AI_SEARCH_API_KEY=

AZURE_AI_SEARCH_API_VERSION=
AZURE_AI_SEARCH_SEARCH_OPTION_QUERY_TYPE=
AZURE_AI_SEARCH_SEARCH_OPTION_TOP=
AZURE_AI_SEARCH_SEARCH_OPTION_SELECT=

# DALL·E
#----------------
# DALLE_API_KEY=
DALLE3_API_KEY="[REDACTED]"
# DALLE2_API_KEY=
# DALLE3_SYSTEM_PROMPT=
# DALLE2_SYSTEM_PROMPT=
# DALLE_REVERSE_PROXY=
DALLE3_BASEURL="[REDACTED]"
# DALLE2_BASEURL=

# DALL·E (via Azure OpenAI)
# Note: requires some of the variables above to be set
#----------------
DALLE3_AZURE_API_VERSION="2024-02-01"
# DALLE2_AZURE_API_VERSION=

# Google
#-----------------
GOOGLE_API_KEY="[REDACTED]"
GOOGLE_CSE_ID="[REDACTED]"

# SerpAPI
#-----------------
SERPAPI_API_KEY=

# Stable Diffusion
#-----------------
SD_WEBUI_URL=http://host.docker.internal:7860

# Tavily
#-----------------
TAVILY_API_KEY=

# Traversaal
#-----------------
TRAVERSAAL_API_KEY=

# WolframAlpha
#-----------------
WOLFRAM_APP_ID=[REDACTED]

# Zapier
#-----------------
ZAPIER_NLA_API_KEY=

#==================================================#
#                      Search                      #
#==================================================#

SEARCH=true
MEILI_NO_ANALYTICS=true
MEILI_HOST=http://0.0.0.0:7700
MEILI_MASTER_KEY=[REDACTED]

#===================================================#
#                    User System                    #
#===================================================#

#========================#
#       Moderation       #
#========================#
OPENAI_MODERATION=false
OPENAI_MODERATION_API_KEY=
# OPENAI_MODERATION_REVERSE_PROXY=

BAN_VIOLATIONS=true
BAN_DURATION=1000 * 60 * 60 * 2
BAN_INTERVAL=20

LOGIN_VIOLATION_SCORE=1
REGISTRATION_VIOLATION_SCORE=1
CONCURRENT_VIOLATION_SCORE=1
MESSAGE_VIOLATION_SCORE=1
NON_BROWSER_VIOLATION_SCORE=20

LOGIN_MAX=7
LOGIN_WINDOW=5
REGISTER_MAX=5
REGISTER_WINDOW=60

LIMIT_CONCURRENT_MESSAGES=true
CONCURRENT_MESSAGE_MAX=2

LIMIT_MESSAGE_IP=true
MESSAGE_IP_MAX=40
MESSAGE_IP_WINDOW=1

LIMIT_MESSAGE_USER=false
MESSAGE_USER_MAX=40
MESSAGE_USER_WINDOW=1

ILLEGAL_MODEL_REQ_SCORE=5

#========================#
#         Balance        #
#========================#
CHECK_BALANCE=false

#========================#
# Registration and Login #
#========================#

ALLOW_EMAIL_LOGIN=true
ALLOW_REGISTRATION=false
ALLOW_SOCIAL_LOGIN=false
ALLOW_SOCIAL_REGISTRATION=false

SESSION_EXPIRY=1000 * 60 * 15
REFRESH_TOKEN_EXPIRY=(1000 * 60 * 60 * 24) * 7

JWT_SECRET=[REDACTED]
JWT_REFRESH_SECRET=[REDACTED]

# Discord
DISCORD_CLIENT_ID=
DISCORD_CLIENT_SECRET=
DISCORD_CALLBACK_URL=/oauth/discord/callback

# Facebook
FACEBOOK_CLIENT_ID=
FACEBOOK_CLIENT_SECRET=
FACEBOOK_CALLBACK_URL=/oauth/facebook/callback

# GitHub
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
GITHUB_CALLBACK_URL=/oauth/github/callback

# Google
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
GOOGLE_CALLBACK_URL=/oauth/google/callback

# OpenID
OPENID_CLIENT_ID=
OPENID_CLIENT_SECRET=
OPENID_ISSUER=
OPENID_SESSION_SECRET=
OPENID_SCOPE="openid profile email"
OPENID_CALLBACK_URL=/oauth/openid/callback
OPENID_BUTTON_LABEL=
OPENID_IMAGE_URL=

#========================#
#  Email Password Reset  #
#========================#

EMAIL_SERVICE=
EMAIL_HOST=
EMAIL_PORT=25
EMAIL_ENCRYPTION=
EMAIL_ENCRYPTION_HOSTNAME=
EMAIL_ALLOW_SELFSIGNED=
EMAIL_USERNAME=
EMAIL_PASSWORD=
EMAIL_FROM_NAME=
EMAIL_FROM=noreply@librechat.ai

#========================#
#      Firebase CDN      #
#========================#

FIREBASE_API_KEY=
FIREBASE_AUTH_DOMAIN=
FIREBASE_PROJECT_ID=
FIREBASE_STORAGE_BUCKET=
FIREBASE_MESSAGING_SENDER_ID=
FIREBASE_APP_ID=

#===================================================#
#                        UI                         #
#===================================================#

APP_TITLE=LibreChat
# CUSTOM_FOOTER="My custom footer"
HELP_AND_FAQ_URL=https://librechat.ai
# SHOW_BIRTHDAY_ICON=true

#==================================================#
#                      Others                      #
#==================================================#
#   You should leave the following commented out   #

# NODE_ENV=

# REDIS_URI=
# USE_REDIS=

# E2E_USER_EMAIL=
# E2E_USER_PASSWORD=
```
`librechat.yaml`:

```yaml
# For more information, see the Configuration Guide:
# https://docs.librechat.ai/install/configuration/custom_config.html

# Configuration version (required)
version: 1.0.5

# Cache settings: Set to true to enable caching
cache: true

# Custom interface configuration
interface:
  # Privacy policy settings
  privacyPolicy:
    externalUrl: 'https://librechat.ai/privacy-policy'
    openNewTab: true

  # Terms of service
  termsOfService:
    externalUrl: 'https://librechat.ai/tos'
    openNewTab: true

# Example Registration Object Structure (optional)
# registration:
#   socialLogins: ['github', 'google', 'discord', 'openid', 'facebook']
#   allowedDomains:
#   - "gmail.com"

# fileConfig:
#   endpoints:
#     assistants:
#       fileLimit: 5
#       fileSizeLimit: 10  # Maximum size for an individual file in MB
#       totalSizeLimit: 50  # Maximum total size for all files in a single request in MB
#       supportedMimeTypes:
#         - "image/.*"
#         - "application/pdf"
#     openAI:
#       disabled: true  # Disables file uploading to the OpenAI endpoint
#     default:
#       totalSizeLimit: 20
#     YourCustomEndpointName:
#       fileLimit: 2
#       fileSizeLimit: 5
#   serverFileSizeLimit: 100  # Global server file size limit in MB
#   avatarSizeLimit: 2  # Limit for user avatar image size in MB

# rateLimits:
#   fileUploads:
#     ipMax: 100
#     ipWindowInMinutes: 60  # Rate limit window for file uploads per IP
#     userMax: 50
#     userWindowInMinutes: 60  # Rate limit window for file uploads per user

# Definition of custom endpoints
endpoints:
  assistants:
    # disableBuilder: false # Disable Assistants Builder Interface by setting to `true`
    # pollIntervalMs: 750  # Polling interval for checking assistant updates
    # timeoutMs: 180000  # Timeout for assistant operations
    # # Should only be one or the other, either `supportedIds` or `excludedIds`
    # supportedIds: ["asst_supportedAssistantId1", "asst_supportedAssistantId2"]
    # # excludedIds: ["asst_excludedAssistantId"]
    # # (optional) Models that support retrieval, will default to latest known OpenAI models that support the feature
    # retrievalModels: ["gpt-4-turbo-preview"]
    # # (optional) Assistant Capabilities available to all users. Omit the ones you wish to exclude. Defaults to list below.
    capabilities: ["code_interpreter", "actions", "tools"]
    # As of March 14th 2024, retrieval and streaming are not supported through Azure OpenAI; see https://docs.librechat.ai/install/configuration/azure_openai.html#using-assistants-with-azure
  azureOpenAI:
    titleModel: "gpt-4-1106-preview"
    plugins: true
    assistants: true
    summarize: true
    summarizeModel: "gpt-4-1106-preview"
    groups:
    - group: "group1"
      apiKey: "[REDACTED]"
      instanceName: "[REDACTED]"
      assistants: true
      version: "2024-02-15-preview"
      models:
        gpt-4-turbo:
          deploymentName: "[REDACTED]"
        gpt-4-vision-preview:
          deploymentName: "[REDACTED]"
  # custom:
  #   # Groq Example
  #   - name: 'groq'
  #     apiKey: '${GROQ_API_KEY}'
  #     baseURL: 'https://api.groq.com/openai/v1/'
  #     models:
  #       default: ['llama2-70b-4096', 'mixtral-8x7b-32768', 'gemma-7b-it']
  #       fetch: false
  #     titleConvo: true
  #     titleModel: 'mixtral-8x7b-32768'
  #     modelDisplayLabel: 'groq'
  #
  #   # Mistral AI Example
  #   - name: 'Mistral' # Unique name for the endpoint
  #     # For `apiKey` and `baseURL`, you can use environment variables that you define.
  #     # recommended environment variables:
  #     apiKey: '${MISTRAL_API_KEY}'
  #     baseURL: 'https://api.mistral.ai/v1'
  #
  #     # Models configuration
  #     models:
  #       # List of default models to use. At least one value is required.
  #       default: ['mistral-tiny', 'mistral-small', 'mistral-medium']
  #       # Fetch option: Set to true to fetch models from API.
  #       fetch: true  # Defaults to false.
  #
  #     # Optional configurations
  #
  #     # Title Conversation setting
  #     titleConvo: true # Set to true to enable title conversation
  #
  #     # Title Method: Choose between "completion" or "functions".
  #     # titleMethod: "completion"  # Defaults to "completion" if omitted.
  #
  #     # Title Model: Specify the model to use for titles.
  #     titleModel: 'mistral-tiny' # Defaults to "gpt-3.5-turbo" if omitted.
  #
  #     # Summarize setting: Set to true to enable summarization.
  #     # summarize: false
  #
  #     # Summary Model: Specify the model to use if summarization is enabled.
  #     # summaryModel: "mistral-tiny"  # Defaults to "gpt-3.5-turbo" if omitted.
  #
  #     # Force Prompt setting: If true, sends a `prompt` parameter instead of `messages`.
  #     # forcePrompt: false
  #
  #     # The label displayed for the AI model in messages.
  #     modelDisplayLabel: 'Mistral' # Default is "AI" when not set.
  #
  #     # Add additional parameters to the request. Default params will be overwritten.
  #     # addParams:
  #     #   safe_prompt: true # This field is specific to Mistral AI: https://docs.mistral.ai/api/
  #
  #     # Drop Default params parameters from the request. See default params in guide linked below.
  #     # NOTE: For Mistral, it is necessary to drop the following parameters or you will encounter a 422 Error:
  #     dropParams: ['stop', 'user', 'frequency_penalty', 'presence_penalty']
  #
  #   # OpenRouter Example
  #   - name: 'OpenRouter'
  #     # For `apiKey` and `baseURL`, you can use environment variables that you define.
  #     # recommended environment variables:
  #     # Known issue: you should not use `OPENROUTER_API_KEY` as it will then override the `openAI` endpoint to use OpenRouter as well.
  #     apiKey: '${OPENROUTER_KEY}'
  #     baseURL: 'https://openrouter.ai/api/v1'
  #     models:
  #       default: ['gpt-3.5-turbo']
  #       fetch: true
  #     titleConvo: true
  #     titleModel: 'gpt-3.5-turbo'
  #     # Recommended: Drop the stop parameter from the request as Openrouter models use a variety of stop tokens.
  #     dropParams: ['stop']
  #     modelDisplayLabel: 'OpenRouter'

# See the Custom Configuration Guide for more information:
# https://docs.librechat.ai/install/configuration/custom_config.html
```
`docker-compose.override.yml`:

```yml
version: '3.4'

# Please consult our docs for more info: https://docs.librechat.ai/install/configuration/docker_override.html

# TO USE THIS FILE, FIRST UNCOMMENT THE LINE ('services:')

# THEN UNCOMMENT ONLY THE SECTION OR SECTIONS CONTAINING THE CHANGES YOU WANT TO APPLY
# SAVE THIS FILE AS 'docker-compose.override.yaml'
# AND USE THE 'docker compose build' & 'docker compose up -d' COMMANDS AS YOU WOULD NORMALLY DO

# WARNING: YOU CAN ONLY SPECIFY EVERY SERVICE NAME ONCE (api, mongodb, meilisearch, ...)
# IF YOU WANT TO OVERRIDE MULTIPLE SETTINGS IN ONE SERVICE YOU WILL HAVE TO EDIT ACCORDINGLY

# EXAMPLE: if you want to use the config file and the latest numbered release docker image the result will be:

# services:
#   api:
#     volumes:
#       - ./librechat.yaml:/app/librechat.yaml
#     image: ghcr.io/danny-avila/librechat:latest

# ---------------------------------------------------

services:

  # USE LIBRECHAT CONFIG FILE
  api:
    volumes:
      - ./librechat.yaml:/app/librechat.yaml
      - ./api/app/clients/tools/.well-known:/app/api/app/clients/tools/.well-known

  # # LOCAL BUILD
  # api:
  #   image: librechat
  #   build:
  #     context: .
  #     target: node

  # # BUILD FROM LATEST IMAGE
  # api:
  #   image: ghcr.io/danny-avila/librechat-dev:latest

  # BUILD FROM LATEST IMAGE (NUMBERED RELEASE)
  # api:
  #   image: ghcr.io/danny-avila/librechat:latest

  # # BUILD FROM LATEST API IMAGE
  # api:
  #   image: ghcr.io/danny-avila/librechat-dev-api:latest

  # BUILD FROM LATEST API IMAGE (NUMBERED RELEASE)
  # api:
  #   image: ghcr.io/danny-avila/librechat-api:latest

  # # ADD MONGO-EXPRESS
  # mongo-express:
  #   image: mongo-express
  #   container_name: mongo-express
  #   environment:
  #     ME_CONFIG_MONGODB_SERVER: mongodb
  #     ME_CONFIG_BASICAUTH_USERNAME: admin
  #     ME_CONFIG_BASICAUTH_PASSWORD: password
  #   ports:
  #     - '8081:8081'
  #   depends_on:
  #     - mongodb
  #   restart: always

  # USE MONGODB V4.4.18 - FOR OLDER CPU WITHOUT AVX SUPPORT
  mongodb:
    image: mongo:4.4.18

  # # DISABLE THE MONGODB CONTAINER - YOU NEED TO SET AN ALTERNATIVE MONGODB URI IN THE .ENV FILE
  # api:
  #   environment:
  #     - MONGO_URI=${MONGO_URI}
  # mongodb:
  #   image: tianon/true
  #   command: ""
  #   entrypoint: ""

  # # EXPOSE MONGODB PORTS - USE CAREFULLY, THIS MAKES YOUR DATABASE VULNERABLE TO ATTACKS
  # mongodb:
  #   ports:
  #     - 27018:27017

  # # DISABLE MEILISEARCH
  # meilisearch:
  #   profiles:
  #     - donotstart

  # # EXPOSE MEILISEARCH PORTS - DO NOT USE THE DEFAULT VALUE FOR THE MASTER KEY IF YOU DO THIS
  # meilisearch:
  #   ports:
  #     - 7700:7700

  # # ADD OLLAMA
  # ollama:
  #   image: ollama/ollama:latest
  #   deploy:
  #     resources:
  #       reservations:
  #         devices:
  #           - driver: nvidia
  #             capabilities: [compute, utility]
  #   ports:
  #     - "11434:11434"
  #   volumes:
  #     - ./ollama:/root/.ollama

  # # ADD LITELLM BASIC - NEED TO CONFIGURE litellm-config.yaml, ONLY NEED ENV TO ENABLE REDIS FOR CACHING OR LANGFUSE FOR MONITORING
  # litellm:
  #   image: ghcr.io/berriai/litellm:main-latest
  #   volumes:
  #     - ./litellm/litellm-config.yaml:/app/config.yaml
  #   command: [ "--config", "/app/config.yaml", "--port", "8000", "--num_workers", "8" ]
  #   environment:
  #     REDIS_HOST: redis
  #     REDIS_PORT: 6379
  #     REDIS_PASSWORD: RedisChangeMe
  #     LANGFUSE_PUBLIC_KEY: pk-lf-RandomStringFromLangfuseWebInterface
  #     LANGFUSE_SECRET_KEY: sk-lf-RandomStringFromLangfuseWebInterface
  #     LANGFUSE_HOST: http://langfuse-server:3000

  # # ADD LITELLM CACHING
  # redis:
  #   image: redis:7-alpine
  #   command:
  #     - sh
  #     - -c # this is to evaluate the $REDIS_PASSWORD from the env
  #     - redis-server --appendonly yes --requirepass $$REDIS_PASSWORD ## $$ because of docker-compose
  #   environment:
  #     REDIS_PASSWORD: RedisChangeMe
  #   volumes:
  #     - ./redis:/data

  # # ADD LITELLM MONITORING
  # langfuse-server:
  #   image: ghcr.io/langfuse/langfuse:latest
  #   depends_on:
  #     - db
  #   ports:
  #     - "3000:3000"
  #   environment:
  #     - NODE_ENV=production
  #     - DATABASE_URL=postgresql://postgres:PostgresChangeMe@db:5432/postgres
  #     - NEXTAUTH_SECRET=ChangeMe
  #     - SALT=ChangeMe
  #     - NEXTAUTH_URL=http://localhost:3000
  #     - TELEMETRY_ENABLED=${TELEMETRY_ENABLED:-true}
  #     - NEXT_PUBLIC_SIGN_UP_DISABLED=${NEXT_PUBLIC_SIGN_UP_DISABLED:-false}
  #     - LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES=${LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES:-false}
  # db:
  #   image: postgres
  #   restart: always
  #   environment:
  #     - POSTGRES_USER=postgres
  #     - POSTGRES_PASSWORD=PostgresChangeMe
  #     - POSTGRES_DB=postgres
  #   volumes:
  #     - ./postgres:/var/lib/postgresql/data
```

Now with that setup, upon login to LibreChat I see the following error message in the console:

```
XHR request to /api/assistants?order=asc: 500 Internal Server Error
Response: {"message":"Error listing assistants"}
```

Also, in the logs (`docker compose logs | less -R`) I can see the following:

```
LibreChat      | 2024-03-23 20:00:37 error: [/assistants] Error listing assistants User-provided key not found
```

And the log file `logs/error-2024-03-23.log` shows this:

```json
{"level":"error","message":"[/assistants] Error listing assistants User-provided key not found","stack":"Error: User-provided key not found\n    at getUserKey (/app/api/server/services/UserService.js:27:11)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async initializeClient (/app/api/server/services/Endpoints/assistants/initializeClient.js:33:18)\n    at async listAssistants (/app/api/server/services/Endpoints/assistants/index.js:19:22)\n    at async Promise.all (index 0)\n    at async listAssistantsForAzure (/app/api/server/services/Endpoints/assistants/index.js:50:27)\n    at async /app/api/server/routes/assistants/assistants.js:160:14"}
```

I cannot see existing assistants, nor can I create new ones using the assistant builder. Only if I comment out the line `ASSISTANTS_API_KEY=user_provided` in `.env` does the error message go away, and I can successfully chat with Azure OpenAI assistants. What alternative do you suggest to resolve the error?
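For what it's worth, the stack trace suggests roughly the following flow. This is a hypothetical, simplified sketch for illustration only: the function names come from the trace above, but the signatures and bodies here are assumptions, not LibreChat's actual source.

```js
// Hypothetical sketch of the flow implied by the stack trace; not the real LibreChat code.
const userKeys = new Map(); // stand-in for the per-user key store

async function getUserKey(userId) {
  const key = userKeys.get(userId);
  if (!key) {
    throw new Error('User-provided key not found'); // the error surfaced in the logs above
  }
  return key;
}

async function initializeClient({ userId, assistantsApiKey }) {
  // With ASSISTANTS_API_KEY=user_provided, a per-user key is required;
  // otherwise the configured key is used directly.
  const apiKey =
    assistantsApiKey === 'user_provided' ? await getUserKey(userId) : assistantsApiKey;
  return { apiKey }; // stand-in for the real OpenAI/Azure client
}

// With the default .env, listing assistants fails for any user who never stored a key:
initializeClient({ userId: 'some-user', assistantsApiKey: 'user_provided' }).catch((err) =>
  console.error('[/assistants] Error listing assistants', err.message),
);
```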

fkohrt commented 3 months ago

@danny-avila Maybe you can have a look at this again?

danny-avila commented 3 months ago

> @danny-avila Maybe you can have a look at this again?

I don't understand what the issue is with doing this:

> Only if I comment out the line `ASSISTANTS_API_KEY=user_provided` in `.env` does the error message go away, and I can successfully chat with Azure OpenAI assistants. What alternative do you suggest to resolve the error?

Setting up Azure through the YAML file implies you are providing the credentials, so there is no need to set `ASSISTANTS_API_KEY=user_provided`.
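In other words, a minimal sketch along these lines is enough on its own (hypothetical resource and variable names; adapt to your deployment), with `ASSISTANTS_API_KEY` left commented out in `.env`:

```yaml
endpoints:
  azureOpenAI:
    assistants: true                       # enable assistants for Azure
    groups:
      - group: "my-azure-resource"         # hypothetical group name
        apiKey: "${MY_AZURE_API_KEY}"      # hypothetical env var you define yourself
        instanceName: "my-azure-resource"  # hypothetical instance name
        version: "2024-03-01-preview"
        assistants: true
        models:
          gpt-4-1106-preview: true
```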

fkohrt commented 3 months ago

My point is that somebody who wants to use Azure OpenAI models needs to know that `ASSISTANTS_API_KEY=user_provided` is set by default and needs to "unset" it in order for their Azure assistants to work.

The documentation could benefit from telling Azure users that they need to make a change to the default `.env`, which is what I was trying to implement in #2173.

danny-avila commented 3 months ago

> My point is that somebody who wants to use Azure OpenAI models needs to know that `ASSISTANTS_API_KEY=user_provided` is set by default and needs to "unset" it in order for their Azure assistants to work.
>
> The documentation could benefit from telling Azure users that they need to make a change to the default `.env`, which is what I was trying to implement in #2173.

Oh I see, sorry for misunderstanding you. This makes a lot more sense now; I think I reviewed that PR in haste with a couple of other things going on, my bad. Glad you brought this up again. I'll review it again and merge.