Hello and thanks for the great work so far,

We're running LibreChat on Docker using the provided images. We've noticed that the rag_api container fails to download "/encodings/cl100k_base.tiktoken" from "openaipublic.blob.core.windows.net" because it isn't using our proxy server. After some searching we found that there is a setting that lets you define a proxy for OpenAI, but not for the other embedding providers:
https://github.com/danny-avila/rag_api/blob/main/config.py#L177-L194
Please add the option to configure a proxy for at least AzureOpenAI as well.
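To illustrate what we have in mind, here is a rough, untested sketch. The environment variable name `RAG_AZURE_OPENAI_PROXY` is made up for this example, and we're assuming `AzureOpenAIEmbeddings` accepts an `http_client` argument that is forwarded to the underlying openai client (please correct us if that's wrong):

```python
# Hypothetical sketch, not actual rag_api code: the env var name and the
# http_client pass-through are assumptions on our part.
import os

def build_azure_embeddings_kwargs() -> dict:
    """Collect kwargs for AzureOpenAIEmbeddings, adding a proxied
    http_client when RAG_AZURE_OPENAI_PROXY is set (made-up name)."""
    kwargs = {"azure_deployment": os.environ.get("EMBEDDINGS_MODEL")}
    proxy_url = os.environ.get("RAG_AZURE_OPENAI_PROXY")
    if proxy_url:
        import httpx  # already pulled in by the openai SDK
        # Assuming the openai client accepts a custom http_client
        # (recent httpx uses the singular `proxy=` keyword).
        kwargs["http_client"] = httpx.Client(proxy=proxy_url)
    return kwargs
```

Something like this would only cover the embeddings API calls themselves, not the tiktoken download below, so it may be only part of the picture.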
EDIT:
Or is there another way to configure rag_api to use a proxy when loading the embeddings for AzureOpenAI?
If so, please let me know :)
I'm not familiar with Python/LangChain and all this, so I'm a bit confused.
-- End edit
Here's the log from the rag_api container:
LOGFILE rag_api ==================================================================================
USER_AGENT environment variable not set, consider setting it to identify your requests.
2024-11-08 08:50:55,373 - root - INFO - Initialized embeddings of type: <class 'langchain_openai.embeddings.azure.AzureOpenAIEmbeddings'>
/app/store_factory.py:22: LangChainPendingDeprecationWarning: Please use JSONB instead of JSON for metadata. This change will allow for more efficient querying that involves filtering based on metadata.Please note that filtering operators have been changed when using JSOB metadata to be prefixed with a $ sign to avoid name collisions with columns. If you're using an existing database, you will need to create adb migration for your metadata column to be JSONB and update your queries to use the new operators.
return AsyncPgVector(
2024-11-08 08:50:55,446 - uvicorn.error - INFO - Started server process [1]
2024-11-08 08:50:55,446 - uvicorn.error - INFO - Waiting for application startup.
2024-11-08 08:50:55,446 - uvicorn.error - INFO - Application startup complete.
2024-11-08 08:50:55,446 - uvicorn.error - INFO - Uvicorn running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
2024-11-08 08:53:36,089 - root - ERROR - HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fd60796abc0>, 'Connection to openaipublic.blob.core.windows.net timed out. (connect timeout=None)'))
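In case it helps with triage: the failing download appears to come from tiktoken fetching the encoding file, and as far as we can tell tiktoken uses requests, which honors the standard proxy environment variables. So an untested workaround sketch would be to export those variables inside the container (the proxy address below is a placeholder):

```python
# Untested workaround sketch: requests (which tiktoken appears to use
# for fetching cl100k_base.tiktoken) honors the standard proxy
# environment variables.
import os

PROXY_URL = "http://proxy.example.com:3128"  # placeholder proxy address

for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"):
    os.environ[var] = PROXY_URL

# Alternatively, pre-download cl100k_base.tiktoken on a machine with
# internet access and point tiktoken at the local copy via its cache dir:
os.environ["TIKTOKEN_CACHE_DIR"] = "/app/tiktoken_cache"
```

We haven't verified this in our setup yet; a first-class proxy setting in config.py would still be the cleaner solution.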