Open xiaozhi-ai20 opened 2 weeks ago
To configure the Azure OpenAI key and base_url in the `.env` file, you need to set the following environment variables:

```
AZURE_OPENAI_API_KEY=<your_azure_openai_api_key>
AZURE_OPENAI_ENDPOINT=https://YOUR_RESOURCE_NAME.openai.azure.com/
OPENAI_API_VERSION=<your_api_version>
```

These variables correspond to the `api_token`, `azure_endpoint`, and `api_version` parameters of the `AzureOpenAI` class [1][2].
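As a sketch of how these variables flow into the client: the parameter names (`api_token`, `azure_endpoint`, `api_version`) come from the `AzureOpenAI` class mentioned above, but the loading helper below is hypothetical, shown only to illustrate the mapping:

```python
import os

# Hypothetical helper: map the .env variable names above to the
# AzureOpenAI constructor parameters (api_token, azure_endpoint,
# api_version) described in the answer.
def azure_openai_kwargs(env=os.environ):
    return {
        "api_token": env.get("AZURE_OPENAI_API_KEY"),
        "azure_endpoint": env.get("AZURE_OPENAI_ENDPOINT"),
        "api_version": env.get("OPENAI_API_VERSION"),
    }

# Example with dummy values; in practice the values come from server/.env.
kwargs = azure_openai_kwargs({
    "AZURE_OPENAI_API_KEY": "sk-dummy",
    "AZURE_OPENAI_ENDPOINT": "https://my-resource.openai.azure.com/",
    "OPENAI_API_VERSION": "2024-02-01",
})
print(kwargs["azure_endpoint"])
```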
Regarding the error message about the missing `.env` file: make sure a `.env` file is present in the `server` directory. If it is missing, you can create it by copying the `.env.example` file provided in the repository:
1. Navigate to the root directory of the cloned PandasAI repository:

   ```shell
   cd pandas-ai
   ```

2. Copy the `.env.example` file to `.env` in the `server` directory:

   ```shell
   cp server/.env.example server/.env
   ```

3. Edit the `server/.env` file to update the necessary environment variables, including your Azure OpenAI configuration [3].
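For the original question (a locally deployed vLLM server), a possible `server/.env` sketch. The variable names `OPENAI_API_KEY` and `OPENAI_API_BASE` and the port are assumptions here; check `.env.example` for the names your deployment actually reads:

```shell
# Assumed variable names -- verify against .env.example.
# A local vLLM server usually accepts any API key.
OPENAI_API_KEY=not-needed-for-local-vllm
# vLLM's OpenAI-compatible server defaults to port 8000 with a /v1 prefix.
OPENAI_API_BASE=http://localhost:8000/v1
```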
🚀 The feature
Error message:

```
PS E:\users\xxx\Desktop\xxx_code> cd .\study\llm\pandas-ai-main\
PS E:\users\xxx\Desktop\xxx_code\study\llm\pandas-ai-main> docker-compose build
Failed to load E:\users\xxx\Desktop\xxx_code\study\llm\pandas-ai-main\server.env: open E:\users\xxx\Desktop\xxx_code\study\llm\pandas-ai-main\server.env: The system cannot find the file specified.
```
I want to use a locally deployed vLLM OpenAI-compatible interface for a large model. How do I configure it in the `.env` file?
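As a sketch of what an OpenAI-compatible vLLM endpoint expects (stdlib only, no network call; the base URL and model name are illustrative placeholders, not taken from this repository):

```python
import json
from urllib.parse import urljoin

# vLLM's OpenAI-compatible server exposes the standard chat-completions
# route under a /v1 prefix; localhost:8000 is vLLM's default, and the
# model name is whatever the server was launched with.
base_url = "http://localhost:8000/v1/"
url = urljoin(base_url, "chat/completions")

payload = {
    "model": "my-local-model",  # placeholder
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload)
print(url)
```

A client configured with `base_url` pointing at this address (instead of Azure's endpoint) would send the same request shape to the local server.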
Motivation, pitch
I want to use a locally deployed vLLM OpenAI-compatible interface for a large model. How do I configure it in the `.env` file?
Alternatives
No response
Additional context
No response