Closed hooman-bayer closed 1 year ago
Nevermind
According to the LangChain docs, simply set the following env variables for embedding models:

```python
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint>.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
```

and similarly for the chat model.
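As a minimal sketch (the endpoint and key values are placeholders, and the helper name is mine, not part of LangChain), the four settings above can be exported together so none of them is forgotten:

```python
import os

# Placeholder values -- substitute your own Azure OpenAI endpoint and key.
AZURE_OPENAI_SETTINGS = {
    "OPENAI_API_TYPE": "azure",
    "OPENAI_API_BASE": "https://<your-endpoint>.openai.azure.com/",
    "OPENAI_API_KEY": "your AzureOpenAI key",
    "OPENAI_API_VERSION": "2023-03-15-preview",
}

def configure_azure_openai(settings=AZURE_OPENAI_SETTINGS):
    """Export the Azure OpenAI settings as environment variables,
    which LangChain's OpenAI wrappers read at construction time."""
    for name, value in settings.items():
        os.environ[name] = value

configure_azure_openai()
```

This only sets process-level environment variables; the actual model classes are then instantiated as usual.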
Reopening this because the prompt playground
is a very useful feature that does not work with Azure OpenAI yet. Any hints on how to solve it?
Please see below:
```
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>
```
You simply need to pass those environment variables (and a deployment id) to `openai.ChatCompletion.create` when using the Azure OpenAI API:
```python
import openai

# Other env variables
openai.api_type = "azure"
openai.api_key = "<KEY>"
openai.api_base = "<URL of Endpoint>"
openai.api_version = "2023-03-15-preview"

openai.ChatCompletion.create(
    deployment_id=settings.azure_deployment_id,  # Azure requires the deployment name
    model=model,
    messages=[{"role": "user", "content": "Hello"}],
)
```
For a quick local hack, you can hardcode your deployment_id here https://github.com/Chainlit/chainlit/blob/main/src/chainlit/server.py#L91 in your local installation.
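Instead of hardcoding, one sketch of a workaround (my own wrapper, not a Chainlit or OpenAI API) is to wrap the create function so the deployment id is injected into every call unless the caller supplies one:

```python
import functools

def with_deployment_id(create_fn, deployment_id):
    """Return a wrapper around `create_fn` (e.g. openai.ChatCompletion.create)
    that fills in `deployment_id` when the caller omits it."""
    @functools.wraps(create_fn)
    def wrapper(*args, **kwargs):
        kwargs.setdefault("deployment_id", deployment_id)
        return create_fn(*args, **kwargs)
    return wrapper

# Demonstration with a stub instead of the real OpenAI client:
def fake_create(**kwargs):
    return kwargs

create = with_deployment_id(fake_create, "my-gpt35-deployment")
result = create(model="gpt-3.5-turbo")
# result["deployment_id"] == "my-gpt35-deployment"
```

An explicit `deployment_id` passed by the caller still wins, since `setdefault` only fills in missing keys.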
For the real fix, we need to enhance the prompt playground to make it more flexible and compatible with more LLMs.
Hi again, and thanks again for Chainlit! 🥳
Besides OpenAI, a lot of people like me use the Azure OpenAI API (please see the
langchain docs
). But currently it can't be easily used in chainlit.