A Solution Accelerator for the RAG pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences. It includes the most common requirements and best practices.
Purpose
GPT-4 has a default max tokens of 16, meaning the generated captions were very short. This PR adds AZURE_OPENAI_MAX_TOKENS to the chat completions call, so it defaults to 1000 tokens and can be configured.

Does this introduce a breaking change?
How to Test
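One way to verify the behavior described above is to check how the max-tokens value is resolved. The sketch below is a minimal illustration, not the accelerator's actual code: `resolve_max_tokens` and `build_chat_kwargs` are hypothetical helpers, and the assumption is that `AZURE_OPENAI_MAX_TOKENS` is read from the environment with a fallback of 1000 before being passed to the Azure OpenAI chat completions call.

```python
import os

def resolve_max_tokens(env=os.environ) -> int:
    """Return max_tokens for chat completions.

    AZURE_OPENAI_MAX_TOKENS is the setting added by this PR; the 1000-token
    fallback replaces the model default of 16 that truncated captions.
    """
    return int(env.get("AZURE_OPENAI_MAX_TOKENS", "1000"))

def build_chat_kwargs(messages, env=os.environ) -> dict:
    # These kwargs would be forwarded to the chat completions call,
    # e.g. client.chat.completions.create(**kwargs) with the openai SDK.
    return {
        "messages": messages,
        "max_tokens": resolve_max_tokens(env),
    }

# Unset -> falls back to 1000 tokens
print(resolve_max_tokens({}))
# Explicitly configured -> uses the configured value
print(resolve_max_tokens({"AZURE_OPENAI_MAX_TOKENS": "256"}))
```

To test end to end, deploy without setting the variable and confirm responses are no longer truncated at 16 tokens, then set `AZURE_OPENAI_MAX_TOKENS` to a custom value and confirm it takes effect.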