Closed: Ps7ch3 closed this issue 10 months ago
I believe the root cause is that Azure GPT-4 Turbo did not get a new model name; only the model version was updated to "1106-preview".
See docs for details
And the current GPT-4 base model max token limit is 8K.
This is a known issue. The gpt-4-preview model has two separate limits:
Input: 128,000
Output: 4,096
But we haven't split them into two separate limits in the backend code yet.
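A minimal sketch of what splitting the limits could look like. This is not Dify's actual backend code; all names (`ModelTokenLimits`, `validate_request`) are hypothetical, and the per-model numbers come from the figures quoted above:

```python
# Hypothetical sketch: track input and output token limits separately
# instead of a single max_tokens value per model.
from dataclasses import dataclass


@dataclass
class ModelTokenLimits:
    max_input_tokens: int   # context window available for the prompt
    max_output_tokens: int  # cap on the completion length

# Illustrative table; values taken from the limits mentioned in this thread.
LIMITS = {
    "gpt-4": ModelTokenLimits(max_input_tokens=8192, max_output_tokens=8192),
    "gpt-4-1106-preview": ModelTokenLimits(max_input_tokens=128000,
                                           max_output_tokens=4096),
}


def validate_request(model: str, prompt_tokens: int, requested_output: int) -> None:
    """Reject a request that exceeds either limit, instead of comparing
    everything against one combined max-token number."""
    limits = LIMITS[model]
    if prompt_tokens > limits.max_input_tokens:
        raise ValueError(f"prompt exceeds {limits.max_input_tokens} input tokens")
    if requested_output > limits.max_output_tokens:
        raise ValueError(f"max_tokens exceeds {limits.max_output_tokens} output tokens")
```

With a single shared limit, a 100K-token prompt to gpt-4-1106-preview would wrongly fail (or a 5K completion request would wrongly pass); with the split check above, each side is validated against its own bound.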
I simply caught the raised exception as a temporary workaround.
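As a rough illustration of that kind of temporary workaround (the wrapper name, the exception type, and the fallback value are all placeholders, not the commenter's actual patch):

```python
def invoke_with_fallback(invoke, *args, **kwargs):
    """Wrap a model invocation and swallow the token-limit error.

    `invoke` stands in for the real backend call. Catching the
    exception like this is deliberately crude -- it mirrors a quick
    temporary patch, not proper error handling.
    """
    try:
        return invoke(*args, **kwargs)
    except ValueError as exc:  # placeholder for the raised limit exception
        return f"[token limit error suppressed: {exc}]"
```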
Any plan to add a 'gpt-4-turbo' option to the base model selection, or a custom max model token limit, in the future?
Yes.
Dify version
0.3.32
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Send a long user input message or prompt with Azure GPT-4 (1106-preview).
✔️ Expected Behavior
An extra model option for Azure GPT-4 Turbo is available.
❌ Actual Behavior