itsamejoshab opened 2 days ago
Title: [Bug]
Deployment Method
Official installation package
Version
2.15.2
Operating System
macOS
System Version
14.5
Browser
Chrome
Browser Version
128.0.6613.138
Bug Description
Configuring the NextChat packaged client to work with Azure causes the response text to be cut off in the chat.
This was not an issue with older versions of the client; it started after I upgraded to 2.15.2.
Recurrence Steps
Model Provider: Azure
Azure Endpoint: https://{resource-url}/openai
Custom Models: -all,{modelname}@Azure={deploymentName}
Max Tokens: 4000
Attached Message Count: 5
History Compression Threshold: 5000
Memory Prompt: yes
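To help isolate whether the truncation happens in the client or at the service, the same deployment can be queried directly with curl. This is a minimal sketch, assuming a hypothetical resource name `my-resource`, deployment name `gpt-4o-dep`, and an `AZURE_OPENAI_API_KEY` environment variable; substitute your actual values:

```shell
# Hypothetical resource and deployment names for illustration only.
RESOURCE="my-resource"
DEPLOYMENT="gpt-4o-dep"
URL="https://${RESOURCE}.openai.azure.com/openai/deployments/${DEPLOYMENT}/chat/completions?api-version=2024-02-01"

# Only call the service when a key is actually configured.
if [ -n "${AZURE_OPENAI_API_KEY:-}" ]; then
  # Ask for a long answer so any truncation is obvious.
  curl "$URL" \
    -H "Content-Type: application/json" \
    -H "api-key: ${AZURE_OPENAI_API_KEY}" \
    -d '{"messages":[{"role":"user","content":"Write a 200-word summary of TCP."}],"max_tokens":4000}'
fi
```

If the full completion comes back here but is cut off in NextChat, that points at the client's handling of the streamed response rather than the Azure deployment itself.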
Expected Behavior
The entire response should be shown, instead of only the first 10-15 tokens or so.
Additional Information
No response
I am pretty sure these are network issues caused by the instability of connecting directly to the endpoint. This is unlike going through an external relay endpoint, which has been very stable (as tested with One API hosted on my site https://oneapi.b0zal.io/).