What happened?
Hello, I use JetBrains IDEA. Here's a bug: when using the custom OpenAI provider (Mistral AI), the model's response gets cut off after I send a chat prompt. However, when I close the current chat tab and reopen the conversation from the chat history, the previously cut-off response is shown in full. The same issue also occurs frequently with the Google Gemini provider.
Relevant log output or stack trace
No response
Steps to reproduce
1. Open a chat tab (chat 1).
2. Type in any prompt.
3. Send the prompt.
4. Wait for the response to populate.
5. Once the response is complete, review it; sometimes it is cut off and the full response is not displayed.
6. Open a new chat tab (chat 2).
7. Close the original tab (chat 1).
8. Open the chat history and select the conversation for chat 1.
9. The chat 1 conversation appears in the chat 2 tab; review the response. The full response is now visible.
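The behaviour in these steps is consistent with the response being persisted in full while the chat panel misses the final streamed chunk(s). A minimal sketch of that failure mode (hypothetical names, not CodeGPT's actual code; it only illustrates why reloading from history shows the complete text):

```python
# Hypothetical sketch: the persistence layer appends every streamed chunk,
# but the UI stops repainting before the last chunk arrives, so the
# on-screen text stays truncated until the conversation is reloaded.

class ChatTab:
    def __init__(self):
        self.history = ""   # persisted conversation text (always complete)
        self.rendered = ""  # what the UI currently shows

    def on_chunk(self, chunk, ui_alive=True):
        self.history += chunk             # persistence never misses a chunk
        if ui_alive:
            self.rendered = self.history  # repaint only while the handler is live

    def reload_from_history(self):
        self.rendered = self.history      # reopening the tab re-renders everything

tab = ChatTab()
tab.on_chunk("The answer is ")
tab.on_chunk("42.", ui_alive=False)       # e.g. a final stream event is dropped

print(tab.rendered)  # truncated: "The answer is "
tab.reload_from_history()
print(tab.rendered)  # full: "The answer is 42."
```

If this matches the real cause, the bug would be in the streaming-to-UI handoff rather than in the provider response itself, which would explain why it affects more than one provider.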
CodeGPT version
2.12.4-241.1
Operating System
Windows