Closed Mgrsc closed 1 week ago
👀 @Mgrsc
Thank you for raising an issue. We will look into the matter and get back to you as soon as possible. Please make sure you have given us as much context as possible.
I found a similar closed issue, [Bug] 卡顿非常严重, which was last updated on August 10, 2024. The suggested solution was to open a new issue and provide a video of the lagging problem for targeted troubleshooting [1].
Additionally, there are recent changes in the codebase that could affect performance, especially related to the GPT-4 model integration. The addition of the GPT-4o model and updates to perplexity models and the 01.AI model list could impact performance [2][3].
Lobe Chat handles large outputs from AI models by managing the `max_tokens` parameter in the agent configuration. For example, the `gpt-4-vision-preview` model sets `max_tokens` to 2048 to keep the output manageable. Additionally, an abort controller is used to manage the streaming of assistant messages, which can throttle or stop the generation of messages if needed [4][5][6].
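As a rough illustration, the two mechanisms described above might look like the following TypeScript sketch (the `agentConfig` shape and `streamChunks` helper are illustrative names, not Lobe Chat's actual API):

```typescript
// Sketch only: agentConfig and streamChunks are illustrative,
// not Lobe Chat's actual implementation.

// Capping max_tokens keeps a single response to a manageable size:
const agentConfig = {
  model: "gpt-4-vision-preview",
  params: { max_tokens: 2048 },
};

// An AbortController lets the UI cancel an in-flight streamed response:
async function streamChunks(
  chunks: Iterable<string> | AsyncIterable<string>,
  signal: AbortSignal,
): Promise<string> {
  let output = "";
  for await (const chunk of chunks) {
    if (signal.aborted) break; // stop appending once the user cancels
    output += chunk;
  }
  return output;
}
```

In a real UI, calling `controller.abort()` from a "stop generating" button would end the loop early instead of waiting for the full output.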
For Docker environments, specific configurations such as port mapping, environment variables, and proxy settings can impact performance. Ensure the default mapped port `3210` is not occupied, and properly configure environment variables such as `OPENAI_API_KEY` and `ACCESS_CODE`. If using a proxy, set the `OPENAI_PROXY_URL` environment variable correctly [7][8][9].
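For reference, here is a minimal sketch of how the Node server inside the container might consume these variables (the `loadConfig` helper and its defaults are assumptions for illustration, not the project's actual code; the variable names are the ones documented above):

```typescript
// Illustrative only: loadConfig and its fallback values are assumptions,
// not Lobe Chat's actual implementation.
interface ServerConfig {
  port: number;        // default mapped port is 3210
  apiKey?: string;     // OPENAI_API_KEY
  accessCode?: string; // ACCESS_CODE gates access to the instance
  proxyUrl: string;    // OPENAI_PROXY_URL overrides the OpenAI endpoint
}

function loadConfig(env: Record<string, string | undefined>): ServerConfig {
  return {
    port: Number(env.PORT ?? 3210),
    apiKey: env.OPENAI_API_KEY,
    accessCode: env.ACCESS_CODE,
    proxyUrl: env.OPENAI_PROXY_URL ?? "https://api.openai.com/v1",
  };
}
```

A misconfigured `OPENAI_PROXY_URL` (wrong path or an unreachable host) is a common cause of slow or stalled streaming in Docker deployments, so it is worth verifying first.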
Consider these points to troubleshoot and potentially resolve the performance issues you're experiencing.
To continue talking to Dosu, mention @dosu.
✅ @Mgrsc
This issue is closed. If you have any questions, you can comment and reply.
📦 Environment
Docker
📌 Version
v1.15.18
💻 Operating System
Other Linux
🌐 Browser
Edge
🐛 Bug Description
gpt-4o emits text in large chunks; when the output is long, the entire page freezes and only recovers once generation has finished.