ChatGPTNextWeb / ChatGPT-Next-Web

A cross-platform ChatGPT/Gemini UI (Web / PWA / Linux / Win / MacOS). One-click to get your own cross-platform ChatGPT/Gemini app.
https://app.nextchat.dev/
MIT License

[Bug] Reply streaming gets cut off #4359

Open Reekin opened 5 months ago

Reekin commented 5 months ago

Bug Description

Starting today, both claude3 and gpt4 stop generating halfway through on their own. Even rebuilding an older version (2.11.2) didn't help. This never happened before.

Steps to Reproduce

Send a question that makes the AI reply at some length.

Expected Behavior

Reproduces every time: streaming terminates early.

Screenshots

(screenshot attached in the original issue)

Deployment Method

Desktop OS

Windows 10

Desktop Browser

Chrome

Desktop Browser Version

122.0.6261.129

Smartphone Device

No response

Smartphone OS

No response

Smartphone Browser

No response

Smartphone Browser Version

No response

Additional Logs

No response


AiharaMahiru commented 5 months ago

same problem

fred-bf commented 5 months ago

Are you using the deployment on Vercel or another platform? Could you provide the console logs from both the browser and the server side to help troubleshoot the issue?

hiforrest commented 5 months ago

Windows 10, NextChat 2.11.3, hit the same problem: slightly longer ChatGPT-4 answers (around 200 characters) get cut off before they finish.


CNYoki commented 5 months ago

Same as yours. I'm trying the ChatGLM3 API, and when the service is a little busy, chats with Next-Web get interrupted...

fred-bf commented 4 months ago

@Reekin would you mind trying to configure the max_tokens option on the settings page? I'm not sure whether the response message is being cut off because of the default max_tokens value.

Reekin commented 4 months ago

> @Reekin would you mind trying to configure the max_tokens option on the settings page? I'm not sure whether the response message is being cut off because of the default max_tokens value.

My max_tokens was 4000. The issue did not recur afterwards.
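For readers hitting the same symptom: max_tokens is a hard cap the server applies to the generated reply, and when the cap is smaller than the reply, the stream ends early with finish_reason "length", which looks exactly like a truncated answer. A minimal sketch of how such a cap travels with an OpenAI-style chat request (the helper and its names are illustrative, not NextChat code):

```typescript
// Illustrative sketch of an OpenAI-style chat completion payload.
// Field names follow the public OpenAI API; buildPayload is hypothetical.
interface ChatPayload {
  model: string;
  messages: { role: string; content: string }[];
  stream: boolean;
  max_tokens?: number;
}

function buildPayload(prompt: string, maxTokens?: number): ChatPayload {
  return {
    model: "gpt-4",
    messages: [{ role: "user", content: prompt }],
    stream: true,
    // If max_tokens is too small, the server stops the stream early
    // (finish_reason "length"), which reads as a cut-off reply.
    ...(maxTokens !== undefined ? { max_tokens: maxTokens } : {}),
  };
}
```

Omitting the field leaves the cap at the server/model default, which is why raising it in the settings page is a reasonable first check.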

shansing commented 2 months ago

I found that the code for the streaming-output animation may be causing the interruptions. Specifically, app/client/platforms/openai.ts (other platforms are similar) has an animateResponseText() function that recursively schedules itself via requestAnimationFrame (requestAnimationFrame(animateResponseText)). When the text is long, an uncaught "Maximum update depth exceeded" error is occasionally thrown, and the remaining logic for that request never runs. The error appears to be a defensive mechanism of the framework.

Simply catching that error (as I recall, the tightest try{}catch{} would wrap the get().updateCurrentSession call in the onUpdate invoked by animateResponseText) seems to work around the problem without affecting the output. In the end, though, I deleted the animateResponseText() function entirely in my fork and request a single animation frame only when a new message arrives (onmessage), which avoids the nesting altogether. ref: https://github.com/shansing/ChatGPT-Next-Web/commit/d81fdbf1df5485192141ca1ff6efc3f02f037a9b#diff-6de03d672a0a7c506b48a06a6bec2b3763d7e29f9b9c4fbd490fa2e177fbb01fR264
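The one-frame-per-message approach shansing describes can be sketched as below. This is an illustrative reimplementation, not NextChat's actual code: scheduleFrame stands in for requestAnimationFrame so the sketch runs outside a browser, and StreamRenderer and its members are hypothetical names.

```typescript
// Sketch: schedule at most one animation frame per batch of incoming
// chunks, instead of a loop that reschedules itself recursively.
type FrameScheduler = (cb: () => void) => void;

class StreamRenderer {
  private pending = "";        // text received but not yet rendered
  private rendered = "";       // text already flushed to the UI
  private framePending = false;

  constructor(
    private render: (text: string) => void,
    // requestAnimationFrame in a browser; a timer here so the sketch runs in Node
    private scheduleFrame: FrameScheduler = (cb) => setTimeout(cb, 16),
  ) {}

  // Called from onmessage for each streamed chunk.
  onChunk(chunk: string): void {
    this.pending += chunk;
    if (this.framePending) return; // a frame is already queued; no recursion
    this.framePending = true;
    this.scheduleFrame(() => {
      this.framePending = false;
      this.rendered += this.pending;
      this.pending = "";
      this.render(this.rendered);
    });
  }
}
```

Because a frame is only ever queued in response to a chunk, and never from inside another frame callback, the unbounded requestAnimationFrame recursion (and the framework's "Maximum update depth exceeded" guard) cannot trigger; chunks that arrive while a frame is queued are simply batched into it.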

Xeelix commented 1 month ago

Hello! Has this been solved? The problem still reproduces, and the answer gets cut off :(