lobehub / lobe-chat

🤯 Lobe Chat - an open-source, modern-design AI chat framework. Supports Multi AI Providers (OpenAI / Claude 3 / Gemini / Ollama / Azure / DeepSeek), Knowledge Base (file upload / knowledge management / RAG), Multi-Modals (Vision / TTS) and plugin system. One-click FREE deployment of your private ChatGPT / Claude application.
https://chat-preview.lobehub.com

Opening the settings page is laggy [Bug] #2516

Closed · BruceLee569 closed this issue 3 months ago

BruceLee569 commented 4 months ago

💻 System environment

Windows

📦 Deployment environment

Official Preview

🌐 Browser

Chrome

🐛 Problem description

Every time I open the settings, a request is sent to the server, which makes the settings page lag while rendering. When no request is necessary, could the page be displayed first?
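
A minimal sketch of the pattern being requested here, assuming an SWR-style hook: render the cached settings immediately and only revalidate in the background, so opening the page does not block on the server round trip. The endpoint and fetcher names below are hypothetical, not lobe-chat's actual code.

```ts
import useSWR from 'swr';

// Hypothetical fetcher; lobe-chat's real settings endpoint may differ.
const fetchUserSettings = (url: string) => fetch(url).then((res) => res.json());

export const useUserSettings = () =>
  useSWR('/api/user/settings', fetchUserSettings, {
    revalidateOnFocus: false, // don't re-request every time the tab regains focus
    keepPreviousData: true,   // keep showing the last loaded settings while revalidating
  });
```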

🚦 Expected result

No response

📷 Steps to reproduce

No response

📝 Additional information

No response

ShinChven commented 3 months ago

Did you add some kind of SSR? A pure SPA shouldn't be this laggy.

arvinxx commented 3 months ago

Did you add some kind of SSR? A pure SPA shouldn't be this laggy.

It has nothing to do with SSR. Next.js's default routing system is an MPA (multi-page application), which in theory will be slower than an SPA.
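
As a hedged illustration of that point (not a change lobe-chat makes): with the Next.js App Router each navigation still fetches the target route from the server, but `next/link` prefetching can hide much of that cost, which is the usual way to make MPA-style routing feel closer to an SPA.

```tsx
import Link from 'next/link';

// Prefetching the /settings route ahead of the click reduces the perceived
// navigation delay; the route path here is illustrative only.
export const SettingsEntry = () => (
  <Link href="/settings" prefetch>
    Open settings
  </Link>
);
```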

dongjb commented 3 months ago

I deployed it on a server in Singapore. With a cache, access is okay, about 2 s, but switching between pages isn't very smooth and there is lag. Also, I pulled the code and ran it locally, and startup is extremely slow, really slow; opening a single page takes a good 2 to 3 minutes...

Configuration: i9 Mac, 64 GB RAM

ShinChven commented 3 months ago

I deployed it on a server in Singapore. With a cache, access is okay, about 2 s, but switching between pages isn't very smooth and there is lag. Also, I pulled the code and ran it locally, and startup is extremely slow, really slow; opening a single page takes a good 2 to 3 minutes...

Configuration: i9 Mac, 64 GB RAM

Singapore +1; my NextChat and Lobe deployments are on the same server.

I recently recommended Lobe to a friend, and just yesterday they told me it wouldn't load.

[screenshot]
ShinChven commented 3 months ago

Did you add some kind of SSR? A pure SPA shouldn't be this laggy.

It has nothing to do with SSR. Next.js's default routing system is an MPA (multi-page application), which in theory will be slower than an SPA.

How about refactoring with Vite + React Router? It's very smooth, and mounted as Express middleware it can still serve an MPA.

https://github.com/ShinChven/druid/blob/main/app/src/views/index.ts
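
A minimal sketch of the setup ShinChven describes, assuming a standard Vite build served from Express; the paths and port are illustrative and not taken from the linked repo.

```ts
import express from 'express';
import path from 'path';

const app = express();
const dist = path.resolve(process.cwd(), 'dist'); // output of `vite build` (assumed location)

// Serve the static assets produced by Vite.
app.use(express.static(dist));

// History-API fallback so React Router can handle client-side routes.
app.use((_req, res) => {
  res.sendFile(path.join(dist, 'index.html'));
});

app.listen(3000, () => console.log('SPA served at http://localhost:3000'));
```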

arvinxx commented 3 months ago

@ShinChven Not planned; Next is much better than RR (React Router).

lobehubbot commented 3 months ago

✅ @BruceLee569

This issue is closed. If you have any questions, you can comment and reply.

anuxs commented 1 month ago

With a cache the experience is passable, but the loading speed on the first visit is really slow: on PC it takes more than a minute to finish rendering. I've compared ChatGPT-Next-Web and chatgpt-web-midjourney-proxy; in a PC browser they only need three or four seconds, and on mobile Lobe is also at least twice as slow as they are, including the PWA. I'm embarrassed to recommend it to friends; the user experience is worrying. I hope more testing and optimization can be done.

Test client: Windows 11, Chrome browser. Test links: https://ai.jdz-ceramic.com/chat https://ai1.jdz-ceramic.com/ https://ai2.jdz-ceramic.com/ Test server: BandwagonHost, 2 cores / 2 GB. Deployment method: Docker image

1716173717854-1716173266494-20240520_104413.mp4

The lag really hurts the user experience. Several lag-related issues have already been closed, but as of today, 2024-07-28, the slow first visit has not improved; it takes several minutes just to get into the home page.

I deploy on Vercel, and if there are no requests for a while the platform unloads the Next.js instance, so the next visit is a cold first visit to the home page again and I have to wait several minutes before I can start using it. Sometimes an idea to ask the AI only lasts a few seconds; waiting several minutes for the page to open breaks my train of thought, and after doing something else during the wait I've already forgotten what I wanted to open LobeChat for.
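
One common workaround for the cold-start part of this (not something lobe-chat documents): expose a trivial route and schedule a periodic request against it so Vercel is less likely to unload the instance. A hedged sketch with a hypothetical route path; how frequently you may schedule the cron depends on your Vercel plan.

```ts
// app/api/keep-warm/route.ts (hypothetical path)
// A trivial App Router route handler that a scheduled job can ping.
export const GET = async () => new Response('ok', { status: 200 });

// In vercel.json, a cron entry such as
//   { "crons": [{ "path": "/api/keep-warm", "schedule": "0 * * * *" }] }
// would hit this route hourly, keeping the deployment warm.
```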

ShinChven commented 1 month ago

With a cache the experience is passable, but the loading speed on the first visit is really slow: on PC it takes more than a minute to finish rendering. I've compared ChatGPT-Next-Web and chatgpt-web-midjourney-proxy; in a PC browser they only need three or four seconds, and on mobile Lobe is also at least twice as slow as they are, including the PWA. I'm embarrassed to recommend it to friends; the user experience is worrying. I hope more testing and optimization can be done. Test client: Windows 11, Chrome browser. Test links: https://ai.jdz-ceramic.com/chat https://ai1.jdz-ceramic.com/ https://ai2.jdz-ceramic.com/ Test server: BandwagonHost, 2 cores / 2 GB. Deployment method: Docker image 1716173717854-1716173266494-20240520_104413.mp4

The lag really hurts the user experience. Several lag-related issues have already been closed, but as of today, 2024-07-28, the slow first visit has not improved; it takes several minutes just to get into the home page. I deploy on Vercel, and if there are no requests for a while the platform unloads the Next.js instance, so the next visit is a cold first visit to the home page again and I have to wait several minutes before I can start using it. Sometimes an idea to ask the AI only lasts a few seconds; waiting several minutes for the page to open breaks my train of thought, and after doing something else during the wait I've already forgotten what I wanted to open LobeChat for.

Since I recently moved abroad, the speed has felt quite good to me. When all is said and done, we still have the motherland to thank.
