lobehub / lobe-chat

🤯 Lobe Chat - an open-source, modern-design AI chat framework. Supports multiple AI providers (OpenAI / Claude 3 / Gemini / Ollama / Azure / DeepSeek), knowledge base (file upload / knowledge management / RAG), multi-modal (vision / TTS), and a plugin system. One-click FREE deployment of your private ChatGPT / Claude application.
https://chat-preview.lobehub.com

[Bug] Docker ISN'T same as yarn dev #1602

Closed zhuozhiyongde closed 7 months ago

zhuozhiyongde commented 7 months ago

💻 System environment

macOS

📦 Deployment environment

Docker

🌐 Browser

Chrome

🐛 Problem description

The list rendering is actually different between the two!

(screenshot)

Started with yarn dev:

(screenshot)

(screenshot)

Deployed with Docker:

(screenshot)

(screenshot)

🚦 Expected results

I expect the two to render consistently, and hope this can also resolve https://github.com/lobehub/lobe-chat/issues/1161

I originally planned to try fixing it locally, but stumbled upon this problem along the way and am not sure what causes it.

https://github.com/lobehub/lobe-chat/commit/be7554957e69e7d6218de37d87d169760956b5bb#commitcomment-139863071

📷 Steps to reproduce

Test text:

In information theory, "entropy", "surprisal", and "cross-entropy" are all important concepts for measuring the amount of information.

- **Entropy** is a measure of the uncertainty of a random variable, defined as the expectation of the negative logarithm of the probabilities of the variable's possible values. For a discrete random variable $X$, its entropy $H(X)$ can be written as: $$H(X) = -\sum_{i} p(x_i)\log p(x_i)$$ where $p(x_i)$ is the probability that $X$ takes the specific value $x_i$.

- **Surprisal** (also called information content) is the "amount of surprise", or information, conveyed by receiving a specific piece of information; its magnitude is proportional to the negative logarithm of the probability of that information occurring. The surprisal $I(x_i)$ of an event $x_i$ is defined as: $$I(x_i) = -\log p(x_i)$$ where $p(x_i)$ is the probability that event $x_i$ occurs. High surprisal means the event is unlikely, and it therefore carries a large amount of information.

- **Cross-entropy** measures the difference between two probability distributions; in machine learning in particular, it is used to measure the difference between the actual output distribution and the expected output distribution. For discrete random variables, the cross-entropy $H(p, q)$ between two distributions $p(x)$ and $q(x)$ is defined as: $$H(p, q) = -\sum_{i} p(x_i)\log q(x_i)$$ where $p(x_i)$ is the probability of event $x_i$ under the true distribution and $q(x_i)$ is its probability under another distribution (e.g., the one predicted by a model).

In summary, surprisal measures the amount of information conveyed by a specific piece of information, and its magnitude is proportional to the negative logarithm of that information's probability.
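As an aside on the test text itself, the three definitions above can be checked numerically. A minimal sketch in Python (function names are illustrative, not part of any library):

```python
import math

def entropy(p):
    """H(X) = -sum_i p(x_i) * log p(x_i), skipping zero-probability terms."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def surprisal(p_xi):
    """I(x_i) = -log p(x_i)."""
    return -math.log(p_xi)

def cross_entropy(p, q):
    """H(p, q) = -sum_i p(x_i) * log q(x_i)."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

# A uniform distribution over 4 outcomes has entropy log 4 ≈ 1.386 nats,
# and each individual outcome has surprisal -log(1/4) = log 4 as well.
p = [0.25, 0.25, 0.25, 0.25]
print(entropy(p))           # ≈ 1.386
print(surprisal(0.25))      # ≈ 1.386
print(cross_entropy(p, p))  # equals entropy(p) when q == p
```

Note that cross-entropy reduces to entropy when the two distributions coincide, which is a quick sanity check on the formulas.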

📝 Additional information

I have already synced to the latest commit with git reset --hard origin/main.


lobehubbot commented 7 months ago

👀 @zhuozhiyongde

Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible. Please make sure you have given us as much context as possible.

zhuozhiyongde commented 7 months ago

Also, the build now seems to fail:

(screenshot)

zhuozhiyongde commented 7 months ago

After this commit, the accent colors on lists and code blocks are gone; I suspect this is a bug.

@arvinxx

https://github.com/lobehub/lobe-chat/commit/be7554957e69e7d6218de37d87d169760956b5bb#comments


arvinxx commented 7 months ago

It's not a bug; we adjusted the Markdown rendering styles.


zhuozhiyongde commented 7 months ago

> It's not a bug; we adjusted the Markdown rendering styles.

But my local yarn dev still shows the original accent colors? And I do think the original accent colors made things easier to distinguish.


arvinxx commented 7 months ago

@zhuozhiyongde The dependency versions in your local dev environment haven't been upgraded.

> And I do think the original accent colors made things easier to distinguish.

The original accent colors looked somewhat heavy in many code-block scenarios, a bit over-emphasized. If people have that preference, we can add an option to the settings in the future so users can configure it themselves.
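The setting mentioned here does not exist yet; the following is a purely hypothetical sketch of what such a preference toggle could look like, with interface and field names invented for illustration rather than taken from lobe-chat's actual settings API:

```typescript
// Hypothetical settings shape (invented for illustration only).
interface MarkdownRenderSettings {
  // When true, restore the stronger pre-update accent color
  // on lists and code blocks.
  accentColorHighlight: boolean;
}

// Default to the new, lighter style the maintainers adopted.
const defaultSettings: MarkdownRenderSettings = {
  accentColorHighlight: false,
};

console.log(defaultSettings.accentColorHighlight); // prints "false"
```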


lobehubbot commented 7 months ago

✅ @zhuozhiyongde

This issue is closed. If you have any questions, you can comment and reply.