chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly Langchain-ChatGLM): a local-knowledge-based RAG and Agent application built with Langchain and LLMs such as ChatGLM, Qwen, and Llama.
Apache License 2.0

[BUG] zhipu-api: some strings are missing when the output is displayed on the frontend page #3038

Closed: musexiaoluo closed this issue 8 months ago

musexiaoluo commented 8 months ago

(Two screenshots were attached showing the frontend output with characters missing from the streamed response.)

dosubot[bot] commented 8 months ago

🤖

Hello @musexiaoluo! I'm Dosu, a bot that can help you solve bugs, answer questions, and become a contributor. While you wait for a human maintainer, I'm here to help. I'll look into your issue right away and get back to you with a complete answer as soon as possible.

musexiaoluo commented 8 months ago

It happens during streaming access.

dosubot[bot] commented 8 months ago

🤖

I'm sorry for the inconvenience you're experiencing. Unfortunately, I'm unable to view the screenshots you've provided as I can't process images. However, based on your description, it seems like you're encountering an issue with missing strings on the frontend page during streaming access with the zhipu-api.

This could be due to the asynchronous nature of the file_chat function in the Langchain-Chatchat application. When the stream parameter is set to True, the function yields chat responses and source documents as they become available. Because the generator is asynchronous, some responses or documents may not yet be available at the moment it yields, which could leave some strings missing on the frontend page.

Here is the relevant code:

if stream:
    async for token in callback.aiter():
        # Use server-sent-events to stream the response
        yield json.dumps({"answer": token}, ensure_ascii=False)
    yield json.dumps({"docs": source_documents}, ensure_ascii=False)
else:
    answer = ""
    async for token in callback.aiter():
        answer += token
    yield json.dumps({"answer": answer,
                      "docs": source_documents},
                     ensure_ascii=False)

In the above code, when stream is True the function yields each token as soon as it becomes available. If some tokens are never delivered to the client, the corresponding text will be missing from the frontend page.

To fix this, you might need to ensure that all tokens are delivered before the response is rendered. That could mean changing how callback.aiter() yields tokens, or modifying the frontend to handle partial responses; a client-side sketch follows below.

You can find this code in the file_chat.py file in the Langchain-Chatchat repository.
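
As a rough illustration of the frontend-side option, here is a minimal client sketch that accumulates the streamed chunks before rendering them. The endpoint path, port, and payload fields are assumptions for illustration and may not match your deployment, and depending on how the response is wrapped you may need to strip an SSE "data: " prefix first:

import json
import requests

# Minimal sketch (not the project's official client): accumulate streamed
# "answer" chunks instead of rendering each one in isolation. The URL and
# payload fields below are illustrative assumptions.
resp = requests.post(
    "http://127.0.0.1:7861/chat/file_chat",
    json={"query": "your question", "knowledge_id": "your_kb_id", "stream": True},
    stream=True,
)

answer, docs = "", []
for line in resp.iter_lines(decode_unicode=True):
    if not line:
        continue
    if line.startswith("data: "):      # handle SSE-style framing if present
        line = line[len("data: "):]
    chunk = json.loads(line)
    if "answer" in chunk:
        answer += chunk["answer"]      # concatenate partial tokens
    if "docs" in chunk:
        docs = chunk["docs"]           # source documents arrive in the final chunk

print(answer)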

I hope this helps! If you need further assistance, please provide more details about the issue.

Macbook-Specter commented 8 months ago

Are you using the zhipu embedding? I'd like to know how you configured it.

zRzRzRzRzRzRzR commented 8 months ago

Streaming isn't enabled precisely because of this problem; the default is non-streaming.

musexiaoluo commented 8 months ago

> Streaming isn't enabled precisely because of this problem; the default is non-streaming.

Hello, have you found the cause and a fix?

musexiaoluo commented 8 months ago

I'm also working on zhipu embedding and haven't gotten it to work. I wanted to wrap it in the openai embedding format, but it threw an error. I made my changes on an older version. If you get it working, please let me know.

ImmNaruto commented 8 months ago

Any ideas on this issue yet? I've also run into it recently and have been trying to solve it, but haven't found a fix so far.

zRzRzRzRzRzRzR commented 8 months ago

At the moment the HTTP request path does seem to have this problem, so only non-streaming works. The SDK doesn't have this issue, but it conflicts with the dependencies of this 0.2.x release.

musexiaoluo commented 8 months ago

I've got a temporary workaround now. After debugging, I found that it's fastchat that truncates the characters.

Temporary workaround:
1. Uninstall the fastchat package and use the fastchat source code directly.
2. Modify openai_api_server.py and add the following check: if model_name == "zhipu-api": delta_text = content["text"]

As shown in the attached screenshot.
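
For reference, a rough sketch of where this check might sit inside FastChat's chat_completion_stream_generator in openai_api_server.py. The surrounding lines are paraphrased from FastChat's streaming-delta logic and can differ between FastChat versions; only the model_name check is the workaround itself, and the assumption that zhipu-api's chunks are not cumulative is inferred from the symptom:

previous_text = ""
async for content in generate_completion_stream(gen_params, worker_addr):
    decoded_unicode = content["text"].replace("\ufffd", "")
    # FastChat derives the streamed delta by assuming the worker returns the
    # cumulative text so far and slicing off what was already sent.
    delta_text = decoded_unicode[len(previous_text):]
    previous_text = decoded_unicode

    # Workaround from this thread: for zhipu-api the sliced delta loses
    # characters (presumably because the worker's chunks are not cumulative),
    # so use the chunk text as-is.
    if model_name == "zhipu-api":
        delta_text = content["text"]

    # ... build the ChatCompletion stream chunk from delta_text as before ...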

musexiaoluo commented 8 months ago

> Any ideas on this issue yet? I've also run into it recently and have been trying to solve it, but haven't found a fix so far.

See my comment below.

musexiaoluo commented 8 months ago

> At the moment the HTTP request path does seem to have this problem, so only non-streaming works. The SDK doesn't have this issue, but it conflicts with the dependencies of this 0.2.x release.

It would be worth taking a look at the SDK source code to see whether it does any special string handling.

ImmNaruto commented 8 months ago

@musexiaoluo Thanks a lot! Modifying the openai server with the second approach solved it for me. Much appreciated.

zRzRzRzRzRzRzR commented 8 months ago

You could open a PR for this. The SDK itself works fine at the moment; it just wasn't added because the compatibility issues couldn't be worked around.

zhjjoin commented 4 months ago

> I've got a temporary workaround now. After debugging, I found that it's fastchat that truncates the characters.
>
> Temporary workaround: 1. Uninstall the fastchat package and use the fastchat source code directly. 2. Modify openai_api_server.py and add the following check: if model_name == "zhipu-api": delta_text = content["text"]
>
> As shown in the attached screenshot.

Thanks! The second method solved the problem for me.