ninehills / langchain-wenxin

langchain baidu wenxinworkshop wrapper
MIT License

Duplicate-reply bug when chatting with stream=true #5

Closed · Chenjm08 closed 1 year ago

Chenjm08 commented 1 year ago

With streaming=True set as shown below, replies come back duplicated during chat.

ChatWenxin(model=model, temperature=0.5, streaming=True)
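
For context, a minimal async repro sketch; the import path, model name, and callback wiring below are assumptions for illustration, not confirmed by this thread:

import asyncio

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage
from langchain_wenxin.chat_models import ChatWenxin  # assumed import path

async def main():
    chat = ChatWenxin(
        model="ernie-bot",  # hypothetical model name
        temperature=0.5,
        streaming=True,
        callbacks=[StreamingStdOutCallbackHandler()],
    )
    # agenerate() routes through the async _agenerate() quoted below; with the
    # buggy code, the returned message contains the reply twice.
    result = await chat.agenerate([[HumanMessage(content="你好")]])
    print(result.generations[0][0].text)

asyncio.run(main())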

It worked again after I commented out a block of code in chat_models.py, though I'm not sure whether that causes other problems. The patched method is shown below:

async def _agenerate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        prompt, history = self._convert_messages_to_prompt(messages)
        params: Dict[str, Any] = {
            "model": self.model,
            "prompt": prompt,
            "history": history, **self._default_params, **kwargs}

        if self.streaming:
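            # Streaming path: accumulate the reply chunk by chunk from the async stream.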
            completion = ""
            stream_resp = self.client.acompletion_stream(**params)
            async for data in stream_resp:
                delta = data["result"]
                completion += delta
                if run_manager:
                    await run_manager.on_llm_new_token(
                        delta,
                    )
            # stream_resp = self.client.completion_stream(**params)
            # for delta in stream_resp:
            #     result = delta["result"]
            #     completion += result
            #     if run_manager:
            #         await run_manager.on_llm_new_token(
            #             result,
            #         )
        else:
            response = await self.client.acompletion(**params)
            completion = response["result"]
        message = AIMessage(content=completion)
        return ChatResult(generations=[ChatGeneration(message=message)])
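
For reference, the duplication came from the streaming branch consuming the reply twice: once asynchronously via acompletion_stream and then a second time via completion_stream, so every chunk was appended to completion twice. A self-contained sketch of that pattern (the fake_* generators are stand-ins for the client, not the real API):

import asyncio
from typing import AsyncIterator, Iterator

CHUNKS = ("Hello", ", ", "world")

async def fake_acompletion_stream() -> AsyncIterator[dict]:
    # Stand-in for client.acompletion_stream(): yields the reply in chunks.
    for chunk in CHUNKS:
        yield {"result": chunk}

def fake_completion_stream() -> Iterator[dict]:
    # Stand-in for client.completion_stream(): the same reply, synchronously.
    yield from ({"result": chunk} for chunk in CHUNKS)

async def buggy() -> str:
    completion = ""
    async for data in fake_acompletion_stream():  # first pass
        completion += data["result"]
    for data in fake_completion_stream():  # second pass: duplicates the reply
        completion += data["result"]
    return completion

async def fixed() -> str:
    completion = ""
    async for data in fake_acompletion_stream():  # single pass only
        completion += data["result"]
    return completion

print(asyncio.run(buggy()))  # Hello, worldHello, world  <- reply repeated
print(asyncio.run(fixed()))  # Hello, world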
ninehills commented 1 year ago

Got it, this should be fixed within the week.

ninehills commented 1 year ago

This should already have been fixed before this was posted; just use the latest version.