langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License

How to return results in a streaming fashion instead of a whole readable stream with the ChatBaiduWenxin model #4034

Closed YL2014 closed 3 months ago

YL2014 commented 8 months ago

When using Wenxin Yiyan (ERNIE Bot), how can I get results back in a streaming fashion instead of as one whole readable stream? My code is as follows; please take a look.

import { ChatBaiduWenxin } from '@langchain/community/chat_models/baiduwenxin';
import { ChatPromptTemplate } from '@langchain/core/prompts';

const getErnie = (
  apiKey,
  secretKey
) => {
  return new ChatBaiduWenxin({
    modelName: 'ERNIE-Bot-4',
    temperature: 0.8,
    baiduApiKey: apiKey,
    baiduSecretKey: secretKey,
    streaming: true
  });
};

const prompt = ChatPromptTemplate.fromMessages([['user', '{input}']]);

const chatModel = getErnie(apiKey, secretKey);
const chain = prompt.pipe(chatModel);
const res = await chain.stream({ input: '你好,请介绍下你自己,用程序员风格' });

console.log(res); // this logs the stream object itself, not its contents

for await (const chunk of res) {
  console.log('-----', JSON.stringify(chunk));
}

Then the log is as follows:

ReadableStream { locked: false, state: 'readable', supportsBYOB: false }

----- {"lc":1,"type":"constructor","id":["langchain_core","messages","AIMessage"],"kwargs":{"content":"你好,我是一款基于人工智能技术的语言模型,我的中文名是文心一言,英文名是ERNIE Bot,可以协助你完成范围广泛的任务并提供有关各种主题的信息,比如回答问题,提供定义和解释及建议。如果你有任何问题,请随时向我提问。","additional_kwargs":{}}}


dosubot[bot] commented 8 months ago

🤖

Hi @YL2014, and welcome to the LangChain discussion board. I'm Dosu, a bot that can help you solve problems, answer questions, and even become a contributor. While you wait for a reply from a human maintainer, I'll do my best to help. Nice to meet you!

Based on your code, it seems like you have already set the streaming property to true when you instantiate the ChatBaiduWenxin class. This should allow the results to be returned in a streaming fashion.

However, it seems like you are trying to log the entire stream object directly, which is why you are seeing ReadableStream { locked: false, state: 'readable', supportsBYOB: false } in your logs.

To properly handle the stream, you should listen for the data event on the stream and handle each chunk of data as it comes in. Here is an example of how you can do this:

const res = await chain.stream({ input: '你好,请介绍下你自己,用程序员风格' });

res.on('data', (chunk) => {
  console.log('-----', JSON.stringify(chunk));
});

In this example, the data event is emitted whenever a chunk of data is available to be read from the stream. The callback function passed to the on method is called with the chunk of data as its argument. This allows you to handle each chunk of data as it comes in, rather than waiting for the entire stream to be read.

Please try this modification and let me know if it helps. If you have any other questions or run into any issues, feel free to ask.

For more information on handling streams in Node.js, you can refer to the Node.js Stream documentation.
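
For reference, that event-based pattern applies to classic Node.js streams; a generic sketch (plain Node, nothing LangChain-specific):

import { Readable } from 'node:stream';

// Readable.from() builds a stream from any iterable
const s = Readable.from(['hello', ' ', 'world']);

// 'data' fires once per chunk as it becomes available
s.on('data', (chunk) => console.log(chunk));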

As a side note, the log message ----- {"lc":1,"type":"constructor","id":["langchain_core","messages","AIMessage"],"kwargs":{"content":"你好,我是一款基于人工智能技术的语言模型,我的中文名是文心一言,英文名是ERNIE Bot,可以协助你完成范围广泛的任务并提供有关各种主题的信息,比如回答问题,提供定义和解释及建议。如果你有任何问题,请随时向我提问。","additional_kwargs":{}}} you're seeing is the response from the language model, which is being logged in the for await loop. This is expected behavior when the streaming property is set to true.



This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.


YL2014 commented 8 months ago

Property 'on' does not exist on type 'IterableReadableStream<BaseMessageChunk>'

The stream is an IterableReadableStream. Even though I convert it to a Node.js stream, it still arrives as one whole readable stream, so I can't pipe the chunks to the client.
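
Since the stream is an async iterable, what I'm after would look roughly like the sketch below, forwarding chunks as server-sent events (the Express app, the /chat route, and chain are placeholders carried over from my snippet above, not working code from this thread):

import express from 'express';

const app = express();
app.use(express.json());

app.post('/chat', async (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  const stream = await chain.stream({ input: req.body.input });
  // forward each chunk to the client as soon as it arrives
  for await (const chunk of stream) {
    res.write(`data: ${JSON.stringify(chunk.content)}\n\n`);
  }
  res.end();
});

But this only helps if the model actually emits multiple chunks; here everything arrives as a single chunk.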

anurag-pandey-007 commented 8 months ago

@YL2014 I'm also facing this same issue. Did you find a solution for this?

movila commented 7 months ago

I got a similar error: Property 'values' is missing in type 'IterableReadableStream<BaseMessageChunk>' but required in type 'ReadableStream<any>'
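
One sketch of a workaround for this kind of type mismatch: since IterableReadableStream implements the async-iterator protocol, Node's Readable.from() can adapt it to a classic Node.js stream (assuming Node, with chain as in the original post):

import { Readable } from 'node:stream';

const stream = await chain.stream({ input: '你好' });

// Readable.from() accepts any async iterable and defaults to object mode,
// so each AIMessageChunk arrives as its own 'data' event
const nodeStream = Readable.from(stream);
nodeStream.on('data', (chunk) => {
  console.log('-----', JSON.stringify(chunk));
});

This resolves the type error, but it doesn't change how many chunks the model actually emits, which turns out to be the real problem (see below).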

andrewnguonly commented 6 months ago

Based on the implementation of ChatBaiduWenxin, streaming of response chunks doesn't appear to be implemented correctly.

  1. The _generate() function simply awaits a ChatCompletionResponse: https://github.com/langchain-ai/langchainjs/blob/main/libs/langchain-community/src/chat_models/baiduwenxin.ts#L343
  2. All of the "streaming" logic is implemented in completionWithRetry(): https://github.com/langchain-ai/langchainjs/blob/main/libs/langchain-community/src/chat_models/baiduwenxin.ts#L460

I would expect an implementation similar to ChatOllama and ChatOpenAI where the method *_streamResponseChunks() is implemented and yields ChatGenerationChunk instances.
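
A rough sketch of what such a method might look like for ChatBaiduWenxin, modeled on that pattern (the createWenxinStream helper and the exact shape of the streamed payloads are assumptions, not the actual internals):

import { AIMessageChunk, type BaseMessage } from '@langchain/core/messages';
import { ChatGenerationChunk } from '@langchain/core/outputs';
import { type CallbackManagerForLLMRun } from '@langchain/core/callbacks/manager';

// inside the ChatBaiduWenxin class:
async *_streamResponseChunks(
  messages: BaseMessage[],
  options: this['ParsedCallOptions'],
  runManager?: CallbackManagerForLLMRun
): AsyncGenerator<ChatGenerationChunk> {
  // hypothetical helper that POSTs with stream: true and yields parsed SSE events
  const stream = await this.createWenxinStream(messages, options);
  for await (const data of stream) {
    // ERNIE responses carry the generated text in `result`
    const text = data.result ?? '';
    const chunk = new ChatGenerationChunk({
      text,
      message: new AIMessageChunk({ content: text }),
    });
    yield chunk;
    // surface each token to LangChain's callback system
    await runManager?.handleLLMNewToken(text);
  }
}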