Hey @Robert-ZLF! I'm here to assist you with any issues or questions you have regarding LangChain.js. I can help you solve bugs, answer questions, and even guide you on how to become a contributor. Let's take a look at the issue you've posted and see how we can resolve it.
To get `ChatBaiduWenxin` to return the response in a streaming fashion, you need to set the `streaming` property to `true` in the configuration object and provide a callback function to handle the new tokens as they are streamed. Here is an example configuration:
```typescript
import { ChatBaiduWenxin } from "@langchain/community/chat_models/baiduwenxin";
import { HumanMessage } from "@langchain/core/messages";

const config = {
  streaming: true,
  callbacks: [
    {
      // Called once per token as the model streams its response
      async handleLLMNewToken(token: string) {
        console.log(token);
      },
    },
  ],
};

const chat = new ChatBaiduWenxin({
  modelName: "ERNIE-Bot",
  ...config,
});

const messages = [new HumanMessage("Hello!")];
const res = await chat.invoke(messages);
```
In this example, the `handleLLMNewToken` callback function will be called with each new token as it is generated, allowing you to process the response in a streaming fashion.
Additionally, you can use the `.stream()` method to handle streaming responses. This method returns a readable stream that you can iterate over. Here is an example:
```typescript
// .stream() resolves to an async-iterable stream of message chunks
const stream = await chat.stream(messages);
for await (const chunk of stream) {
  console.log(chunk);
}
```
For models that do not support streaming, the entire response will be returned as a single chunk. You can also pipe a chat model into a `StringOutputParser` to extract just the raw string values from each chunk, or use an `HttpResponseOutputParser` to stream bytes directly, such as for returning a stream in an HTTP response [1][2][3].
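For instance, a minimal sketch of the `StringOutputParser` approach might look like the following (reusing the `chat` and `messages` from the example above; the import path reflects recent `@langchain/core` layouts and may differ in your version):

```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";

// Piping the model into a StringOutputParser yields plain strings
// per chunk instead of AIMessageChunk objects.
const stringStream = await chat.pipe(new StringOutputParser()).stream(messages);
for await (const text of stringStream) {
  process.stdout.write(text);
}
```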
To continue the conversation, mention @dosu.
`ChatBaiduWenxin` does not implement `ChatBaiduWenxin.stream`; it will default to calling the parent class and return data in non-streaming mode. If you want to use `stream` mode, you can try this:
```typescript
import { ChatBaiduWenxin } from "@langchain/community/chat_models/baiduwenxin";

let result = "";
const model = new ChatBaiduWenxin({
  model: "ERNIE-Bot", // Available models are shown above
  streaming: true,
  temperature: 1,
  baiduApiKey: BAIDU_API_KEY, // In Node.js defaults to process.env.BAIDU_API_KEY
  baiduSecretKey: BAIDU_SECRET_KEY, // In Node.js defaults to process.env.BAIDU_SECRET_KEY
  callbacks: [
    {
      // Accumulate tokens as they arrive from the streaming response
      async handleLLMNewToken(token: string) {
        result += token;
        console.log(`${token}|`);
      },
    },
  ],
});
```
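With the callback in place, invoking the model should fire `handleLLMNewToken` once per token, so `result` ends up holding the full reply. A minimal (hypothetical) usage sketch:

```typescript
import { HumanMessage } from "@langchain/core/messages";

// Each streamed token triggers the callback above; `result` accumulates them.
await model.invoke([new HumanMessage("Hello!")]);
console.log(result);
```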
@Robert-ZLF
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
The current output returns the entire final result at once; it should instead be returned in streaming mode.
Description
When using `ChatBaiduWenxin`, how can the response be returned in a streaming fashion instead of as one complete result?
System Info
langchain 0.2.2, macOS