Closed: abdullahirfandev closed this issue 8 months ago
🤖
Hey @abdullahirfandev! Great to see you diving into another challenge with langchainjs. Your journey with language models really is turning into an epic saga. How's everything going on your end?
Based on the issue you're facing, it seems the problem is a type mismatch between `BytesOutputParser` and `conversation_chain.pipe`. `BytesOutputParser` expects a string input, but `conversation_chain.pipe` produces the `ChainValues` type.

To resolve this, you need to convert the `ChainValues` output to a string before passing it to `BytesOutputParser`. You can do this with a custom `chainValueToString` step that converts `ChainValues` to a string.
Here's how you can implement it:
```ts
import { ChatOpenAI } from "@langchain/openai";
import { BytesOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";
import type { ChainValues } from "@langchain/core/utils/types";

// Convert the chain's ChainValues output to a string so that
// BytesOutputParser receives the input type it expects. Plain functions
// are coerced into runnables by RunnableSequence.from.
const chainValueToString = (chainValues: ChainValues) =>
  JSON.stringify(chainValues);

const chain = RunnableSequence.from([
  new ChatOpenAI({ temperature: 0 }),
  chainValueToString,
  new BytesOutputParser(),
]);

const stream = await chain.stream("Hello there!");
const decoder = new TextDecoder();

for await (const chunk of stream) {
  if (chunk) {
    console.log(decoder.decode(chunk));
  }
}
```
In this code, `chainValueToString` is a custom conversion step that turns `ChainValues` into a string. That string is then passed to `BytesOutputParser`, resolving the type mismatch issue.
This solution is based on the examples provided in the LangChainJS repository, specifically the bytes_output_parser_sequence.ts file.
As for `BytesOutputParser`, it is a type of OutputParser in the LangChain framework that parses the LLMResult (Language Model Result) into the most likely string and then encodes it into bytes. It is used to transform the output of a language model into a byte array (`Uint8Array`), which is useful when the output needs to be sent over a network or stored in a binary format. You can find more details about it in the bytes.ts file in the LangChainJS repository.
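To make that concrete, here is a minimal sketch of streaming a model straight through `BytesOutputParser` (the prompt string and `temperature` setting are just illustrative):

```ts
import { ChatOpenAI } from "@langchain/openai";
import { BytesOutputParser } from "@langchain/core/output_parsers";

// Pipe the model's string output through BytesOutputParser to get
// Uint8Array chunks, e.g. for writing directly to an HTTP response body.
const chain = new ChatOpenAI({ temperature: 0 }).pipe(new BytesOutputParser());

const stream = await chain.stream("Tell me a short joke.");
const decoder = new TextDecoder();
for await (const chunk of stream) {
  process.stdout.write(decoder.decode(chunk));
}
```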
I hope this helps! If you have any other questions or need further clarification, feel free to ask.
The new way to do this is here:
https://js.langchain.com/docs/expression_language/how_to/message_history
The output mismatch and complications around streaming are a big reason why we're deprecating those old chains.
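For anyone finding this later, the linked guide centers on wrapping an LCEL chain in `RunnableWithMessageHistory`. A rough sketch of that pattern (the in-memory, per-session store here is an assumption for illustration, not the guide's exact code):

```ts
import { ChatOpenAI } from "@langchain/openai";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

const chain = prompt.pipe(new ChatOpenAI({ temperature: 0 }));

// One in-memory history per session id (illustrative only; use a
// persistent store in production).
const histories: Record<string, ChatMessageHistory> = {};

const chainWithHistory = new RunnableWithMessageHistory({
  runnable: chain,
  getMessageHistory: (sessionId) => {
    histories[sessionId] ??= new ChatMessageHistory();
    return histories[sessionId];
  },
  inputMessagesKey: "input",
  historyMessagesKey: "history",
});

// Streaming works as with any other runnable; sessionId selects which
// history to read from and append to.
const stream = await chainWithHistory.stream(
  { input: "Hi, my name is Abdullah." },
  { configurable: { sessionId: "demo-session" } }
);
```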
The guide https://sdk.vercel.ai/docs/guides/providers/langchain shows how to stream in Next.js with LangChain using an LLM directly. I am trying to implement the same thing with a ConversationChain (and definitely planning to explore other options as well), but I am stuck: in the provided example, `BytesOutputParser` is used from `@langchain/core/output_parsers`. The code below gives the error "Error: failed to pipe response", which is understandable since I need to pipe the chain stream. However, I can't use `BytesOutputParser` because it only supports string input, while `conversation_chain.pipe` has the `ChainValues` type. So I need help on how to do this, and I need a reference I can consult in the future (I found the LangChain API docs but couldn't find a plausible solution there: https://api.js.langchain.com/index.html). I tried to convert the `IterableReadableStream` to a `ReadableStream` (function `iterableReadableStreamToReadableStream`, code below), but it only responds after the full response has arrived, which defeats the purpose of streaming. I realize the solution is probably out there, but I can't figure out where to look.
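One way around the buffering issue is to enqueue each chunk into the `ReadableStream` as it arrives instead of collecting the whole iterable first. A sketch of that idea (this `asyncIterableToReadableStream` helper is hypothetical, not the `iterableReadableStreamToReadableStream` function mentioned above):

```ts
// Wrap any async iterable (such as LangChain's IterableReadableStream)
// in a web ReadableStream that emits chunks incrementally.
function asyncIterableToReadableStream<T>(
  iterable: AsyncIterable<T>
): ReadableStream<T> {
  const iterator = iterable[Symbol.asyncIterator]();
  return new ReadableStream<T>({
    // pull() is called each time the consumer wants another chunk,
    // so nothing is buffered beyond a single chunk.
    async pull(controller) {
      const { value, done } = await iterator.next();
      if (done) {
        controller.close();
      } else {
        controller.enqueue(value);
      }
    },
    // Propagate consumer cancellation back to the source iterator.
    async cancel() {
      await iterator.return?.();
    },
  });
}
```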