MP242 opened this issue 2 months ago
Hey there, @MP242! I'm here to assist you with this bug 🐞.

To get `agent.streamEvents` to return a streaming response on the final output of the LLM in your Next.js app using the `langgraph` agent with Ollama and the `llama3.1` model, you can use the `stream` method provided by the `ChatOllama` class. Here is an example of how you can implement this:
```typescript
import { ChatOllama } from '@langchain/ollama';

const llm = new ChatOllama({
  model: "llama3.1",
  streaming: true,
  // other params...
});

const input = `Your input prompt here`;

async function streamLLMResponse() {
  for await (const chunk of await llm.stream(input)) {
    console.log(chunk);
  }
}

streamLLMResponse();
```
This code sets up the `ChatOllama` instance with the `llama3.1` model and enables streaming. The `streamLLMResponse` function then streams the response chunks from the LLM and logs each chunk to the console. The `ChatOllama` class handles the streaming of responses by utilizing the `_streamResponseChunks` method, which processes the input messages and streams the response chunks accordingly. The `Ollama` class in the `langchain` library provides the necessary infrastructure to handle streaming responses, ensuring that each chunk of the response is processed and yielded as it becomes available [1][2][3].
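If you also need the complete final output (for example to log or persist the full answer) rather than only the incremental chunks, the chunks can be merged as they arrive. This is a minimal sketch assuming the same `ChatOllama` setup as above; the helper name `collectFinalOutput` is illustrative, not part of the library:

```typescript
import { ChatOllama } from '@langchain/ollama';
import type { AIMessageChunk } from '@langchain/core/messages';

const llm = new ChatOllama({ model: "llama3.1" });

// Merge the streamed chunks into one message to recover the complete final output.
async function collectFinalOutput(prompt: string): Promise<string> {
  let finalMessage: AIMessageChunk | undefined;
  for await (const chunk of await llm.stream(prompt)) {
    finalMessage = finalMessage === undefined ? chunk : finalMessage.concat(chunk);
  }
  return typeof finalMessage?.content === "string" ? finalMessage.content : "";
}
```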
Make sure that your `ChatOllama` instance is correctly configured and that the `streaming` parameter is set to `true`. This should enable the streaming functionality for the final output of the LLM.
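To surface those chunks to the client from a Next.js App Router route, one option is to forward them through a web `ReadableStream` and return it as the response body. This is a sketch under assumptions (the route shape, the `prompt` field, and the plain-text framing are illustrative, not taken from this thread):

```typescript
import { ChatOllama } from '@langchain/ollama';

export async function POST(req: Request) {
  const { prompt } = await req.json();
  const llm = new ChatOllama({ model: "llama3.1" });

  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      // Forward each LLM chunk to the HTTP response body as it arrives.
      for await (const chunk of await llm.stream(prompt)) {
        if (typeof chunk.content === "string") {
          controller.enqueue(encoder.encode(chunk.content));
        }
      }
      controller.close();
    },
  });

  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```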
To continue talking to Dosu, mention @dosu.
Checked other resources
Example Code
The following code:

```typescript
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  try {
    const myReq = await req.json();
    const { messages } = myReq;
  } catch (error) {
    console.error("Error:", error);
    return new NextResponse(
      // "An error occurred while processing your request."
      "Une erreur s'est produite lors du traitement de votre requête.",
      { status: 500 }
    );
  }
}
```
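For the `agent.streamEvents` case this issue is about, one possible direction (a sketch only; the `createReactAgent` setup, the empty `tools` array, and the event filtering are assumptions, not confirmed by this thread) is to iterate the v2 event stream, keep just the model's token chunks, and then pipe those tokens into a `ReadableStream` the same way as with a plain `llm.stream` call:

```typescript
import { ChatOllama } from '@langchain/ollama';
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import type { BaseMessageLike } from '@langchain/core/messages';

const llm = new ChatOllama({ model: "llama3.1" });
const agent = createReactAgent({ llm, tools: [] }); // tools: [] is a placeholder

// Yield only the string token content from "on_chat_model_stream" events so the
// caller can encode and enqueue it into the HTTP response stream.
async function* finalOutputTokens(messages: BaseMessageLike[]) {
  for await (const event of agent.streamEvents({ messages }, { version: "v2" })) {
    if (event.event === "on_chat_model_stream") {
      const token = event.data.chunk?.content;
      if (typeof token === "string") {
        yield token;
      }
    }
  }
}
```

Inside the `POST` handler above, each yielded token could then be encoded with a `TextEncoder` and enqueued into a `ReadableStream` before closing the controller. Note that in an agent with tools, `on_chat_model_stream` fires for every model call, so additional filtering (for example by the emitting node's name in `event.metadata`) may be needed to keep only the final answer.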