vercel / ai-chatbot

A full-featured, hackable Next.js AI chatbot built by Vercel
https://chat.vercel.ai

ai/rsc with langchain #288

Open · rogerodipo opened this issue 3 months ago

rogerodipo commented 3 months ago

In the docs, the only example we have using LangChain returns a StreamingTextResponse from the api/chat route and consumes it on the frontend with useChat.

https://sdk.vercel.ai/docs/guides/providers/langchain

How can we do the same thing without useChat, using ai/rsc the way the current version of the bot does?
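
For context, the route-handler pattern in that guide looks roughly like this (a sketch from memory; import paths vary across langchain versions):

// app/api/chat/route.ts — the StreamingTextResponse + useChat pattern from the guide (approximate)
import { LangChainStream, StreamingTextResponse, Message } from 'ai'
import { ChatOpenAI } from '@langchain/openai'
import { AIMessage, HumanMessage } from '@langchain/core/messages'

export async function POST(req: Request) {
  const { messages } = await req.json()

  // LangChainStream gives a ReadableStream plus callback handlers that feed it.
  const { stream, handlers } = LangChainStream()

  const llm = new ChatOpenAI({ streaming: true })

  // Fire and forget: tokens arrive through the handlers while the stream is returned immediately.
  llm
    .invoke(
      (messages as Message[]).map(m =>
        m.role === 'user' ? new HumanMessage(m.content) : new AIMessage(m.content)
      ),
      { callbacks: [handlers] }
    )
    .catch(console.error)

  // useChat on the client consumes this streaming response.
  return new StreamingTextResponse(stream)
}

The question is how to get the same streaming behavior through an ai/rsc server action instead of this route handler.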

nikohann commented 3 months ago

I have used the submitUserMessage action with langchain's streamEvents. I may provide a code example later.

rogerodipo commented 3 months ago

Hey @nikohann, great! Could you share the code example as soon as you have a chance, even if it's just an outline? I'm up against a deadline, and this would help me out a lot. Thanks.

nikohann commented 3 months ago

> Hey @nikohann, great! Could you share the code example as soon as you have a chance, even if it's just an outline? I'm up against a deadline, and this would help me out a lot. Thanks.

There are a couple of serious bugs, but I think you will figure them out.

https://js.langchain.com/docs/expression_language/streaming#event-reference

I have used streamEvents to stream output as JSON from function calling.

// Rough imports — package and template paths may differ in your setup:
import { getMutableAIState, createStreamableValue } from 'ai/rsc'
import { ChatOpenAI } from '@langchain/openai'
import { ChatPromptTemplate } from '@langchain/core/prompts'
import { nanoid, runAsyncFnWithoutBlocking } from '@/lib/utils' // helpers from the ai-chatbot template
import { BotMessage } from '@/components/stocks'                // template component; `AI` is the createAI instance defined in this file

async function submitUserMessage(content: string) {
  'use server'

  const aiState = getMutableAIState<typeof AI>()

  // Append the user's message to the AI state.
  aiState.update({
    ...aiState.get(),
    messages: [
      ...aiState.get().messages,
      {
        id: nanoid(),
        role: 'user',
        content
      }
    ]
  })

  // Langchain

  const prompt = ChatPromptTemplate.fromMessages([
    [
      "system",
      "You are a helpful assistant. Be positive and speak about unicorns."
    ],
    ["human", "{input}"],
  ]);

  const llm = new ChatOpenAI({
    modelName: "gpt-4-0125-preview",
    streaming: true,
    temperature: 0.4,
  });

  const chain = prompt.pipe(llm);

  let textStream: undefined | ReturnType<typeof createStreamableValue<string>>
  let textNode: undefined | React.ReactNode

  // Kick off the LangChain run without awaiting it, so the UI node can be returned immediately.
  runAsyncFnWithoutBlocking(async () => {

    if (!textStream) {
      textStream = createStreamableValue('')
      textNode = <BotMessage content={textStream.value} />
    }

    const response = chain.streamEvents({
      input: content,
    }, { version: "v1" })

    for await (const event of response) {
      const eventType = event.event;

      if (eventType === "on_chain_stream") {
        // Each chunk is a delta; the template's BotMessage accumulates deltas on the client.
        // If your message component expects the full text instead, concatenate the chunks here.
        textStream.update(event.data.chunk.content);
      } else if (eventType === "on_llm_end") {
        // Final model output (streamEvents v1 shape).
        const message = event.data.output.generations[0][0].text;

        textStream.done();

        // Persist the finished assistant message in the AI state.
        aiState.done({
          ...aiState.get(),
          messages: [
            ...aiState.get().messages,
            {
              id: nanoid(),
              role: 'assistant',
              content: message
            }
          ]
        })

      }

    }

  })

  return {
    id: nanoid(),
    display: textNode
  }

}
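
On the client side this is consumed without useChat roughly like this (a sketch only; in the template the equivalent logic lives in its prompt form component, and the component and field names here are just illustrative):

// A hypothetical client component calling the server action and rendering the streamed node.
'use client'

import { useActions, useUIState } from 'ai/rsc'
import { nanoid } from 'nanoid' // or the template's nanoid helper
import type { AI } from './actions' // wherever your createAI instance lives

export function ChatInput() {
  const { submitUserMessage } = useActions()
  const [messages, setMessages] = useUIState<typeof AI>()

  async function handleSend(value: string) {
    // Optimistically show the user's message (the template uses its UserMessage component here).
    setMessages(current => [
      ...current,
      { id: nanoid(), display: <div>{value}</div> }
    ])

    // The action streams into the returned node via the streamable value.
    const responseMessage = await submitUserMessage(value)
    setMessages(current => [...current, responseMessage])
  }

  return (
    <form
      onSubmit={async e => {
        e.preventDefault()
        const input = new FormData(e.currentTarget).get('message') as string
        e.currentTarget.reset()
        await handleSend(input)
      }}
    >
      <input name="message" placeholder="Ask about unicorns..." />
    </form>
  )
}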
AmmarByFar commented 1 month ago

> (quoting @nikohann's streamEvents example above)

Yeah, I think I'm running into some really strange bugs. This works totally fine when running locally, but as soon as I push it to production it stops working. For some reason production doesn't seem to stream the results...

Not sure what's going on

elvenking commented 1 month ago

@AmmarByFar @nikohann Hi guys, have you figured out a stable solution? Thank you.