vercel / ai

Build AI-powered applications with React, Svelte, Vue, and Solid
https://sdk.vercel.ai/docs

Add new field `isPending` in useChat #752

Open himself65 opened 10 months ago

himself65 commented 10 months ago

Feature Description

`isLoading` is not enough for some products, since it stays `true` for the entire duration of text streaming.

As shown in the figure, I'd like a new field called `isPending` that is `true` while the server is processing the request and becomes `false` once streaming has started. [figure: request timeline with isPending ending at the first streamed token]
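
For illustration, a hypothetical consumer of the proposed field (`isPending` is not part of `useChat`; the markup below is a placeholder sketch):

import { useChat } from "ai/react";

// Hypothetical: `isPending` is the proposed field, not an existing one.
function Chat() {
  const { messages, isPending } = useChat();
  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      {/* Show a thinking indicator only until the first token streams in. */}
      {isPending && <div>...</div>}
    </div>
  );
}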

Use Case

Here is my downstream PR, which fakes an `isPending` field using `useDeferredValue`:

https://github.com/run-llama/LlamaIndexTS/pull/203

import { useDeferredValue } from "react";

const content = lastMessage?.content ?? "";
const deferredContent = useDeferredValue(content);
// Pending while loading and either no assistant message has arrived yet, or
// the assistant message hasn't started changing (its deferred value still
// matches the current value).
const isPending =
  props.isLoading &&
  (!isLastMessageFromAssistant || content === deferredContent);

Additional context

No response

tgonzales commented 10 months ago

isPending = waiting
isLoading = responding

himself65 commented 10 months ago

isPending = waiting
isLoading = responding

Yeah, that's true

chanmathew commented 10 months ago

Was just struggling to implement this a few days ago; would love to see this added as well 👍

himself65 commented 10 months ago

A better solution:

// `isPending` indicates that the stream response has not yet been received
// from the server, so we show a loading indicator for better UX.
const isPending = props.isLoading && !isLastMessageFromAssistant;
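
Note that `isLastMessageFromAssistant` is not provided by the SDK; a minimal sketch of deriving it, assuming `messages` comes from `useChat`:

const lastMessage = messages[messages.length - 1];
const isLastMessageFromAssistant = lastMessage?.role === "assistant";
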
j-d90 commented 9 months ago

I'm doing the following:

import { useEffect, useState } from "react";

const { messages, handleInputChange, handleSubmit } = useChat();
const [isPending, setPending] = useState(false);

...

// Display a loading icon before the streamed response is received: after the
// user sends a message, the message count stays odd until the assistant's
// reply is appended.
useEffect(() => {
  setPending(messages.length % 2 === 1);
}, [messages]);

I believe this works assuming there's always a user message between consecutive streamed messages. If the agent speaks first in the conversation, you could use messages.length % 2 === 0 in the useEffect instead. Obviously it's just a workaround until the library provides a solution.

trulymittal commented 7 months ago

I am late to reply, but here is my iteration:

const isPending = useMemo(() => messages.length % 2 === 1, [messages]);

ghoshsanjoy78 commented 1 month ago

The parity-based isPending calculation is problematic because the AI sometimes returns more than one message, which throws the modulo-2 check off. Has anyone found a better way?

lgrammel commented 1 month ago

@ghoshsanjoy78 you can check if all of the following are true: isLoading, the last message is an assistant message, and the last message has some content (text or a tool call; text will probably be enough for most use cases).
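
A minimal sketch of that check, assuming `messages` and `isLoading` come from `useChat` and "has some content" means non-empty text:

const lastMessage = messages[messages.length - 1];
const hasStartedStreaming =
  isLoading &&
  lastMessage?.role === "assistant" &&
  lastMessage.content.length > 0;
// Show the thinking indicator only until streaming has started.
const isPending = isLoading && !hasStartedStreaming;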

ghoshsanjoy78 commented 1 month ago

Thanks @lgrammel, but checking whether isLoading is true does not help on its own, since isLoading remains true for the entire duration of streaming. What I want is to show a "..." thinking bubble while the tool calls are being made, and remove it once the response starts streaming.

The original diagram from @himself65 explained this best.

lgrammel commented 1 month ago

Sorry, there was a typo in my message; here is what I meant.

you can check if ALL of the following are true:

ghoshsanjoy78 commented 1 month ago

Yes - that worked! Thanks!

yuriipalam commented 3 weeks ago

@lgrammel are you planning to resolve this issue in the near future?

To be honest, though, this still needs some adjustments. With isPending we can display a loading indicator without blocking the UI, which is great. However, once isPending becomes false, the loading spinner disappears immediately and the actual message is shown. That's not seamless behaviour, so it's obviously not the best solution.

Users are more used to ChatGPT's behaviour, where a message with an avatar and a loading indicator is shown first, and then the text is streamed into it. It's a seamless approach, but it's just impossible to achieve the same behaviour with the current implementation. It's possible to mimic it, but it won't be seamless: even though the seam isn't visible to the user, it's still not the same...
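
One way to approximate that ChatGPT-style pattern (a mimic, per the caveat above; placeholder markup, with `isPending` derived as in the earlier comments):

// Keep one assistant bubble mounted the whole time: it shows a typing
// indicator while pending, and the streamed text then fills the same bubble.
function AssistantBubble({ isPending, text }: { isPending: boolean; text: string }) {
  return (
    <div className="assistant-bubble">
      <span className="avatar">AI</span>
      {isPending ? <span className="typing">...</span> : <span>{text}</span>}
    </div>
  );
}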

ghoshsanjoy78 commented 3 weeks ago

Agree with @yuriipalam. This should be fixed natively in the SDK as the behaviour is still not ideal.

@lgrammel the solution you proposed last still has issues, since it waits for the first message from the assistant to come back before showing the spinner (isPending). That results in a delay, sometimes up to 1-2 seconds, as the tool makes its round trip from the server with the streaming assistant message.

From a user's perspective, the behaviour is: send a message, a 1-2 second delay where nothing happens, then the spinner shows up, then content streams. That delay where nothing happens is not ideal.

It would be great to fix this natively in the SDK itself, as per the original proposal by @himself65.

lgrammel commented 3 weeks ago

@ghoshsanjoy78 even with tool call streaming? https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot-with-tool-calling#tool-call-streaming
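
From the linked docs, tool call streaming is enabled on the server side; a sketch, noting that the option was experimental at the time and names may have changed since:

import { openai } from "@ai-sdk/openai";
import { streamText, tool } from "ai";
import { z } from "zod";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = await streamText({
    model: openai("gpt-4o"), // placeholder model
    messages,
    // Streams partial tool calls to the client so the UI can show progress
    // before the tool round trip completes.
    experimental_toolCallStreaming: true,
    tools: {
      weather: tool({
        description: "Get the weather in a city (placeholder tool)",
        parameters: z.object({ city: z.string() }),
        execute: async ({ city }) => ({ city, forecast: "sunny" }),
      }),
    },
  });
  return result.toDataStreamResponse(); // method name has varied across SDK versions
}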

yuriipalam commented 3 weeks ago

@lgrammel but this is an unstable, experimental API; why would you force users to use something that isn't ready? To be honest, I just switched to using the Web APIs directly instead of abstracting them, and I feel much happier. It wasn't a big deal, actually, so I don't understand why you'd release a library that is quite limited in some aspects and definitely not ready for production use.

ghoshsanjoy78 commented 2 weeks ago

Sorry, I was out for a few days and just checked this thread.

@lgrammel - Yes, adding that experimental feature did help! It's much faster now. Thanks!

@yuriipalam - Nobody is forcing anyone :-) @lgrammel made a suggestion, and I fully understand it's an experimental feature that I'm using at my own risk. Sure, I could switch to direct API calls, but I appreciate the effort here to create a wrapper that makes it so much easier to skip all the boilerplate and get straight to building apps.