SpellcraftAI / nextjs-openai

Hooks and components for working with OpenAI streams.
https://nextjs-openai.vercel.app
MIT License

the `useTextBuffer` hook is causing an infinite loop #4

Closed: Johnrobmiller closed this 1 year ago

Johnrobmiller commented 1 year ago

I was using `useTextBuffer` in my Next.js page and noticed a bug that seemed important. Not only was the component rerendering in an infinite loop (confirmed via logging), but the endpoint was also getting hit on every rerender.

I debugged by trying the following:

ctjlewis commented 1 year ago

I cannot repro on the demo page: https://nextjs-openai.vercel.app

I believe that React is a horrible technology and the worst thing to happen to the web. It is likely just some psychedelic hook bullshit in your app. If it's in our library (super possible), I can't fix it without being able to see it; a PR or repro would be needed.

Make sure nothing you're passing to useTextBuffer() is updating each render.
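For example (a quick sketch; the `{ url, data }` options shape and the component names here are illustrative, with the options shape taken from the snippet later in this thread), an inline object literal gets a new identity on every render, while a hoisted one stays stable:

import { StreamingText, useTextBuffer } from 'nextjs-openai'

function HaikuUnstable() {
  // Recreated on every render: this inline options object (and the nested
  // data object) gets a new identity each time, so anything depending on it reruns.
  const { buffer } = useTextBuffer({ url: '/api/gpt', data: { prompt: 'Write a haiku' } })
  return <StreamingText buffer={buffer} />
}

// Hoisted out of the component, so its identity never changes between renders.
const STATIC_OPTIONS = { url: '/api/gpt', data: { prompt: 'Write a haiku' } }

function HaikuStable() {
  const { buffer } = useTextBuffer(STATIC_OPTIONS)
  return <StreamingText buffer={buffer} />
}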

ctjlewis commented 1 year ago

Will reopen if you provide a repository demonstrating the bug; just let me know.

ctjlewis commented 1 year ago

Ah, I found that this does happen when the request fails. This has been fixed in 3.0.9.

benmarch commented 1 year ago

This appears to also happen when the `data` option is not a primitive (even with 3.0.9). For example, when I do the following, it goes into an infinite loop:

const { buffer, done } = useTextBuffer({ url: '/api/gpt', data: { prompt } })

This is happening because `data` is in the dependency array of the useEffect inside useBuffer. Whenever my component rerenders, it creates a new `{ prompt }` object, so the useEffect in useBuffer reruns, which then dispatches and causes another rerender.
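As a consumer-side workaround (a sketch, assuming it is specifically the `data` reference identity that the dependency array cares about), memoizing `data` stops the effect from rerunning on every render:

import { useMemo } from 'react'

// Memoize `data` so it only gets a new identity when `prompt` actually changes,
// rather than on every render of this component.
const data = useMemo(() => ({ prompt }), [prompt])
const { buffer, done } = useTextBuffer({ url: '/api/gpt', data })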

As a side note, I think having the hook fetch immediately makes it difficult to use (it realistically needs to live in a child component that is rendered only after the user has provided a prompt). To address both concerns, I'd suggest that instead of making the request on mount, the hook return a function similar to refresh(), e.g. fetch(), that takes some data and only makes the request when the consumer asks it to. Then the usage would look more like this:

import { useEffect, useRef, useState } from 'react'
import { StreamingText, useTextBuffer } from 'nextjs-openai'

export default function RecipeGenerator() {
  const [prompt, setPrompt] = useState('')
  const inputRef = useRef<HTMLInputElement>(null)
  const { fetch, buffer, done } = useTextBuffer({ url: '/api/gpt' }) // does not fetch immediately

  useEffect(() => {
    if (prompt) {
      fetch({ prompt }) // only fetches when the button below is clicked and there is a user-provided prompt
    }
  }, [prompt])

  return (
    <div>
      <input ref={inputRef} />
      <button onClick={() => inputRef.current?.value && setPrompt(inputRef.current.value.trim())}>Ask</button>
      <StreamingText buffer={buffer} fade={600} />
    </div>
  )
}

This way, `data` wouldn't need to be in the useEffect dependency array that causes the infinite loop, and the consumer would have more control over when the request is made.
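For what it's worth, here is a rough sketch of what such a lazy variant could look like internally (purely hypothetical, not the library's actual implementation; the name `useLazyTextBuffer` and the streaming details are made up for illustration):

import { useCallback, useState } from 'react'

// Hypothetical sketch only: `useLazyTextBuffer` is not part of nextjs-openai.
interface UseLazyTextBufferOptions {
  url: string
}

export function useLazyTextBuffer({ url }: UseLazyTextBufferOptions) {
  const [buffer, setBuffer] = useState<string[]>([])
  const [done, setDone] = useState(false)

  // The request only fires when the consumer calls this function, so nothing
  // fetches on mount and `data` never has to live in a useEffect dependency array.
  const fetchBuffer = useCallback(
    async (data?: unknown) => {
      setBuffer([])
      setDone(false)

      const response = await fetch(url, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: data !== undefined ? JSON.stringify(data) : undefined,
      })
      if (!response.ok || !response.body) {
        setDone(true)
        return
      }

      // Append each decoded chunk of the stream to the buffer as it arrives.
      const reader = response.body.getReader()
      const decoder = new TextDecoder()
      while (true) {
        const { value, done: streamDone } = await reader.read()
        if (streamDone) break
        setBuffer((prev) => [...prev, decoder.decode(value, { stream: true })])
      }
      setDone(true)
    },
    [url]
  )

  return { fetch: fetchBuffer, buffer, done }
}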

Anyway, just wanted to provide additional details about this issue because it ran up my API usage a bit, and I don't want others to have the same issue. I otherwise like the library so far; definitely looking forward to a solution to #8.