run-llama / sec-insights

A real world full-stack application using LlamaIndex
https://www.secinsights.ai/
MIT License
2.32k stars 631 forks

Update LlamaIndex to v0.9.7 + fix streaming bug #80

Closed · sourabhdesai closed this 9 months ago

sourabhdesai commented 9 months ago

Some folks on Discord have been patiently waiting for me to investigate and fix an issue that occurs after updating the llama-index version. They were seeing the following error on the backend when trying to receive a response to a user message:

```
Tried sending SubProcess event (source=MessageSubProcessSourceEnum.AGENT_STEP) after channel was closed
Traceback (most recent call last):
  File "/Users/sourabhdesai/workspace/sec-insights/backend/app/chat/messaging.py", line 106, in async_on_event
    await self._send_chan.send(
  File "/Users/sourabhdesai/Library/Caches/pypoetry/virtualenvs/llama-app-backend-X_qql02h-py3.11/lib/python3.11/site-packages/anyio/streams/memory.py", line 213, in send
    self.send_nowait(item)
  File "/Users/sourabhdesai/Library/Caches/pypoetry/virtualenvs/llama-app-backend-X_qql02h-py3.11/lib/python3.11/site-packages/anyio/streams/memory.py", line 195, in send_nowait
    raise ClosedResourceError
anyio.ClosedResourceError
```
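For anyone unfamiliar with anyio's memory streams, here's a minimal sketch (my own repro, not the app's code) of why that exception fires: once the send channel has been closed, any late `send()` raises `ClosedResourceError`, which is exactly what happens when an agent callback fires after the response channel has been torn down:

```python
# Minimal repro sketch (plain anyio; not sec-insights code).
import anyio

async def main() -> None:
    send_chan, recv_chan = anyio.create_memory_object_stream(max_buffer_size=10)
    await send_chan.send("agent event")  # fine while the channel is open
    await send_chan.aclose()             # response finished; channel torn down
    try:
        # A callback that fires after close lands here:
        await send_chan.send("late agent event")
    except anyio.ClosedResourceError:
        # Mirrors the log line above (the wording there is the app's, not anyio's)
        print("Tried sending SubProcess event after channel was closed")

anyio.run(main)
```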

I did some investigation, and only by stepping through with a debugger did I find that an error was being thrown on these lines of the OpenAI LLM class and then swallowed rather than printed. That error was:

```
TypeError: AsyncCompletions.create() got an unexpected keyword argument 'api_key'
```
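Here's a hedged sketch of that failure mode against openai-python v1 (dummy key; the TypeError is raised at argument-binding time, so no request is ever sent): `api_key` is a client-level setting, and `chat.completions.create()` has an explicit signature, so a stray `api_key` kwarg forwarded from `additional_kwargs` is rejected immediately:

```python
# Sketch of the failure mode (assumption: openai-python >= 1.x; key is a dummy).
import asyncio
from openai import AsyncOpenAI

async def main() -> None:
    client = AsyncOpenAI(api_key="sk-dummy")  # api_key belongs here, on the client
    try:
        await client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "hi"}],
            api_key="sk-dummy",  # stray extra kwarg; rejected by the signature
        )
    except TypeError as exc:
        # -> AsyncCompletions.create() got an unexpected keyword argument 'api_key'
        print(exc)

asyncio.run(main())
```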

In a much older version of llama-index, there was a bug where the api_key constructor parameter was being ignored. As a workaround, we had passed the api_key through the additional_kwargs constructor parameter instead. Fast forward to the current version of llama-index: the bug with the ignored api_key constructor param has been fixed, and the new OpenAI client library now validates the parameters you can pass to its chat completion APIs more strictly, hence the error above. The fix was simply to remove the lines here and here where we were passing the extra API key value in additional_kwargs; a before/after sketch follows below.
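For illustration, here's roughly what that change amounts to (a sketch, assuming the llama-index v0.9.x import path; the key value is a placeholder, not the app's config):

```python
# Sketch of the fix (assumption: llama-index v0.9.x; placeholder key).
from llama_index.llms import OpenAI

# Before: api_key smuggled through additional_kwargs as a workaround for the
# old constructor bug. These kwargs are forwarded verbatim into
# chat.completions.create(), which now rejects them.
llm_before = OpenAI(model="gpt-3.5-turbo", additional_kwargs={"api_key": "sk-dummy"})

# After: the constructor parameter works again, so pass the key there and
# leave additional_kwargs for genuine completion parameters only.
llm_after = OpenAI(model="gpt-3.5-turbo", api_key="sk-dummy")
```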

vercel[bot] commented 9 months ago

The latest updates on your projects.

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| llama-app-frontend | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Nov 26, 2023 0:22am |
byamasu-patrick commented 9 months ago

Hey @sourabhdesai, thank you for this. I can confirm that the issue is fixed with the latest version of LlamaIndex. Good work, man.