run-llama / llama_index

LlamaIndex is a data framework for your LLM applications

[Bug]: #9405

Closed · kindler-king closed this issue 10 months ago

kindler-king commented 11 months ago

Bug Description

Until 4th December, 2023, my app worked perfectly fine. However, after I restarted my Caprover server on AWS on 5th December, 2023 to solve a deployment issue, the service using LlamaIndex broke down with ERROR: Exception in ASGI application. The code runs up to just before the highlighted part and then fails there for some reason.

This works when running locally, but whenever I run it through my app, I get this ASGI exception in the logs. I would love to hear about any possible solutions, as my app has been down for 4 days now due to this issue.

Version

0.7.4

Steps to Reproduce

[Screenshot: Capture]

Relevant Logs/Tracebacks

No response

dosubot[bot] commented 11 months ago

🤖

Hello @kindler-king! 🖐️

I'm Dosu, an AI assistant here to lend a hand while we wait for a human maintainer. I can help troubleshoot bugs, answer queries, and guide you on your journey to becoming a contributor.

Just to set expectations, while I strive to be as helpful as possible, I'm not a human (and certainly not a comedian, although I try 🤖😉).

I'm taking a look at the issue you've reported and will be back with a more comprehensive response soon. Sit tight!

logan-markewich commented 11 months ago

@kindler-king does the issue happen on v0.7.4? Or did you update llama-index and it then stopped working?

Which line exactly is breaking? It's not quite clear. Do you know what the underlying error/traceback is, or do you only know which line breaks?
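
(For what it's worth, "Exception in ASGI application" is just the generic wrapper uvicorn prints around whatever actually threw. A minimal sketch of one way to surface the real traceback — this assumes your service is a FastAPI/Starlette app served by uvicorn, which the issue doesn't confirm:)

```python
# Hypothetical middleware to log the full traceback behind "Exception in ASGI application".
import logging
import traceback

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()
logger = logging.getLogger("uvicorn.error")

@app.middleware("http")
async def log_unhandled_exceptions(request: Request, call_next):
    try:
        return await call_next(request)
    except Exception:
        # Print the underlying traceback instead of only uvicorn's generic error line.
        logger.error("Unhandled exception on %s:\n%s", request.url.path, traceback.format_exc())
        return JSONResponse(status_code=500, content={"detail": "internal server error"})
```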

kindler-king commented 11 months ago

Hello @logan-markewich, this issue happens on v0.7.4. I have not updated llama-index to any newer version yet. The issue started when I restarted the Caprover instance where the LlamaIndex code is hosted, to troubleshoot a frozen deployment.

The last line, where we want to insert the nodes into the index, is the one breaking with an ASGI application exception. When I run this code snippet on my local instance, it works flawlessly and the ASGI exception isn't raised. However, when I run it through my app, the error is raised.

The request sent from my app reaches that line, but the node insertion raises the exception and fails.

logan-markewich commented 11 months ago

Maybe something is wrong with node text or metadata?

When you run locally, is it with the same nodes that the server is using?
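
(One quick way to rule the nodes out: sanity-check their text and metadata before inserting. A sketch, assuming 0.9.x-style `TextNode` objects — adjust the imports for the version you're actually on:)

```python
# Hypothetical pre-insert sanity check for node text and metadata.
import json

from llama_index.schema import TextNode

def check_nodes(nodes: list[TextNode]) -> None:
    for i, node in enumerate(nodes):
        text = node.get_content()
        # Empty or non-string text is a common cause of insert failures.
        assert isinstance(text, str) and text.strip(), f"node {i} has empty/non-string text"
        # Metadata must be JSON-serializable or some vector stores will choke on it.
        json.dumps(node.metadata)
```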

kindler-king commented 11 months ago

Yeah @logan-markewich, I am sure it's independent of the node text. And yeah, it's the same set of nodes both on the server and locally.

Do you think there's any relation to using an older version of llama-index? I can update the codebase to the latest version; it's just weird that it works when I test locally but doesn't work when I deploy.

Do let me know your thoughts.

logan-markewich commented 11 months ago

Hmmm, yeah, this is super tricky. Without an actual traceback to follow, or a way to reproduce locally, this will be extremely hard to debug.

And that sucks, because going from 0.7.x to 0.9.x will probably require a few (minor) changes.
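
(For reference, a rough sketch of the import/class renames involved in that jump — not exhaustive, and the exact old-style paths here are from memory, so treat them as an approximation:)

```python
# 0.7.x style (approximate):
#   from llama_index import GPTVectorStoreIndex
#   from llama_index.data_structs.node import Node, DocumentRelationship
#
# 0.9.x style:
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.llms import OpenAI
from llama_index.schema import TextNode, NodeRelationship, RelatedNodeInfo

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
service_context = ServiceContext.from_defaults(llm=llm)
index = VectorStoreIndex([], service_context=service_context)
```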

kindler-king commented 11 months ago

Hey @logan-markewich, apologies for the late update. After lots of back and forth, I updated to the latest llama-index version and the latest openai version.

```python
if llama:
    ### CHANGING STUFF HERE TO SEND EXPANDED_UNITS TO LLAMAINDEX
    print("####### entering llama ###########")
    num_units = len(expanded_units)
    nodes = []
    for i in range(num_units):
        unit = expanded_units[i]
        text = unit[2]
        url = unit[4]
        url = self.filename_to_url(url)
        metadata = {"title": url}
        node = Node(text=text, id=i, metadata=metadata)
        nodes.append(node)
    print("reached line 979")
    node_num = 0

    # Link consecutive nodes with NEXT/PREVIOUS relationships
    while node_num < num_units - 1:
        current_node = nodes[node_num]
        next_node = nodes[node_num + 1]

        # Old (pre-0.9) style:
        # current_node.relationships[DocumentRelationship.NEXT] = next_node.get_doc_id()
        # next_node.relationships[DocumentRelationship.PREVIOUS] = current_node.get_doc_id()

        current_node.relationships[NodeRelationship.NEXT] = RelatedNodeInfo(node_id=next_node.node_id)
        next_node.relationships[NodeRelationship.PREVIOUS] = RelatedNodeInfo(node_id=current_node.node_id)

        node_num += 1

    print("reached line 992")
    ### index = GPTVectorStoreIndex([])

    index = VectorStoreIndex([])
    print("reached line 996")
    ### temporarily commenting out
    index.insert_nodes(nodes)
    print("reached line 998")
    from llama_index.query_engine import RetrieverQueryEngine
    print("reached line 1004")

    ### llm = OpenAI(temperature=0, model="gpt-4", max_input_size=4096, num_output=256, max_overlap=20)
    llm = OpenAI(model='gpt-3.5-turbo', temperature=0, max_tokens=3000)  # <-- failing line
    print("reached line 1014")
```
As you can see in the attached code, the new error I am getting is on the following line: `llm = OpenAI(model='gpt-3.5-turbo', temperature=0, max_tokens=3000)`

[Screenshot from 2023-12-16 08-02-57]

For some reason, whenever I pass any parameters as in my code, it throws this weird error. I have run this piece of code independently of my codebase, and the OpenAI line doesn't throw any error there. But inside my app it throws this weird error and won't accept any parameters given to it.

Any leads on what could be causing this weird bug?
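
(Side note on the snippet above: on recent 0.9.x releases the node class is `TextNode` and its id field is `id_`, so a construction equivalent to the `Node(...)` call would look roughly like this:)

```python
from llama_index.schema import TextNode

# Hypothetical 0.9.x equivalent of the Node(...) construction above.
node = TextNode(text=text, id_=str(i), metadata={"title": url})
```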

logan-markewich commented 11 months ago

Did you import OpenAI from llama index?

```python
from llama_index.llms import OpenAI
```
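
(If the answer is no, that would explain the error: the raw `openai` v1 client and LlamaIndex's LLM wrapper share the name `OpenAI` but take completely different constructor arguments. A sketch of the difference — the exact `TypeError` wording is an assumption:)

```python
from openai import OpenAI as OpenAIClient  # raw openai v1 API client

# OpenAIClient(model="gpt-3.5-turbo", temperature=0, max_tokens=3000)
# -> TypeError: __init__() got an unexpected keyword argument 'model'

from llama_index.llms import OpenAI  # LlamaIndex's LLM wrapper

llm = OpenAI(model="gpt-3.5-turbo", temperature=0, max_tokens=3000)  # accepts these kwargs
```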