tomasonjo / blogs

Jupyter notebooks that support my graph data science blog posts at https://bratanic-tomaz.medium.com/

AttributeError: 'NoneType' object has no attribute 'nodes' #20

Open beatG123 opened 2 months ago

beatG123 commented 2 months ago

There are some problems when using process_response and convert_to_graph_documents: AttributeError: 'NoneType' object has no attribute 'nodes'.

```python
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0125")  # gpt-4-0125-preview occasionally has issues
llm_transformer = LLMGraphTransformer(llm=llm)
document = Document(page_content="Elon Musk is suing OpenAI")
print(document)
graph_document = llm_transformer.process_response(document)
```

and

```python
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0125")  # gpt-4-0125-preview occasionally has issues
llm_transformer = LLMGraphTransformer(llm=llm)
document = Document(page_content="Elon Musk is suing OpenAI")
print(document)
graph_documents = llm_transformer.convert_to_graph_documents([document])
graph.add_graph_documents(
    graph_documents,
    baseEntityLabel=True,
    include_source=True,
)
```

Who can help me?
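A minimal guard that at least surfaces which document fails (a sketch; the assumption is that the model occasionally returns output the transformer cannot parse into a graph document):

```python
# Guard against unparseable model output; catching AttributeError here is an
# assumption about where the NoneType error originates, not a confirmed fix.
graph_document = None
try:
    graph_document = llm_transformer.process_response(document)
except AttributeError as e:
    print(f"Graph extraction failed for this document: {e}")

if graph_document is not None:
    print(graph_document.nodes)
    print(graph_document.relationships)
```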

beatG123 commented 2 months ago

blogs/llm/enhancing_rag_with_graph.ipynb

I can use the convert_to_graph_documents method without any issues, but when I call the process_response method, it raises this error. The strange thing is that convert_to_graph_documents actually calls process_response internally. What could be the reason for this?
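For reference, the relationship between the two methods is roughly this (a simplified sketch of the langchain_experimental source at the time, not the exact code):

```python
# Simplified sketch: convert_to_graph_documents just maps process_response
# over the input documents (assumption based on the version in use).
def convert_to_graph_documents(self, documents):
    return [self.process_response(document) for document in documents]
```

So if process_response raises for a given document, convert_to_graph_documents should too; seeing different behaviour on the same input usually points at a non-deterministic model response rather than at the two code paths differing.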

danielosagie commented 1 month ago

Not a solution, but I was getting the same error. For now I tried this workaround:

```python
from langchain_core.documents import Document

# Wrap the raw text in a Document, keeping the file path as metadata
documents = [Document(page_content=f"{text}", metadata={"title": f"{file_path}"})]

print("")
print("final document")
print(documents[0].page_content)

len(text)
```

But now I am getting a new error: AttributeError: 'tuple' object has no attribute 'page_content'.
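From what I can tell, that error means a tuple ended up where a Document was expected. One way I can reproduce it (an illustrative guess at my own bug, not a diagnosis of the notebook) is a stray trailing comma:

```python
from langchain_core.documents import Document

# The trailing comma makes `doc` a 1-tuple, not a Document
doc = Document(page_content="some text"),
# doc.page_content would then raise:
# AttributeError: 'tuple' object has no attribute 'page_content'

doc = Document(page_content="some text")  # without the comma it is a Document
print(doc.page_content)
```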

Hoping to hear back soon on what to do here

tomasonjo commented 1 month ago

@danielosagie I need more information and code

danielosagie commented 1 month ago

Here is the notebook: https://github.com/danielosagie/GRAG/blob/965281c0ac8e803432b6ffa78f026eea7a5c09a2/GRAG_Langchain/enhancing_rag_with_graph%20(1).ipynb. I am trying to use a Hugging Face Inference API for Llama 3 on a local server, and I had to swap out the PDF loader because it wasn't working for me, but everything else is the same as you wrote it. At In [136] I am finally putting the documents in to get converted, but it says I don't have anything in the object. I tried breaking it down and using LangChain's Document class, but it hasn't worked. I feel like I am so close but I just don't know what to do; any advice would be appreciated.

tomasonjo commented 1 month ago

You want to increase max new tokens... other than that, I don't know if there is anything specific to the HF settings.
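With LangChain's Hugging Face endpoint wrapper, that would look roughly like this (the repo_id and limit are placeholders, not values from this thread):

```python
from langchain_huggingface import HuggingFaceEndpoint

# Hypothetical settings: raise max_new_tokens so long extraction
# outputs are not truncated mid-generation.
llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    max_new_tokens=2048,
)
```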


danielosagie commented 1 month ago

Hey there, I tried increasing the limit and eventually got the same error.

I noticed that it only works on the very beginning of the document (like maybe the first chunk) and then just spits out the template until it reaches the token limit.

Your example was sort of short in the enhancing-rag notebook, so I am wondering how you would handle multiple documents or even just one really long document with multiple chunks?

tomasonjo commented 1 month ago

Exactly, you should split a long document into multiple text chunks
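A minimal sketch of that, using the token splitter from the blog notebook (the chunk sizes are illustrative; tune them to your model's context window):

```python
from langchain_text_splitters import TokenTextSplitter

# Split long input into overlapping token-based chunks
text_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=24)
documents = text_splitter.split_documents(raw_documents)

# Each chunk is now its own Document, so the transformer makes one
# LLM call per manageable piece of text
graph_documents = llm_transformer.convert_to_graph_documents(documents)
```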
