Open beatG123 opened 2 months ago
blogs/llm/enhancing_rag_with_graph.ipynb
I can use the `convert_to_graph_documents` method without any issues, but when I call the `process_response` method, it raises an error. The strange thing is that `convert_to_graph_documents` actually calls `process_response` internally. What could be the reason for this?
Not a solution, but I was getting the same error. I tried this workaround:

```python
from langchain_core.documents import Document

documents = [Document(page_content=f"{text}", metadata={"title": f"{file_path}"})]

print("final document")
print(documents[0].page_content)
len(text)
```
but now I am getting a new error:

```
AttributeError: 'tuple' object has no attribute 'page_content'
```
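For context (not from the thread): that error usually means a plain tuple ended up where a `Document` was expected, for example a `(text, metadata)` pair that was never wrapped. A minimal sketch of the failure mode and the fix; the `SimpleDocument` dataclass here is a hypothetical stand-in for `langchain_core.documents.Document`:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for langchain_core.documents.Document
@dataclass
class SimpleDocument:
    page_content: str
    metadata: dict = field(default_factory=dict)

# The failure mode: a (text, metadata) tuple is not a Document,
# so accessing .page_content on it raises AttributeError.
pair = ("Elon Musk is suing OpenAI", {"title": "news.pdf"})
assert not hasattr(pair, "page_content")

# The fix: wrap the raw pieces in a Document before passing them on.
doc = SimpleDocument(page_content=pair[0], metadata=pair[1])
assert doc.page_content == "Elon Musk is suing OpenAI"
```

Checking the type of each item in your `documents` list before calling the transformer should pinpoint where the tuple sneaks in.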
Hoping to hear back soon on what to do here
@danielosagie I need more information and code
Here is the notebook: https://github.com/danielosagie/GRAG/blob/965281c0ac8e803432b6ffa78f026eea7a5c09a2/GRAG_Langchain/enhancing_rag_with_graph%20(1).ipynb. I am trying to use a Hugging Face Inference API for Llama 3 on a local server, and I had to swap out the PDF loader because it wasn't working for me, but everything else is the same as you wrote it. At `In [136]` I finally pass the documents in to get converted, but it says I don't have anything in the object. I tried breaking it down and using LangChain's `Document` class, but it hasn't worked. I feel like I am so close but I just don't know what to do; any advice would be appreciated.
You want to increase `max_new_tokens`... other than that, I don't know if there is anything specific for the HF settings.
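For reference (not from the thread): with `langchain_community`'s `HuggingFaceEndpoint`, raising the token limit looks roughly like this config sketch. The endpoint URL and the exact value of `max_new_tokens` are placeholders, not values from this conversation:

```python
from langchain_community.llms import HuggingFaceEndpoint

# Hypothetical local inference server URL; max_new_tokens is raised so the
# graph-extraction output is not truncated mid-generation.
llm = HuggingFaceEndpoint(
    endpoint_url="http://localhost:8080",
    max_new_tokens=2048,
    temperature=0.0,
)
```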
Hey there, I tried increasing the limit and eventually got the same error.
I noticed that it only works on the very beginning of the document (maybe the first chunk) and then just spits out the template until it reaches the token limit.
Your example in the enhancing-rag notebook was fairly short, so I am wondering how you would handle multiple documents, or even just one really long document with multiple chunks?
Exactly, you should split a long document into multiple text chunks
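A minimal sketch of such chunking. This naive character-based splitter is a stand-in for LangChain's `RecursiveCharacterTextSplitter`; the function name and the chunk sizes are illustrative, not from the notebook:

```python
def split_into_chunks(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks of at most chunk_size characters."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Each chunk would then be wrapped in its own Document and passed to
# convert_to_graph_documents, instead of one huge Document.
chunks = split_into_chunks("word " * 300, chunk_size=500, overlap=50)
```

In the notebook this corresponds to splitting the text before calling `convert_to_graph_documents`, for example with `RecursiveCharacterTextSplitter(chunk_size=..., chunk_overlap=...)`.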
There are some problems when using `process_response` and `convert_to_graph_documents`:

```
AttributeError: 'NoneType' object has no attribute 'nodes'
```
With `process_response`:

```python
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0125")  # gpt-4-0125-preview occasionally has issues
llm_transformer = LLMGraphTransformer(llm=llm)
document = Document(page_content="Elon Musk is suing OpenAI")
print(document)
graph_document = llm_transformer.process_response(document)
```

and with `convert_to_graph_documents`:

```python
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0125")  # gpt-4-0125-preview occasionally has issues
llm_transformer = LLMGraphTransformer(llm=llm)
document = Document(page_content="Elon Musk is suing OpenAI")
print(document)
graph_documents = llm_transformer.convert_to_graph_documents([document])
graph.add_graph_documents(
    graph_documents,
    baseEntityLabel=True,
    include_source=True,
)
```
Who can help me?