Closed: maglore9900 closed this issue 10 hours ago
Hi, the recommended approach is to use the `fact` value on the edge, and potentially enhance it with the temporal information in the edge's `created_at`, `expired_at`, `valid_at`, and `invalid_at` fields.
Thank you for the quick response. That makes sense. Is there a standard method of parsing this data? It's one element; treating it as a string, splitting on ",", and then iterating through the elements seems cumbersome. Just want to make sure I'm not missing something really obvious. Ha.
No standard method for this yet unfortunately, but we may add utilities for it in the future.
How you want to represent the input to your LLM call is very user-specific, so we wanted to keep the options as open as possible; that's why we return the full edge objects.
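Since the full edge objects are returned, one way to turn a fact plus its temporal fields into a single prompt line can be sketched as follows. The `FactEdge` class and `format_fact` helper here are hypothetical stand-ins, not part of Graphiti's API; only the field names come from the comment above.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical stand-in for Graphiti's edge objects; only the fields
# mentioned above (fact, valid_at, invalid_at) are modeled.
@dataclass
class FactEdge:
    fact: str
    valid_at: Optional[datetime] = None
    invalid_at: Optional[datetime] = None

def format_fact(edge: FactEdge) -> str:
    """Render an edge's fact plus its validity window as one prompt line."""
    line = edge.fact
    if edge.valid_at is not None:
        until = f"{edge.invalid_at:%Y-%m-%d}" if edge.invalid_at else "present"
        line += f" (valid {edge.valid_at:%Y-%m-%d} to {until})"
    return line

edge = FactEdge(
    fact="Kamala Harris was California Attorney General",
    valid_at=datetime(2011, 1, 3),
    invalid_at=datetime(2017, 1, 3),
)
print(format_fact(edge))
# prints: Kamala Harris was California Attorney General (valid 2011-01-03 to 2017-01-03)
```

Lines like this can then be joined and passed as context to the LLM call.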
Hey! If you use the `graphiti.search()` method you will get a list of `EntityEdges` back. You can find the class definition in the `edges.py` file. Note that it inherits from the `Edges` base class, so it will have those values as well. When looking at the code I noticed we are missing the type hint on the return value, so I will add that to make it more clear to people reading the code.
If you use the `graphiti_search()` method we return a `SearchResults` object, which has `searchResults.nodes`, `searchResults.edges`, and `searchResults.communities` fields. These return lists of the respective objects, which you can control-click into to see the class definitions.
For a quick answer though: `edge.fact` and `edge.name` are what you will mostly care about for edges, and `node.name` and `node.summary` for nodes.
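To make those shapes concrete, here is a minimal sketch using stand-in classes. The real `EntityNode`/`EntityEdge`/`SearchResults` classes live in Graphiti's source; only the field names below (`fact`, `name`, `summary`, and the three list fields) come from this thread, and the sample data is invented.

```python
from dataclasses import dataclass, field

# Illustrative stand-ins for Graphiti's result objects; only the fields
# discussed in this thread are modeled.
@dataclass
class Node:
    name: str
    summary: str

@dataclass
class Edge:
    name: str
    fact: str

@dataclass
class SearchResults:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)
    communities: list = field(default_factory=list)

results = SearchResults(
    nodes=[Node("Kamala Harris", "Politician; served as California Attorney General.")],
    edges=[Edge("HELD_OFFICE", "Kamala Harris was California Attorney General")],
)

# The fields you will mostly care about:
facts = [edge.fact for edge in results.edges]
summaries = [node.summary for node in results.nodes]
print(facts)
print(summaries)
```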
Roger, thank you so much. I would recommend a structured object of some kind, like JSON. Great! This I can work with. Thank you!
@maglore9900 As @prasmussen15 mentioned, Graphiti returns objects from those method calls: a `SearchResults` object containing:

- `nodes`, a list of `EntityNode` objects
- `edges`, a list of `EntityEdge` objects
- `communities`, a list of `CommunityNode` objects

These all subclass the pydantic `BaseModel` and can be serialized using the `model_dump_json` method.
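Because the real result objects are pydantic models, you would call `results.model_dump_json()` on them directly. As a dependency-free sketch of the same round-trip, the stdlib `dataclasses`/`json` equivalent looks like this (the `Edge` class and its data are illustrative):

```python
import json
from dataclasses import dataclass, asdict

# Stand-in for a pydantic model; asdict + json.dumps plays the role
# that model_dump_json() plays on the real Graphiti objects.
@dataclass
class Edge:
    name: str
    fact: str

edge = Edge(name="HELD_OFFICE", fact="Kamala Harris was California Attorney General")
payload = json.dumps(asdict(edge))   # ~ edge.model_dump_json()
restored = json.loads(payload)       # plain dict, easy to parse
print(restored["fact"])
# prints: Kamala Harris was California Attorney General
```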
`model_dump()` is exactly what I'm looking for. Thank you!
I wanted to add info for anyone that finds this later. I take the result from the graphiti call and call `model_dump()` on it, which turns it into a plain (JSON-compatible) dict, and then I can parse it normally. Like so:

```python
result = await graphiti._search('Who was the California Attorney General?', COMBINED_HYBRID_SEARCH_RRF)
response = result.model_dump()
```
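From there, parsing the `model_dump()` output is ordinary dict/list traversal. The keys in this sketch are an assumption based on the fields discussed in this thread, not a guaranteed schema; check them against your own output first.

```python
# Assumed shape of response = result.model_dump(); the data is invented.
response = {
    "edges": [
        {"name": "HELD_OFFICE",
         "fact": "Kamala Harris was California Attorney General"},
    ],
    "nodes": [
        {"name": "Kamala Harris", "summary": "Politician."},
    ],
}

# Pull out the facts, tolerating a missing "edges" key.
facts = [edge["fact"] for edge in response.get("edges", [])]
print(facts)
# prints: ['Kamala Harris was California Attorney General']
```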
I love the concept behind this project and I am exploring the docs and testing. One question that remains: when I use the example code, the response to 'Who was the California Attorney General?' is a giant block of data. I see the elements and the embedding, but if I wanted to parse or use this data, how would I?
Is the assumption that I pass this entire response to an LLM along with my original query and then get the final answer?
Thanks