When we ask the bot a question like "What does Alice talk about?", it can generally be answered by taking the results of a Cypher query and running them through an LLM:
```cypher
MATCH (u:User {name: 'Alice'})-[:SENT]->(m:Message)
RETURN m.content AS message
```
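Roughly, that flow looks like this. This is a minimal sketch, assuming the official Python `neo4j` driver and the classic LangChain `ChatOpenAI` wrapper; the connection URI, credentials, and `fetch_messages` helper are placeholders, not code from this repo:

```python
from neo4j import GraphDatabase
from langchain.chat_models import ChatOpenAI

# Placeholder connection details -- adjust for the real deployment.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def fetch_messages(name: str) -> list[str]:
    """Pull every message a user has sent out of the graph."""
    query = (
        "MATCH (u:User {name: $name})-[:SENT]->(m:Message) "
        "RETURN m.content AS message"
    )
    with driver.session() as session:
        return [record["message"] for record in session.run(query, name=name)]

llm = ChatOpenAI(model_name="gpt-3.5-turbo")
messages = fetch_messages("Alice")

# Naive approach: stuff everything into one prompt. This is exactly
# what breaks once the message history exceeds the context window.
prompt = "What does Alice talk about?\n\n" + "\n".join(messages)
print(llm.predict(prompt))
```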
The obvious problem with this is token limits. GPT-3.5 has a 4,096-token context window; other models go higher but cost more per call. So we need to chunk the query results, summarize each chunk, and then summarize the summaries. I think LangChain has built-in mechanisms for this, just need to figure that all out.
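LangChain's `load_summarize_chain` with `chain_type="map_reduce"` looks like the built-in mechanism for exactly this pattern. A sketch under some assumptions: `messages` is the list returned by the Cypher query above, and the chunk size is an arbitrary guess, not a tuned value:

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter

llm = ChatOpenAI(model_name="gpt-3.5-turbo")

# Split the joined message history into chunks that fit comfortably
# under the 4,096-token window (chunk_size is in characters, so this
# stays well clear of the limit).
splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=200)
docs = splitter.create_documents(["\n".join(messages)])

# map_reduce summarizes each chunk independently ("map"), then
# summarizes the per-chunk summaries into one final answer ("reduce").
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(docs))
```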