Open pranavbhat12 opened 1 year ago
Hi @pranavbhat12 !
This sounds like "fact-checking against the history of the conversation". Can you provide more details? Have you made any additional progress? I think the current fact-checking rail can be adapted, but I need to understand your use case in a bit more detail.
Hey,
I have a conversation text in which multiple users' details are discussed. For example, there is a discussion about who John is, where he lives, what his age is, and so on, and the same kind of details come up for several users. With my LLM I get a structured JSON response containing each user's attributes (name, place, age, gender). The issue is that if I ask it to retrieve details for Ram in a text where only John and Riya are discussed, the LLM still confidently returns the name as Ram. Is there any way to implement a grounding check so that only the particular attribute values that are not present in the conversation are returned as N/A, rather than the whole response being returned as N/A? See the sketch below for the kind of per-attribute behavior I mean.
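To make the desired behavior concrete, here is a minimal sketch in plain Python of a per-attribute grounding check applied to the LLM's JSON output. This is not the NeMo Guardrails fact-checking rail itself, just an illustration under assumptions: the names `ground_attributes`, `conversation`, and `llm_output` are hypothetical placeholders, and the naive substring test stands in for a proper per-attribute entailment check.

```python
import json

def ground_attributes(conversation: str, extracted: dict) -> dict:
    """Replace extracted attribute values with no support in the conversation by 'N/A'."""
    grounded = {}
    haystack = conversation.lower()
    for key, value in extracted.items():
        # Naive substring check for illustration only; a real rail would run an
        # LLM- or NLI-based entailment check per attribute instead of exact matching.
        if value is not None and str(value).lower() in haystack:
            grounded[key] = value
        else:
            grounded[key] = "N/A"
    return grounded

# Hypothetical example: only John and Riya are discussed, but Ram was requested.
conversation = "John lives in Pune. John is 30 years old. Riya lives in Delhi."
llm_output = {"name": "Ram", "place": "Pune", "age": 30, "gender": "male"}

print(json.dumps(ground_attributes(conversation, llm_output), indent=2))
# "name" and "gender" become "N/A"; "place" and "age" are kept because they
# appear verbatim in the conversation (which also shows the limitation of a
# purely lexical check: "Pune" belongs to John, not Ram).
```

The point is that the check runs field by field, so only unsupported values are reverted to N/A instead of the whole JSON response being discarded.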