@iwilltry42 are you working on this?
@cjellick yep
This PR changes the instructions the LLM uses to evaluate the user input: https://github.com/gptscript-ai/knowledge/pull/104
It served me very well during my testing, as I briefly explained in this comment: https://github.com/gptscript-ai/knowledge/pull/104#issuecomment-2333378598
The change is available in the release https://github.com/gptscript-ai/knowledge/releases/tag/v0.4.13-dev.0
You just have to change this line to test it in the desktop app: https://github.com/gptscript-ai/desktop/blob/50bdc85359f8085d468bf234f2a2d5489b070cf6/actions/knowledge/util.ts#L2
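For reference, a minimal sketch of what that one-line change might look like, assuming the line in actions/knowledge/util.ts pins the knowledge tool to a versioned reference (the constant name KNOWLEDGE_TOOL and the exact reference format are assumptions for illustration, not the actual file contents):

```ts
// actions/knowledge/util.ts -- hypothetical sketch, not the real file contents.
// Assumption: line 2 pins the knowledge tool to a versioned GitHub reference.

// Before (pointing at the previously used tag):
// export const KNOWLEDGE_TOOL = 'github.com/gptscript-ai/knowledge@<previous tag>';

// After (pointing at the dev release that contains the new evaluation instructions):
export const KNOWLEDGE_TOOL = 'github.com/gptscript-ai/knowledge@v0.4.13-dev.0';
```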
Tested with b6c00b5330e
How did Great Wanamaker get his Business Started?
The information that gets presented is not from the knowledge file; the output from KnowledgeInstructions was empty and the KnowledgeRetrieval tool was not called.
When asked the same question again - How did Great Wanamaker get his Business Started? - the output from KnowledgeInstructions was empty, the KnowledgeRetrieval tool was called, and it returned information.
Tested with Electron build 9354f78ee
Tested the following scenario:
How did Great Wanamaker get his Business Started?
I see the response being presented from the knowledge file as expected. But I now see 2 calls to load context from Knowledge Retrieval Context: one returns an empty message and the other returns contents.
Stack trace calls - stacktrace1000ways.json
provide citation
When asked this, a call is being made to the knowledge tool to retrieve the information. Stack trace: stacktrace1000wayscitation.json
Reopening this issue to see if the following is expected:
Regarding https://github.com/gptscript-ai/desktop/issues/402#issuecomment-2349521501 - the issue mentioned with 2 calls to load the context from Knowledge Retrieval Context relates to my assistant having 2 knowledge tools. This happened when I had an assistant with a knowledge file, cleared the desktop cache (rm -rf ~/Library/Application\ Support/acorn), and then added files to the same assistant. We don't expect this to be a normal workflow. I will follow up on this in a different thread.
I will also log a different issue to track the behavior seen with citations, which @iwilltry42 confirmed is expected behavior with the current implementation.
When testing the original issue reported in this bug, I hit https://github.com/gptscript-ai/desktop/issues/491. But I believe the file relating to parental leave was ingested correctly in this case.
What's our parental leave policy?
The correct information gets returned from the knowledge file. But I see multiple tool calls being made to the KnowledgeRetrieval tool and to load context from Knowledge Retrieval Context. (In this case I made sure I have only 1 knowledge tool in my assistant.)
Stack Trace Logs - https://acorn-io.slack.com/archives/C07FZ46QA2J/p1726254347049329?thread_ts=1726253064.690559&cid=C07FZ46QA2J
According to this thread, this is expected behavior. There's potentially more to inspect (empty input on some of the tool calls), but that can be tracked in another issue, if necessary.
@sangee2004 this one is done, since the multiple tool calls are expected (see Slack discussion), right?
Closing this issue.
The knowledge tool didn't work properly: the context injection came back essentially empty, so the LLM had to call the knowledge tool directly when it should not have needed to. It ultimately got the exact right answer from calling the knowledge tool directly, so I know the data is in there.
Here's the context-based retrieval coming up empty.
Here's the direct call to knowledge later on that works as expected.