rosie-at-pieces opened 5 months ago
Pretty common question! Here are some thoughts from how I use things:
First, I would double-check that the context you provide is as tightly scoped as possible. If you provide a massive folder, this can (1) take a long time to parse and (2) make the search space so large that we have to guess what is and isn't important. Solution: try to provide a single file, or a folder with no more than 3-5 files.
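The scoping advice above can be sanity-checked with a small script. This is just an illustrative sketch (the `count_context_files` helper and the 5-file limit are assumptions based on the suggestion above, not part of any Pieces API) that counts files in a folder before you attach it as context:

```python
from pathlib import Path

def count_context_files(folder: str, limit: int = 5) -> tuple[int, bool]:
    """Count files under `folder` and report whether it fits the
    suggested 3-5 file budget for copilot context."""
    files = [p for p in Path(folder).rglob("*") if p.is_file()]
    return len(files), len(files) <= limit

# Example: check a folder before attaching it as context.
n, ok = count_context_files(".")
print(f"{n} files - {'OK to attach' if ok else 'too broad, narrow the scope'}")
```

If the check fails, pick a subfolder or a handful of specific files instead of the whole tree.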
If you are getting "I don't have the context for x, y, z", I would play around with a couple of things. (1) Adjust the way you are asking the question; different phrasings will typically yield different results on the backend, hopefully giving you the results you expect. (2) Take a look at the LLM runtime you are using; try a few different runtimes or models, as some will handle certain cases better than others.
I could probably think of a couple more, but here are a few for people to try if they are having trouble with context and asking about it within the Pieces Copilot, rather than just copy/pasting information directly into the user query.
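The "reword the question, try different runtimes" loop above can be made systematic. A minimal sketch, where `ask` is a hypothetical stand-in for whatever model call you actually use (it is not a Pieces API), comparing how each model handles each phrasing:

```python
def ask(model: str, prompt: str) -> str:
    """Hypothetical stub standing in for a real LLM/copilot call.
    Swap in your actual client; this just echoes its inputs."""
    return f"[{model}] answer to: {prompt}"

# Two phrasings of the same question, since wording affects results.
prompt_variants = [
    "Summarize what this file does.",
    "Walk me through this file's main flow step by step.",
]
models = ["model-a", "model-b"]  # e.g. a local runtime vs. a cloud one

# Try every combination, then keep the phrasing/model pair that works.
for model in models:
    for prompt in prompt_variants:
        print(model, "->", ask(model, prompt))
```

The point is simply to vary one thing at a time (phrasing, then model) so you can tell which change actually improved the answer.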
Adding to number 2 of what Mark mentioned, how the prompt is worded is very important. Some general tips for constructing a prompt that will accurately use the context:
We are pushing an update to the vision micro models that should resolve this! We think the issue is less about the prompt and more about doing better information extraction from the screenshots. (Assuming we are talking about the wpe here from the early access feature flag)
Software: Desktop Application
Operating System: macOS
Your Pieces OS Version: 9.0.1
Early Access Program
Kindly describe the bug and include as much detail as possible on what you were doing so we can reproduce the bug.
I've heard this from a couple of users - would love feedback from the community on this one. What works, what doesn't?