improvekit opened 6 months ago
Unfortunately, LLMs like ChatGPT are still far from perfect; quality can vary, and outputs are non-deterministic. In this case, the model seems to struggle to follow this precise instruction: "Otherwise, in the event that none of the possibly relevant contents is helpful for the query at all, say something like 'Sorry, I am unable to answer this question'." There are two things that you can try:
The system prompt is defined in `SemanticCorpus>>#newConversationForQuery:systemMessage:`. I have not put a lot of effort into systematically tuning that prompt. You can try out some variations by rephrasing the instructions related to unhelpful contents. Some tips that I believe have worked for me in the past: explain the reasoning behind an instruction, avoid negations and nested sentences, clearly structure long prompts, and use caps lock for critical instructions (e.g., "you MUST NOT"). Unfortunately, without your concrete data, I cannot investigate this myself. If you find a better prompt, please let me know!
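To illustrate those tips, here is a hypothetical rephrasing of the fallback instruction, sketched as a Python string. The wording is my own suggestion, not the prompt actually shipped with SemanticText:

```python
# Hypothetical rewrite of the fallback instruction, applying the tips above:
# state the reason first, avoid negations and nested clauses, and use caps
# lock for the critical part. This is a suggestion, not SemanticText's
# actual prompt.
fallback_instruction = (
    "Your answers MUST be grounded in the provided contents, because the "
    "user relies on you for accurate information from their corpus. "
    "If every provided content is irrelevant to the query, reply exactly: "
    "'Sorry, I am unable to answer this question.'"
)

print(fallback_instruction)
```

Splitting the grounding rule and the fallback rule into two short sentences keeps each instruction free of nesting, which in my experience makes models follow them more reliably.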
I will try (2)... Perhaps what I need is to use the Assistants API to feed the model with my own specific domain. Do you have a plan to support this API in the future?
Good luck with your prompt engineering! And I would be delighted if you could keep me up to date. Generally, I don't believe this should be too hard; ChatGPT might just be misinterpreting that single instruction. In other situations, hallucinations might be a larger problem. Sorry I can't give you the perfect prompt. :-)
So far I have not considered the Assistants API for two reasons: limited control and pricing. Limited control, because you cannot store conversations locally or define the context directly (the entire retrieval part is outsourced to the API). Pricing, because the current version of the Assistants API is said to use the context window very generously, causing some single requests to cost more than $1. Also, this would involve synchronizing your knowledge data to the API. But if control and pricing are no issues for you, this API might suit your needs. Technically speaking, it should not be too complex to support these APIs in SemanticText; possibly another subclass of `OpenAIModel` that accesses a conversation and stores the necessary metadata in it. If you'd like to submit a PR, maybe let's talk about the necessary design changes first. :-)
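To make the pricing concern concrete, here is a back-of-the-envelope estimate. Both numbers are my assumptions, not figures from this thread: GPT-4 Turbo list pricing of $10 per million input tokens, and a run that fills the full 128k-token context window:

```python
# Rough cost estimate for a single Assistants API run that fills the whole
# context window. Pricing and context size are assumptions (GPT-4 Turbo
# list price at the time: $10 per 1M input tokens, 128k-token context).
price_per_million_input_tokens = 10.00  # USD, assumed
context_window_tokens = 128_000         # assumed model context size

cost_per_request = (context_window_tokens / 1_000_000) * price_per_million_input_tokens
print(f"~${cost_per_request:.2f} per request")  # → ~$1.28 per request
```

So a retrieval step that packs the context full of corpus chunks can indeed push a single request past the $1 mark, before output tokens are even counted.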
When I open two chats from two searches (which I inspect, using the same corpus), ChatGPT says "Sorry, I am unable to answer this question" in one of the chats, while in the other it answers correctly (in the search that found the content).
Reported from Squeak6.0 of 5 July 2022 update 22104 (VM: 202206021410 runner@Mac-1654183989075.local:work/opensmalltalk-vm/opensmalltalk-vm Date: Thu Jun 2 16:10:44 2022 CommitHash: c9fd365 Plugins: 202206021410 runner@Mac-1654183989075.local:work/opensmalltalk-vm/opensmalltalk-vm), for version of SemanticText last updated at 21 February 2024 11:27 am.