continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains.
https://docs.continue.dev/
Apache License 2.0
19.11k stars · 1.64k forks

Save corrections to incorrect answers for future queries #464

Open cheuk-cheng opened 1 year ago

cheuk-cheng commented 1 year ago

Since no model can answer user queries correctly all the time, is it possible to save "corrections" to some database that can be consulted when answering similar queries in the future? For example, suppose a user asks "Find bugs in this Python code", a locally running model like codellama returns an incorrect answer, the user points out the mistake, and codellama confirms that it is indeed a mistake. Can this information be saved somehow so that similar questions in the future can draw on it and avoid repeating the same mistake? Sorry, I do not have technical knowledge of how models or GPT work internally.

sestinj commented 1 year ago

This isn't something we would get to in the next few weeks, however I do believe that this type of feedback will eventually be extremely important. What I could do is create an interface to give feedback, and then we could store this data locally so that once the time comes, it can be used to help the model give better responses.
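The idea under discussion could be prototyped as a small local store: append each correction to a JSONL file, then retrieve past corrections for similar queries with a simple keyword-overlap match. This is only an illustrative sketch (the `FeedbackStore` class, file name, and matching heuristic are all hypothetical, not Continue's implementation):

```python
import json
from pathlib import Path


class FeedbackStore:
    """Append user corrections to a local JSONL file and look up
    past corrections for similar queries by keyword overlap.
    Illustrative only -- not Continue's actual storage format."""

    def __init__(self, path="corrections.jsonl"):
        self.path = Path(path)

    def save(self, query, bad_answer, correction):
        # One JSON object per line keeps appends cheap and the file greppable.
        record = {"query": query, "bad_answer": bad_answer, "correction": correction}
        with self.path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    def lookup(self, query, min_overlap=2):
        """Return saved corrections whose queries share at least
        `min_overlap` words with the new query."""
        if not self.path.exists():
            return []
        words = set(query.lower().split())
        matches = []
        for line in self.path.read_text(encoding="utf-8").splitlines():
            record = json.loads(line)
            if len(words & set(record["query"].lower().split())) >= min_overlap:
                matches.append(record)
        return matches
```

A real system would likely replace the keyword overlap with embedding similarity, but the storage shape (query, bad answer, correction) is the core of the feature request.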

Do you find that when you get a poor response, you are typically willing to spend the time to give feedback?

cheuk-cheng commented 1 year ago

Thanks Nate. I do like to give feedback to the A.I. whenever I notice something in an answer that does not seem right. Sometimes I make a mistake in my follow-up queries/challenges, and the A.I. elaborates on them, which I find very helpful for understanding the original question and answer more deeply. I find that follow-up Q&As are important to the whole learning process, rather than blindly trusting and copying/pasting generated results; the A.I. and I try to rectify identified (and agreed-upon) mistakes. Usually I don't re-download local models, and IMHO, unlike paid services such as ChatGPT, local models don't get retrained and improved, so they are more likely to make the same mistakes in the future.

sestinj commented 1 year ago

Very cool—this is all useful feedback, and I think it boosts our confidence that this is a good path to pursue. I'll keep this issue open until we start looking into fine-tuning, and then I'll give an update.

And if we end up adding some UI even earlier for gathering feedback, I'll share that too.

cheuk-cheng commented 1 year ago

Thanks a lot Nate. It is not an urgent feature; I think most, if not all, "local/offline" llama tools lack this enhancement.

sestinj commented 10 months ago

@cheuk-cheng Recently we added the ability to "thumbs up/down" responses from the model, which will be stored locally in your ~/.continue/dev_data folder.
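Since the feedback data lives under `~/.continue/dev_data`, a user could inspect it with a few lines of Python. A minimal sketch, assuming the feedback is written as one JSON object per line (the `feedback.jsonl` file name here is a hypothetical placeholder, not a documented Continue file name):

```python
import json
from pathlib import Path

# Hypothetical path: the exact file name inside ~/.continue/dev_data may differ.
FEEDBACK_FILE = Path.home() / ".continue" / "dev_data" / "feedback.jsonl"


def load_feedback(path=FEEDBACK_FILE):
    """Parse one JSON object per line, skipping blank lines.
    Returns an empty list if the file does not exist yet."""
    if not path.exists():
        return []
    return [
        json.loads(line)
        for line in path.read_text(encoding="utf-8").splitlines()
        if line.strip()
    ]
```

Reading the raw records back like this is also roughly what a future fine-tuning pipeline would need to do first: collect the locally stored judgments before using them to improve responses.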